It’s great to see that communication science and media studies are tackling IoT and human-computer interfaces as a field of research. I was impressed with the level of thinking and questions from the group. The discussion was lively and on point, and none of the obvious questions came up. Instead, the students probed the genuinely complex issues surrounding IoT, AI, and algorithmic decision-making in the context of communications and communication science.
It’s part of the master program, and of Prof. Engesser’s new role as professor there, to also set up a lab to study how smart home assistants and other voice-enabled connected devices impact the way we communicate at home—both with other people and with machines.
It’ll be interesting to watch the lab’s progress and findings, and I hope we’ll find ways to collaborate on some of these questions.
AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.
This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.
For the record: In other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street—and regulates them well! Key points include that autonomous driving is worth promoting because it causes fewer accidents; that damage to property must take precedence over personal injury (in other words, human life has priority); and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!
For the Internet of Things (IoT), on the other hand, Germany has squandered opportunities: IoT is framed almost exclusively as industrial IoT under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of interesting and promising IoT. It’s clearly the result of successful lobbying, but at the expense of a more inclusive, diverse portfolio of opportunities.
So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.
Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (“learning systems platform”). Thanks to Christian Katzenbach for the pointer!
AI & the future of work
The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: It’s one of the key aspects of AI. (Some others are, I’d say, the opportunity to create wealth on one side and algorithmic discrimination on the other.)
How AI will impact the work force, the economy, and the role of the individual is something we can only speculate about today.
In a recent workshop on the future of work with scholarship holders of the Heinrich Böll Foundation, we explored how digital, AI, IoT, and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field have also been finding: There are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.
The one thing I’d say is a safe bet is this: Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.
Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: Future jobs will either be placed above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course calls to mind images of roboticized warehouses, like this one:
Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future, this is the well-established present.
I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.
Education is key
So where are we in Europe again? In Germany, we are still trying to define what IoT and AI mean. In China, it’s been happening for years.
This picture shows a smart lamp in Shenzhen that we found in a maker space:
What does the lamp do? It detects whether users are nearby, so it can switch itself off when nobody’s around. It automatically adjusts its color temperature depending on the light in the room. As smart lamps go, these features are okay: not horrible, not interesting. If it came out of Samsung or LG or Amazon I wouldn’t be surprised.
So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: Ten- and eleven-year-olds designed, programmed, and built this. Because the curriculum for local students includes the skills that enable them to do this. In Europe, this is unheard of.
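To make the lamp’s behavior concrete, here is a minimal sketch of the logic described above: presence detection plus automatic color-temperature adjustment. The function name, the temperature range, and the interpolation are my own illustrative assumptions, not the students’ actual implementation.

```python
def lamp_state(presence_detected: bool, ambient_warmth: float) -> dict:
    """Return the lamp's on/off state and target color temperature.

    ambient_warmth: 0.0 (cool daylight in the room) .. 1.0 (warm evening light).
    """
    if not presence_detected:
        # Nobody around: the lamp switches itself off.
        return {"on": False, "color_temp_k": None}
    # Interpolate between cool (6500 K) and warm (2700 K) to match the room.
    color_temp = 6500 - int(ambient_warmth * (6500 - 2700))
    return {"on": True, "color_temp_k": color_temp}

print(lamp_state(False, 0.5))  # off when nobody is around
print(lamp_state(True, 1.0))   # warm evening light: lamp matches it at 2700 K
```

The point is not the sophistication of the code — it’s that this kind of sensor-driven logic is within reach of a fifth-grade curriculum.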
I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.
It’s about time we start getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to legalize AI, and it has smartly kicked off the effort with a multi-stakeholder process.
What to do?
In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.
The White House report can provide some inspiration, especially around education policy.
We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get young people ready for the future with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta-skills to make smart decisions (think humanities, history, deep learning).
We need to reform immigration to allow for the best talent to come to Europe more easily (and allow for voting rights, too, because nobody feels at home where they pay taxes with no representation).
Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.
This requires capacity building on the side of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.
But I know one thing: We need to kick off this process today.
Here at The Waving Cat, we’re in the business of analyzing the impact of emerging technologies and finding ways to harness their opportunities. This is why our services include both research & foresight and strategy: First we need to develop a deep understanding, then we can apply it. Analyze first, act second.
Over the last few years, my work has mostly homed in on the Internet of Things (#IoT). This is no coincidence: IoT is where a lot of emerging technologies converge. Hence, IoT has been a massive driver of digital transformation.
IoT has been a massive driver of digital transformation.
However, the lines between IoT and other emerging technologies are becoming ever more blurry. Concretely, data-driven and algorithmic decision-making is taking on a life of its own, both within the confines of IoT and outside of them. Under the labels of machine learning (#ML), artificial intelligence (#AI), or the (now strangely old-school moniker) big data, we’ve seen tremendous development over the last few years.
The physical world is already suffused with data, sensors, and connected devices/systems, and we’re only at the beginning of this development. Years ago I curated a track at NEXT Conference called the Data Layer, on the premise that the physical world will be covered in a data layer. Now, 5 years or so later, this reality has absolutely come to pass.
IoT with its connected devices, smart cities, connected homes, and connected mobility is part of that global infrastructure. No matter if the data crunching happens in the cloud or at the edge (i.e. close to where the data is captured/used), more and more has to happen implicitly and autonomously. Machine learning and AI play an essential role in this.
Most organizations will need to develop an approach to harnessing artificial intelligence, and so increasingly artificial intelligence is becoming a driver of digital transformation.
As of today, Internet of Things, artificial intelligence & machine learning, and digital transformation are intimately connected. You can’t really get far in one without understanding the others.
These are exciting, interesting times, and they offer lots of opportunities. We’re here to help you figure out how to harness them.
Summary: For Mozilla, we explored the potentials and challenges of a trustmark for the Internet of Things (IoT). That research is now publicly available. You can find more background and all the relevant links at thewavingcat.com/iot-trustmark
If you follow our work both over at ThingsCon and here at The Waving Cat, you know that we see lots of potential for the Internet of Things (IoT) to create value and improve lives, but also some serious challenges. One of the core challenges is that it’s hard for consumers to figure out which IoT products and services are good—which ones are designed responsibly, which ones deserve their trust. After all, too often IoT devices are essentially black boxes that are hard to interrogate and that might change with the next over-the-air software update.
So, what to do? One concept I’ve grown increasingly fond of is consumer labeling as we know it from food, textiles, and other areas. But for IoT, that’s not simple. The networked, data-driven, and dynamic nature of IoT means that the complexity is high, and even seemingly simple questions can lead to surprisingly complex answers. Still, I think there’s huge potential there to make a real impact.
I was very happy when Mozilla picked up on that idea and commissioned us to explore the potential of consumer labels. Mozilla just made that report publicly available:
I’m excited to see where Mozilla might take the IoT trustmark and hope we can continue to explore this topic.
Increasingly, in order to have agency over their lives, users need to be able to make informed decisions about the IoT devices they invite into their lives. A trustmark for IoT can significantly empower users to do just that.
Also, I’d like to extend a big thank you! to the experts whose insights contributed to this report through conversations online and offline, in public and in private:
Alasdair Allan (freelance consultant and author), Alexandra Deschamps-Sonsino (Designswarm, IoT London, #iotmark), Ame Elliott (Simply Secure), Boris Adryan (Zühlke Engineering), Claire Rowland (UX designer and author), David Ascher, David Li (Shenzhen Open Innovation Lab), Dries de Roeck (Studio Dott), Emma Lilliestam (security researcher), Geoffrey MacDougall (Consumer Reports), Gérald Santucci (European Commission), Holly Robbins (Just Things Foundation), Iskander Smit (info.nl, Just Things Foundation), Jan-Peter Kleinhans (Stiftung Neue Verantwortung), Jason Schultz (NYU), Jeff Katz (Geeny), Jon Rogers (Mozilla Open IoT Studio), Laura James (Doteveryone, Digital Life Collective), Malavika Jayaram (Berkman Klein Center, Digital Asia Hub), Marcel Schouwenaar (Just Things Foundation, The Incredible Machine), Matt Biddulph (Thington), Michelle Thorne (Mozilla Open IoT Studio), Max Krüger (ThingsCon), Ronaldo Lemos (ITS Rio), Rosie Burbidge (Fox Williams), Simon Höher (ThingsCon), Solana Larsen (Mozilla), Stefan Ferber (Bosch Software Innovation), Thomas Amberg (Yaler), Ugo Vallauri (The Restart Project), Usman Haque (Thingful, #iotmark). Also and especially I’d like to thank the larger ThingsCon and London #iotmark communities for sharing their insights.
For a while we’ve been debating the ethics of algorithms, especially in the context of autonomous vehicles: What should happen, when something goes wrong? Who/what does the robo car protect? Who’s liable for damage if a crash occurs?
Germany, which has a strategy in place to become not just a world-leading manufacturer of autonomous vehicles but also a world-leading consumer market, just announced how to deal with these questions.
The Ethics Commission’s report comprises 20 propositions. The key elements are:
Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).
Damage to property must take precedence over personal injury. In hazardous situations, the protection of human life must always have top priority.
In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
It must be documented and stored who is driving (to resolve possible issues of liability, among other things).
Drivers must always be able to decide themselves whether their vehicle data are to be forwarded and used (data sovereignty).
The Federal Ministry of Transport and Digital Infrastructure’s Ethics Commission comprised 14 academics and experts from the disciplines of ethics, law and technology. Among these were transport experts, legal experts, information scientists, engineers, philosophers, theologians, consumer protection representatives as well as representatives of associations and companies.
Reading this, I have to say I’m relieved and impressed: These guidelines seem entirely reasonable, common sense, and practical. Especially the non-discrimination clause and the principle of data sovereignty are good to see included here. Well done!
This bodes well for other areas where we haven’t seen this level of consideration from the German government yet, like smart cities and the super-set of #iot. I hope we’ll see similar findings and action plans in those areas soon, too.
From hobbyist tinkering all the way to the smart city: The Internet of Things (#IoT) increasingly touches every area of our lives. But who determines what is allowed, what happens with our data, and whether it’s OK to look under the hood? IoT sits at the intersection of many areas of technology, governance, and regulation, and in doing so creates a whole series of tensions.
Due to technical issues with the video projection, my slides weren’t shown for the first few minutes. Apologies. On the plus side, the organizers had kindly put a waving cat on the podium for me.
It’s a rare talk in that I gave it in German, something I’m hardly used to these days.
In it, I argue that IoT poses a number of particular challenges that we need to address (incl. the level of complexity and blurred lines across disciplines and expertise; power dynamics; and transparency). I outline inherent tensions and propose a few approaches on how to tackle them, especially around increasing transparency and legibility of IoT products.
I conclude with a call for Europe to actively take a global leadership role in the area of consumer and data protection, analogous to Silicon Valley’s (claimed/perceived) leadership in disruptive innovation and in funding and scaling digital products, and to Shenzhen’s leadership in hardware manufacturing.
One of the key challenges for Internet of Things (IoT) in the consumer space boils down to expectation management: For consumers it’s unreasonably hard to know what to expect from any given IoT product/service.
This is also why we’ve been investigating potentials and challenges of IoT labels and are currently running a qualitative online survey—please share your thoughts! The resulting report will be published later this year.
I think the quadrant of questions anyone should be able to answer to a certain degree looks somewhat like this (still in draft stage):
Let’s go through the quadrants, counterclockwise starting at the top left:
Does it do what I expect it to do?
This should be pretty straightforward for most products: Does the fitness tracker track my fitness? Does the connected fridge refrigerate? Etc.
Is the organization trustworthy?
This question is always a tough one, but it comes down to building, earning, and keeping the trust of your consumers and clients. This is traditionally the essence of brands.
Are the processes trustworthy?
The trickiest question, because internal processes are usually hard, if not impossible, to interrogate from the outside. Companies could differentiate themselves in a positive way by being as transparent as possible.
Does it do anything I wouldn’t expect?
I believe this question is essential. Connected products often have features that may be unexpected to the layperson, sometimes because they are a technical requirement, sometimes because they are added later through a software update. Whatever the reason, an IoT device should never do anything its users have no reason to expect it to. As a particularly toxic example: no layperson would reasonably expect a smart TV to be always listening and sharing data with a cloud service.
If these four bases are covered, I think that’s a good place to start.
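As a sketch of how these four bases might feed into a review routine — for instance, as groundwork for a label or trustmark assessment — the quadrant above could be expressed as a simple checklist. The question wording comes from the quadrant; the review function, the example product, and its answers are invented for illustration.

```python
# The four questions from the quadrant above.
TRUST_QUESTIONS = [
    "Does it do what I expect it to do?",
    "Does it do anything I wouldn't expect?",
    "Is the organization trustworthy?",
    "Are the processes trustworthy?",
]

def review(answers: dict) -> bool:
    """A product covers its bases only if every question checks out.

    answers maps each question to True ("checks out", e.g. True for the
    second question means: no unexpected behavior found) or False.
    """
    return all(answers.get(question, False) for question in TRUST_QUESTIONS)

# Hypothetical fitness tracker: works as advertised, no surprises,
# reputable brand, but its internal processes are opaque.
fitness_tracker = {
    "Does it do what I expect it to do?": True,
    "Does it do anything I wouldn't expect?": True,
    "Is the organization trustworthy?": True,
    "Are the processes trustworthy?": False,
}
print(review(fitness_tracker))  # one base not covered, so the review fails
```

Note the design choice in `review`: a missing answer counts as a failure, mirroring the point above that opaque processes and uninterrogable black boxes should not earn trust by default.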