This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:05.
A good friend of mine, back when we were students, studied film, among other things. We’d go to the movie theater frequently. At some point, he jokingly pointed out a guideline that, I’ve found, applies usefully well beyond movie theater visits.
So today I present this back to you, paraphrased, as Engesser’s Law:
“If you notice you’ll need to go to the bathroom before the movie ends, the best time to go is when you notice, as the movie will be more interesting at any later stage going forward.”
Because movies are built, obviously, with a dramatic arc that goes up and up and up. So, if you know it’s necessary, now’s better than later. Any later interruption will be disproportionately worse. (If it’s a good movie, that is.)
So I’m taking the liberty to add two corollaries and expand this from the world of movie theaters into everyday life, both work and personal:
First corollary: If you know any task will become urgent later, the best time to finish it is right now.
Second corollary: If you delay finishing a task until it becomes too urgent to delay any further, you reduce your own agency and may cause avoidable additional damage to yourself and possibly others.
I can’t help thinking that so many of today’s debates – from climate change to smart city governance and AI ethics – are much more connected than we give them credit for. I might be projecting, but in my mind they’re just variations of one simple theme:
Do we narrow or broaden the future options space? In other words, will we leave the next generation, the public sector, or the other people around us more options or fewer? Do we give them agency or take it away? And how can it ever be OK to act in a way that takes away future generations’ options? That strips governments of their chances to deliver services to their citizens?
It’s essentially the Tragedy of the Commons as applied to the time axis: The Tragedy of Future Commons. And we can choose very deliberately to strengthen the commons (now and for the future), to strengthen future generations in the face of climate change (where we might have hit another tipping point), to strengthen city governments in their ability to govern and deliver services by not hollowing them out, and so on.
What actions this requires of us depends heavily on context, of course: AI made with more participation and civil society involvement so as to mitigate risks. Smart cities that prioritize public ownership and accountability so the city doesn’t lose its influence to the private sector. Climate change at the top of all our priority lists, in order to give our future selves and future generations more and better options to shape their world and thrive in it.
Too often we’re stuck in debates that are based, essentially, in yesterday’s world. We need to realize the situation we’re in so as to avoid false choices. It’s not “climate or business”, it’s “climate or no business”. It’s not “climate or civil rights”, but “climate or no civil rights”. Radical changes are coming our way, and I’d rather shape them with intention and some buffer to spare rather than see them imposed on us like gravity imposed on Newton’s fabled apple.
So let’s aim for the opposite of the Tragedy of the Commons, whatever that might be called. The Thriving of the Commons?
And if you need a framework that’s decidedly not made for this purpose but has been holding up nicely for me, look to the Vision for a Shared Digital Europe (SDE) for inspiration. It lays out 4 pillars that I find pretty appealing: Cultivate the Commons; Decentralize Infrastructure; Enable Self-Determination; Empower Public Institutions. The authors drafted it with the EU’s digital agenda in mind (I was a very minor contributor, joining at a later stage). But I think it can apply meaningfully to smart cities just as much as it does to AI development and climate change and other areas. (Feel free to hit up the team to see how they might apply to your context, or reach out to me and I’ll be happy to put you in touch.) Those are good principles!
Note: This piece is cross-posted from my weekly newsletter Connection Problem, to which you can sign up here.
Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.
Some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.
This isn’t an exhaustive list, of course, but it gives you a hint about my headspace. Experience shows that this can serve as a solid early warning system for industry-wide debates, too.
Questions we’ve had on our collective minds:
1. What’s the relationship between (digital) technology and ethics/sustainability?
There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!
2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust.
Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust (and products’ trustworthiness) is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)
3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting?
It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.
4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as no-go?
These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications on business, strategy, research, policy. We’ll continue to dig in.
But also, besides these larger, more complex questions there are smaller, more concrete things to explore:
What are new emerging technologies? Where are exciting new opportunities?
What will happen as autonomous vehicles, solar power, and cryptocurrencies become more ubiquitous? What about LIDAR and Li-Fi?
How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.
If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.
Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.
PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and where I explore areas around tech, society, culture & business that I find relevant. To watch my thinking unfolding and maturing, this is for you. You can subscribe here.
At their Pixel 2 event at the beginning of the month, Google released a whole slew of new products. Besides new phones there were updated versions of their smart home hub, Google Home, and some entirely new types of product.
I don’t usually write about product launches, but this event has me excited about new tech for the first time in a long time. Why? Because some aspects stood out because they represent a larger shift in the industry: the new role of artificial intelligence (AI) as it seeps into consumer goods.
Google has been reframing itself from a mobile-first to an AI-first company for the last year or so. (For full transparency I should add that I’ve worked with Google occasionally in the recent past, but everything discussed here is of course publicly available.)
We now see this shift of focus play out as it manifests in products.
Here’s Google CEO Sundar Pichai at the opening of Google’s Pixel 2 event:
We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work. (…) We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts with AI.
First things first: I fully agree – there’s currently no other company that’s as well positioned to drive the development of AI, or to benefit from it. In fact, back in May 2017 I wrote that “Google just won the next 10 years.” That was when Google had just hinted at their capabilities in terms of new features, but had also announced building AI infrastructure for third parties to use. AI as a platform: Google has it.
Before diving into some structural thoughts, let’s look at two specific products they launched:
Google Clips is a camera you can clip somewhere, and it’ll automatically take photos when certain conditions are met: a particular person’s face is in the picture, or they are smiling. It’s an odd product for sure, but here’s the thing: it’s fully machine-learning-powered facial recognition, and the computing happens on the device. This is remarkable both as a technical achievement and for its approach. Google has become a company of high centralization (the bane of cloud computing, I’d lament), but Google Clips works at the edge, decentralized. This is powerful, and I hope it inspires a new generation of IoT products that embrace decentralization.
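To make the edge-versus-cloud distinction concrete, here is a minimal, purely illustrative sketch (not Google’s actual implementation; all names are hypothetical): the defining property of on-device ML is that raw data never leaves the device, and only the selected results ever would.

```python
# Illustrative sketch of edge inference: all computation happens locally;
# raw frames are never uploaded, only the frames worth keeping.

def edge_capture(frames, on_device_model):
    """Run inference on-device and return only the frames worth keeping."""
    keepers = []
    for frame in frames:
        # Inference happens here, on the device itself -- no network call.
        if on_device_model(frame):
            keepers.append(frame)
    return keepers  # only these would ever be synced off-device

# Stand-in for a tiny on-device classifier ("is someone smiling?").
def toy_model(frame):
    return frame.get("smiling", False)

frames = [{"id": 1, "smiling": True}, {"id": 2, "smiling": False}]
print(edge_capture(frames, toy_model))  # keeps only frame 1
```

The cloud-based alternative would ship every frame to a server first, which is exactly the centralization trade-off discussed above.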
Google’s new in-ear headphones offer live translation. That’s right: these headphones should allow for multi-language human-to-human live conversations. (This happens in the cloud, not locally.) How well this works in practice remains to be seen, and surely you wouldn’t want to run a work meeting through them. But even if it eases travel-related helplessness just a bit, it’d be a big deal.
So as we see these new products roll out, the actual potential becomes much more graspable. There’s a shape emerging from the fog: Google may not really be AI first just yet, but they certainly have made good progress on AI-leveraged services.
The mental model I’m using for how Apple and Google compare is this:
Apple’s ecosystem focuses on integration: hardware (phones, laptops) and software (OS X, iOS) are both highly integrated, and services are built on top. This allows for consistent service delivery, for pushing the limits of hardware and software alike, and (most importantly for Apple’s bottom line) it allows Apple to sell hardware that’s differentiated by software and services: nobody else is allowed to make an iPhone.
Google started at the opposite side, with software (web search, then Android). Today, Google looks something like this:
Based on software (search/discovery, plus Android), there’s now also hardware that’s more integrated. Note that Android is still the biggest smartphone platform as well as the basis for lots of connected products, so Google’s hardware isn’t the only game in town. How this works out with partners over time remains to be seen. That said, this new structure means Google can push its software capabilities to the limits through their own hardware (phones, smart home hubs, headphones, etc.) and then aim for the stars with AI-leveraged services in a way I don’t think we’ll see from competitors anytime soon.
What we’ve seen so far is the very tip of the iceberg: As Google keeps investing in AI and exploring the applications enabled by machine learning, this top layer should become exponentially more interesting: They develop not just the concrete services we see in action, but also use AI to build their new models, and open up AI as a service for other organizations. It’s a triple AI ecosystem play that should reinforce itself and hence gather more steam the more it’s used.
AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.
This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.
For the record: In other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street—and regulates it well! Key points include that autonomous driving is worth promoting because it causes fewer accidents; that damage to property must take precedence over personal injury (i.e., life has priority); and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!
On the other hand, with the Internet of Things (IoT) Germany has squandered opportunities: IoT is framed almost exclusively as industrial IoT under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of super interesting and promising IoT. It’s clearly the result of successful lobbying, but at the expense of a more inclusive, diverse portfolio of opportunities.
So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.
Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (“learning systems platform”). Thanks to Christian Katzenbach for the pointer!
AI & the future of work
The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: It’s one of the key aspects of AI. (Some others are, I’d say, opportunity for the creation of wealth on one side and algorithmic discrimination on the other.)
How AI will impact the work force, the economy, and the role of the individual is something we can only speculate about today.
In a recent workshop on the future of work with scholarship holders of the Heinrich Böll Foundation, we explored how digital technology, AI, IoT and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field have also been finding: there are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.
The one thing I’d say is a safe bet is this: Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.
Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: future jobs will be placed either above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course conjures up images of roboticized warehouses, like this one:
Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future, this is the well-established present.
I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.
Education is key
So where are we in Europe again? In Germany, we’re still trying to define what IoT and AI mean. In China, it’s been happening for years.
This picture shows a smart lamp in Shenzhen that we found in a maker space:
What does the lamp do? It detects whether users are nearby, so it can switch itself off when nobody’s around. It automatically adjusts its color temperature depending on the ambient light in the room. As smart lamps go, these features are okay: not horrible, not interesting. If it came out of Samsung or LG or Amazon, I wouldn’t be surprised.
So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: ten- and eleven-year-olds designed, programmed, and built this, because the local curriculum includes the skills that enable them to do so. In Europe, this is unheard of.
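The lamp’s behavior as described above is, at its core, a small control function, which is part of why it makes such a good school project. Here is a minimal sketch under stated assumptions (the lux range, the color temperature mapping, and all names are illustrative guesses, not the students’ actual code):

```python
# Hypothetical sketch of the smart lamp's control logic:
# presence-based switching plus ambient-light-based color temperature.

def lamp_state(presence_detected, ambient_lux):
    """Return (on, color_temp_kelvin) for the lamp."""
    if not presence_detected:
        return (False, None)  # nobody around: switch off
    # Assumption: map 0-500 lux onto roughly 2700 K (warm, dim room)
    # up to 5000 K (cool, bright room).
    lux = max(0, min(ambient_lux, 500))
    color_temp = 2700 + (5000 - 2700) * lux / 500
    return (True, round(color_temp))

print(lamp_state(False, 300))  # (False, None)
print(lamp_state(True, 0))     # (True, 2700)
print(lamp_state(True, 500))   # (True, 5000)
```

A real lamp would wrap this in a sensor-polling loop, but the decision logic itself is well within reach of a motivated fifth grader with the right curriculum.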
I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.
It’s about time we start getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to legalize AI, and it is smartly kicking off the effort as a multi-stakeholder process.
What to do?
In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.
The White House report can provide some inspiration, especially around education policy.
We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get young people ready for the future, with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta-skills to make smart decisions (think humanities, history, deep learning).
We need to reform immigration to allow for the best talent to come to Europe more easily (and allow for voting rights, too, because nobody feels at home where they pay taxes with no representation).
Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.
This requires capacity building on the side of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.
But I know one thing: We need to kick off this process today.
We strongly believe that good ethics mean good business. This isn’t just an empty phrase, either: we know from our own experience that it often pays great dividends to go the extra step and take into account the implications of business decisions.
This is especially true in areas that employ new technologies, simply because there are more unknowns in emerging tech. And more unknowns = higher risks.
Our field of operation is at the intersection of emerging tech, strategy, and good business ethics.
Take, for example, a global tech company’s VP who adapted community-driven guidelines for data ownership in IoT: he knew that this particular pioneer community had a deeper understanding of the issues at stake than most. Even though these data ownership guidelines meant possibly losing some short-term revenue, he trusted in their long-term positive side effects. Now, and at the time unexpectedly, his organization is better positioned than most to comply with the new EU data protection regulation (GDPR). Even before that, these guidelines likely inspired user trust and confidence.
Other companies lose their best talent because of sketchy business tactics, to competitors who are honest and trustworthy and have a credible, powerful mission.
If you pay attention you’ll find these examples everywhere: good ethics aren’t a buzzword, nor are they rocket science. They’re 100% compatible with good business. They might just be a prerequisite.
Here at The Waving Cat, we’re in the business of analyzing the impact of emerging technologies and finding ways to harness their opportunities. This is why our services include both research & foresight and strategy: First we need to develop a deep understanding, then we can apply it. Analyze first, act second.
Over the last few years, my work has mostly homed in on the Internet of Things (#IoT). This is no coincidence: IoT is where a lot of emerging technologies converge. Hence, IoT has been a massive driver of digital transformation.
However, the lines between IoT and other emerging technologies are becoming ever more blurry. Concretely, data-driven and algorithmic decision-making is taking on a life of its own, both within the confines of IoT and outside them. Under the labels of machine learning (#ML), artificial intelligence (#AI), or the (now strangely old-school moniker) big data, we’ve seen tremendous development over the last few years.
The physical world is already suffused with data, sensors, and connected devices/systems, and we’re only at the beginning of this development. Years ago I curated a track at NEXT Conference called the Data Layer, on the premise that the physical world will be covered in a data layer. Now, 5 years or so later, this reality has absolutely come to pass.
IoT with its connected devices, smart cities, connected homes, and connected mobility is part of that global infrastructure. No matter if the data crunching happens in the cloud or at the edge (i.e. close to where the data is captured/used), more and more has to happen implicitly and autonomously. Machine learning and AI play an essential role in this.
Most organizations will need to develop an approach to harnessing artificial intelligence, and so increasingly artificial intelligence is becoming a driver of digital transformation.
As of today, Internet of Things, artificial intelligence & machine learning, and digital transformation are intimately connected. You can’t really get far in one without understanding the others.
These are exciting, interesting times, and they offer lots of opportunities. We’re here to help you figure out how to harness them.