
Resilience, resilience, resilience


If we’ve learned anything over the past few months of lockdowns and the global, corona-pandemic-related slowdown of just about everything, it’s this: Efficiency is brittle. Resilience is key.

For decades now — really at least since the first industrial revolution — the world has optimized for efficiency. Efficiency was the key to growth, which was the measure of societal success. Growth, after all, was pretty much considered the key proxy for wealth.

Now it’s become blatantly obvious that that was nonsense. Even if — and that’s a big if — it might have made more sense in the past.

(By the way, everything I’m writing here applies, I think, equally to politics and individual organizations. Just zoom in and out appropriately.)

As global economic activity and supply chains collapsed in tandem, we’ve seen that efficiency only leads to growth (and wealth) when things run very smoothly. And even then, this approach has been so exploitative of natural resources that we’ve pulled the existential rug out from under our own feet, putting the world on a trajectory towards climate change and mass extinction so severe as to completely screw up our future.

Increasingly, the humans involved weren’t doing much better either: The type of labor exploitation that powered this vast efficiency machine worked ever better for an ever smaller group of people (mostly along socio-economic lines, i.e. well-off to rich people) and ever worse for ever larger groups (the people doing the underlying physical work).

Clearly, something had to give. And the Covid-19 pandemic put a spotlight on this in a way that was only possible because everything collapsed simultaneously around the globe.

So what now? Resilience and sustainability.

Decades of optimizing for efficiency and globalization have put us in a dead end. We know it simply cannot go on the way it has. If we now rebuilt everything as it was before, we’d be doing ourselves (and the generations that follow) a huge disservice. We’d be rebuilding a broken machine rather than fixing it.

Instead, it’s become obvious that resilience and sustainability are the key aspects to optimize for.

Resilience introduces safeguards and buffers to respond to crises. Sustainability allows for things to go on indefinitely.

After all, that’s the core meaning of sustainability: Something that can be sustained for a long time, as opposed to something that cannot. Growth as we know it most definitely cannot.

Economist Vaclav Smil talks a lot about how much slack there is in the system, how much freedom we have to de-grow parts of our societies and economies without actually having to give up much of anything (if we do it quickly enough).

It is critical to realize and internalize:

  • There is slack in the system; we just need to stop optimizing it away to squeeze out a tiny bit more efficiency.
  • Efficiency is a dead end now: We’ve pushed past our available resources chasing it, and now we need to course-correct hard. We need to drop growth as a key metric to chase, and instead orient everything around resilience and sustainability.
  • We are currently still racing a metaphorical car into a metaphorical wall, and accelerating. There is only one possible outcome if we keep going, and it’s not desirable. (If you think in a Futures Cone, our probable outcomes are decidedly not our preferred outcomes.)

If we restructure — and I mean fundamentally and completely restructure our economic models and societal definitions of value — around resilience and sustainability, there’s a chance we can find our way to a world worth living in. If we do not, the only possible outcome is to crash & burn, and that is exactly where we’re headed.

If resilience & sustainability are the guiding principles, the next crisis could still be very bad, but the systems would have more slack and be more resilient: We’d all get through any crisis better than we would now, because we’d have optimized for that scenario.

Change is always easier to effect when things are in flux, during liminal states. The current phase of turmoil around Covid-19 is very much in flux; we’re in an extremely liminal state.

Let’s not waste this opportunity. It might just be the last chance we have.

Engesser’s Law


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:05.

A good friend of mine, back when we were students, studied film, among other things. We’d go to the movie theater frequently. At some point, he jokingly pointed out a useful guideline to me, one that I’ve found can be usefully applied well outside of movie theater visits.

So today I present this back to you, paraphrased, as Engesser’s Law:

“If you notice you’ll need to go to the bathroom before the movie ends, the best time to go is when you notice, as the movie will be more interesting at any later stage going forward.”

Because movies are built, obviously, with a dramatic arc that goes up and up and up. So, if you know it’s necessary, now’s better than later. Any later interruption will be disproportionately worse. (If it’s a good movie, that is.)
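If you want it in code: the whole law is a one-line minimization over a rising sequence. A toy Python sketch, with the interest levels entirely invented:

```python
# Toy model of Engesser's Law: if a (good) movie's interest only rises
# over its runtime, the cheapest interruption is always the earliest one.
interest = [2, 3, 5, 8, 13]  # invented interest level per act, strictly rising
best_act = min(range(len(interest)), key=lambda act: interest[act])
print(best_act)  # 0: with a rising arc, "now" always minimizes the cost
```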

So I’m taking the liberty of adding two corollaries and expanding this from the world of movie theaters into everyday life, both work and personal:

First corollary: If you know any task will become urgent later, the best time to finish it is right now.

Second corollary: If you delay finishing a task until it becomes too urgent to delay any further, you reduce your own agency and may cause avoidable additional damage to yourself and possibly others.

The Tragedy of Future Commons


I can’t help thinking that so many of today’s debates – from climate change to smart city governance and AI ethics – are much more connected than we give them credit for. I might be projecting, but in my mind they’re just variations of one simple theme:

Do we narrow or broaden the future options space? In other words, will we leave the next generation, the public sector, or the other people around us more options or fewer? Do we give them agency or take it away? And how can it ever be okay to act in a way that takes away future generations’ options? That strips governments of their chances to deliver services to their citizens?

It’s essentially the Tragedy of the Commons applied to the time axis: The Tragedy of Future Commons. And we can choose, very deliberately, to strengthen the commons (now and for the future), to strengthen future generations in the face of climate change (where we might have hit another tipping point), to strengthen city governments in their ability to govern and deliver services by not hollowing them out, and so on.

What actions this requires of us depends heavily on context, of course: AI should be made with more participation and civil society involvement so as to mitigate risks. Smart cities should prioritize public ownership and accountability so the city doesn’t lose its influence to the private sector. Climate change should be at the top of all our priority lists, in order to give our future selves and future generations more and better options to shape their world and thrive in it.

Too often we’re stuck in debates that are based, essentially, in yesterday’s world. We need to realize the situation we’re in so as to avoid false choices. It’s not “climate or business”, it’s “climate or no business”. It’s not “climate or civil rights”, but “climate or no civil rights”. Radical changes are coming our way, and I’d rather shape them with intention and some buffer to spare than see them imposed on us the way gravity imposed itself on Newton’s fabled apple.

So let’s aim for the opposite of the Tragedy of the Commons, whatever that might be called. The Thriving of the Commons?

And if you need a framework that’s decidedly not made for this purpose but has held up nicely for me, look to the Vision for a Shared Digital Europe (SDE) for inspiration. It lays out four pillars that I find pretty appealing: Cultivate the Commons; Decentralize Infrastructure; Enable Self-Determination; Empower Public Institutions. The authors drafted it with the EU’s digital agenda in mind (I was a very minor contributor, joining at a later stage), but I think it applies just as meaningfully to smart cities as it does to AI development, climate change, and other areas. (Feel free to hit up the team to see how the pillars might apply to your context, or reach out to me and I’ll be happy to put you in touch.) Those are good principles!

Note: This piece is cross-posted from my weekly newsletter Connection Problem, to which you can sign up here.

The key challenge for the industry in the next 5 years is consumer trust


Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.

This isn’t an exhaustive list, of course, but it gives you a hint about my headspace; experience shows that this can serve as a solid early warning system for industry-wide debates, too. Questions we’ve had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust. Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust (and products’ trustworthiness) is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as a no-go?

These are all interesting, deep topics to dig into. They’re closely interrelated, too, and have implications for business, strategy, research, and policy. We’ll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • Which new technologies are emerging? Where are the exciting new opportunities?
  • What will happen as autonomous vehicles, solar power, and cryptocurrencies become more ubiquitous? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and explorations of areas around tech, society, culture & business that I find relevant. If you want to watch my thinking unfold and mature, this one is for you. You can subscribe here.

Google’s new push to AI-powered services


At their Pixel 2 event at the beginning of the month, Google released a whole slew of new products. Besides new phones, there were updated versions of their smart home hub, Google Home, and some entirely new types of product.

I don’t usually write about product launches, but this event has me excited about new tech for the first time in a long time. Why? Because some aspects stood out: they stand for a larger shift in the industry, namely the new role of artificial intelligence (AI) as it seeps into consumer goods.

Google has been reframing itself from a mobile-first to an AI-first company for the last year or so. (For full transparency I should add that I’ve worked with Google occasionally in the recent past, but everything discussed here is, of course, publicly available.)

We now see this shift of focus play out as it manifests in products.

Here’s Google CEO Sundar Pichai at the opening of Google’s Pixel 2 event:

We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work. (…) We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts with AI.
First things first: I fully agree – there’s currently no other company that’s as well positioned to drive the development of AI, or to benefit from it. In fact, back in May 2017 I wrote that “Google just won the next 10 years.” That was when Google had only hinted at their capabilities in terms of new features, but had also announced they were building AI infrastructure for third parties to use. AI as a platform: Google has it.

Before diving into some structural thoughts, let’s look at two specific products they launched:

  1. Google Clips is a camera you can clip somewhere, and it’ll automatically take photos when certain conditions are met: a certain person’s face is in the picture, say, or they are smiling. It’s an odd product for sure, but here’s the thing: it’s fully machine-learning-powered facial recognition, and the computing happens on the device. That’s remarkable both as a technical achievement and as an approach: Google has become a highly centralized company (the bane of cloud computing, I’d lament), yet Google Clips works at the edge, decentralized. This is powerful, and I hope it inspires a new generation of IoT products that embrace decentralization (see the sketch after this list).
  2. Google’s new in-ear headphones offer live translation. That’s right: These headphones should allow for multi-language, human-to-human live conversations. (This happens in the cloud, not locally.) How well this works in practice remains to be seen, and surely you wouldn’t want to run a work meeting through them. But even if they ease travel-related helplessness just a bit, it’d be a big deal.
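To make the edge-versus-cloud point from the Clips example concrete, here’s a minimal, purely illustrative Python sketch. None of these names are Google’s actual APIs, and the detector is a stub; the only point is where the data travels: frames are processed and discarded on the device, and nothing is uploaded.

```python
"""Toy sketch of on-device ("edge") inference for a Clips-style camera.
Every name here is an invented stand-in, not Google's actual API."""
from dataclasses import dataclass
import random


@dataclass
class Face:
    is_smiling: bool


class OnDeviceModel:
    """Stand-in for a model whose weights live on the device itself."""

    def detect(self, frame: bytes) -> list:
        # A real detector would run a neural net here; we fake a result.
        return [Face(is_smiling=random.random() > 0.5)]


def capture_frame() -> bytes:
    return b"raw-image-bytes"  # in the edge setup, these bytes never leave the device


def edge_loop(model: OnDeviceModel, n_frames: int = 5) -> int:
    """All computation is local: no frame ever touches the network."""
    kept = 0
    for _ in range(n_frames):
        frame = capture_frame()
        if any(face.is_smiling for face in model.detect(frame)):
            kept += 1  # keep the moment; still no upload involved
    return kept


if __name__ == "__main__":
    print(edge_loop(OnDeviceModel()), "clips kept, zero bytes uploaded")
```

A cloud-first version would ship every frame to a server before deciding anything; that difference is the whole point of the Clips approach.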

So as we see these new products roll out, the actual potential becomes much more graspable. There’s a shape emerging from the fog: Google may not really be AI-first just yet, but it has certainly made good progress on AI-leveraged services.

The mental model I’m using for how Apple and Google compare is this:


The Apple model

Apple’s ecosystem focuses on integration: Hardware (phones, laptops) and software (OSX, iOS) are both highly integrated, and services are built on top. This allows for consistent service delivery, for pushing the limits of hardware and software alike, and, most importantly for Apple’s bottom line, for selling hardware that’s differentiated by software and services: Nobody else is allowed to make an iPhone.

Google started from the opposite side, with software (web search, then Android). Today, Google looks something like this:


The Google model

Based on software (search/discovery, plus Android), there is now also hardware that’s more tightly integrated. Note that Android is still the biggest smartphone platform as well as the basis for lots of connected products, so Google’s own hardware isn’t the only game in town. How this works out with partners over time remains to be seen. That said, this new structure means Google can push its software capabilities to the limit through its own hardware (phones, smart home hubs, headphones, etc.) and then aim for the stars with AI-leveraged services in a way I don’t think we’ll see from competitors anytime soon.

What we’ve seen so far is the very tip of the iceberg: As Google keeps investing in AI and exploring the applications enabled by machine learning, this top layer should become exponentially more interesting. Google develops not just the concrete services we see in action, but also uses AI to build its new models, and opens up AI as a service for other organizations. It’s a triple AI ecosystem play that should reinforce itself, and hence gather more steam the more it’s used.

This offers tremendous opportunities and challenges. So while it’s exciting to see this unfold, we need to get our policies ready for futures with AI.

Please note that this is cross-posted from Medium. Disclosure: I’ve worked with Google a few times in the recent past.

Getting our policies ready for AI futures


In late 2016, the White House published a report, “Artificial Intelligence, Automation, and the Economy” (PDF). It’s a solid work of research and forecasting, and proposes equally solid policy recommendations. Here’s part of the framing, from the report’s intro:

AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.

This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.


Revisiting this report made me wonder whether similar policy research exists in Germany and at the European level. A quick search online brought up bits and pieces (Merkel arguing for bundling AI know-how and acknowledging that AI spending is low in Europe, demands for transparency in algorithms). However, there doesn’t seem to be an overarching guiding policy. (I asked federal government spokesperson Steffen Seibert on Twitter, but so far he hasn’t responded. Which is fair—why would he!)

Germany has a mixed track record on tech policy

For the record: In other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street—and regulates them well! Key points include that autonomous driving is worth promoting because it causes fewer accidents, that damage to property must be accepted before personal injury (life has priority), and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!

On the other hand, with the Internet of Things (IoT), Germany has squandered opportunities: IoT is framed almost exclusively as industrial IoT, under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of super interesting and promising IoT. It’s clearly the result of successful lobbying, but it comes at the expense of a more inclusive, diverse portfolio of opportunities.

So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.


Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (“learning systems platform”). Thanks to Christian Katzenbach for the pointer!

AI & the future of work

The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: It’s one of the key aspects of AI. (Some others are, I’d say, opportunity for the creation of wealth on one side and algorithmic discrimination on the other.)

How AI will impact the work force, the economy, and the role of the individual is something we can only speculate about today.

In a recent workshop on the future of work with scholarship holders of the Heinrich Böll Foundation, we explored how digital, AI, IoT, and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field have been finding: There are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.


The one thing I’d say is a safe bet is this: Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.

Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: Future jobs will be placed either above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course conjures up images of roboticized warehouses, like this one:

Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future; it’s the well-established present.
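Rao’s framing is easy to make concrete in code. Here’s a toy Python sketch (the roles, the task, and the scheduler are all invented) of who instructs whom on each side of the API:

```python
"""Toy illustration of "above vs. below the API"; everything here is invented."""


def dispatch(task: str, worker: str, above_api: bool) -> str:
    if above_api:
        # Above the API: the human tells the software what to do.
        return f"{worker} queues '{task}' for the robots"
    # Below the API: the software tells the human what to do.
    return f"scheduler orders {worker}: '{task}' within 90 seconds"


for role, above in [("engineer", True), ("picker", False)]:
    print(dispatch("move pallet 7 to bay C", role, above))
```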


I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.

Education is key

So where are we in Europe again? In Germany, we’re still trying to define what IoT and AI mean. In China, it’s been happening for years.

This picture shows a smart lamp in Shenzhen that we found in a maker space:

What does the lamp do? It senses whether anyone is nearby, so it can switch itself off when nobody’s around. And it automatically adjusts its light temperature depending on the light in the room. As smart lamps go, these features are okay: Not horrible, not interesting. If it came out of Samsung or LG or Amazon, I wouldn’t be surprised.
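For a sense of how simple the underlying logic is, here’s a Python sketch of those two behaviors. The sensor values and thresholds are invented; I don’t know how the actual lamp was implemented:

```python
def lamp_state(presence: bool, ambient_lux: float) -> dict:
    """Toy model of the lamp; thresholds and values are invented."""
    if not presence:
        return {"on": False}  # nobody around: switch off
    # Dim room -> warm light; bright room -> cooler light (a common heuristic).
    kelvin = 2700 if ambient_lux < 50 else 4000 if ambient_lux < 300 else 5500
    return {"on": True, "color_temp_kelvin": kelvin}


print(lamp_state(presence=True, ambient_lux=20))    # {'on': True, 'color_temp_kelvin': 2700}
print(lamp_state(presence=False, ambient_lux=500))  # {'on': False}
```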

So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: Ten- and eleven-year-olds designed, programmed, and built this, because the local curriculum includes the skills that enable them to do it. In Europe, this is unheard of.

I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.


It’s about time we start getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to legalize AI, and is smartly kicking off the effort with a multi-stakeholder process.

What to do?

In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.

The White House report can provide some inspiration, especially around education policy.

We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get young people ready for the future, with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta-skills needed to make smart decisions (think humanities, history, deep learning).

We need to reform immigration to allow the best talent to come to Europe more easily (and to grant voting rights, too, because nobody feels at home where they pay taxes without representation).


Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.

This requires capacity building on the side of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.

But I know one thing: We need to kick off this process today.

///

Please note: This is cross-posted from Medium.

Opportunities at the intersection of emerging tech, strategy, and good ethics


We strongly believe that good ethics mean good business. This isn’t just an empty phrase, either: We know from our own experience that it often pays great dividends to go the extra step and take into account the implications of business decisions.

This is especially true in areas that employ new technologies, simply because there are more unknowns in emerging tech. And more unknowns = higher risks.

Our field of operation is at the intersection of emerging tech, strategy, and good business ethics.

Take, for example, the VP at a global tech company who adapted community-driven guidelines for data ownership in IoT: He knew that this particular pioneer community had a deeper understanding of the issues at stake than most. Even though these data ownership guidelines meant possibly losing some short-term revenue, he trusted in their long-term positive side effects. Now, and unexpectedly so at the time, his organization is in a better position than most to comply with the EU’s new data protection regulation (GDPR). Even before that, these guidelines likely inspired user trust and confidence.

Other companies lose their best talent because of sketchy business tactics—to those who are honest and trustworthy, and have a credible and powerful mission.

If you pay attention, you’ll find these examples everywhere: Good ethics aren’t a buzzword, nor are they rocket science. They’re 100% compatible with good business. They might just be a prerequisite.