Before we head into the long Easter holiday weekend, a quick rundown of what happened in March.
Mozilla Fellowship & an open trustmark for IoT
I’m happy to share that I’ve joined the Mozilla Fellows program (concretely, the IoT fellows group to work with Jon Rogers and Julia Kloiber), and that Mozilla supports the development of an open trustmark for IoT under the ThingsCon umbrella.
(As always, a full disclosure: My partner works for Mozilla.)
I had already shared first thoughts on the IoT trustmark. We’ll have a lot more to share on the development of the trustmark now that it’s becoming more official. You can follow along here and over on the ThingsCon blog.
By the way, this needs a catchy name. Hit me up if you have one in mind we could use!
We also gave an interview to Deutsche Welle. We’ll share it once it’s available online.
It’s great that this little passion project of ours is getting this attention, and I’m truly humbled by the high-quality feedback and engagement from our customers. What a lovely crowd!
Learning about Machine Learning
I’ve started Andrew Ng’s Machine Learning Stanford course on Coursera. Due to time constraints it’s slow going for me, and, as expected, it’s a bit math-heavy for my personal taste. But even if you don’t aim to implement any machine learning yourself, there’s a lot to take away. Two thumbs up.
Notes from a couple of events on responsible tech
Aspen Institute: I was kindly invited to an event by Aspen Institute Germany about the impact of AI on society and humanity. One panel stood out to me: it was about AI in the context of autonomous weapons systems. I was positively surprised to hear that:

- All panelists agreed that if autonomous weapons systems are deployed at all, it should only be with humans in the loop.
- There haven’t been significant cases of rogue actors deploying autonomous weapons, which strikes me as good to hear but also very surprising.
- A researcher from the Bundeswehr University Munich noted that autonomous systems introduce instability, raising the possibility of flash wars triggered by fully autonomous systems interacting with one another (like flash crashes in stock markets).
- In the backend of military logistics, machine learning appears to already be a big deal.
Digital Asia Hub & HiiG: Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (in the German original abbreviated as HiiG). It was part of a fact-finding trip to various regions and tech ecosystems to figure out which items are most important from a regulatory and policy perspective, and to feed the findings from these workshops into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn’t, so that’s cool.
Both are also covered in my newsletter, so I won’t just replicate everything here. You can dig into the archives from the last few weeks.
Thinking & writing
Season 3 of my somewhat more irreverent newsletter, Connection Problem, is coming up on its 20th issue. You can sign up here to see where my head is these days.
If you’d like to work with me in the upcoming months, I have very limited availability, but I’m happy to have a chat.
That’s it for today. Have a great Easter weekend and an excellent April!
Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.
Some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.
This isn’t an exhaustive list, of course, but it gives you a hint about my headspace. Experience shows that this can serve as a solid early warning system for industry-wide debates, too.
Questions we’ve had on our collective minds:
1. What’s the relationship between (digital) technology and ethics/sustainability?
There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!
2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust.
Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust (and products’ trustworthiness) is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)
3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting?
It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.
4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as a no-go?
These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications for business, strategy, research, and policy. We’ll continue to dig in.
But also, besides these larger, more complex questions there are smaller, more concrete things to explore:
What are new emerging technologies? Where are exciting new opportunities?
What will happen as autonomous vehicles, solar power, and cryptocurrencies become more ubiquitous? What about LIDAR and Li-Fi?
How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.
If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.
Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.
PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and explorations of areas around tech, society, culture & business that I find relevant. If you want to watch my thinking unfold and mature, this is for you. You can subscribe here.
The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.
I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:
Focus areas over time (Image: The Waving Cat)
Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.
It’s worth noting that these timelines aren’t absolutes, either: I’ve done work around the implications of social media later than that, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.
So anyway, hope this is helpful to understand my work. As always, if you’d like to bounce ideas feel free to ping me.
This is just some of the emerging-tech news that crossed my screen within the last few hours. And it’s a tiny chunk. Meanwhile, a couple of weeks ago, a group of benevolent technologists gathered in Norway, on the set of Ex Machina, to discuss futures of AI.
In my work and research, AI and robotics have shot up to the top, right alongside the (all of a sudden much more tame-seeming) IoT.
This tracks completely with an impression I’ve had for the last couple of months: It seems we’re at some kind of inflection point. Out of the many parts of the puzzle called “emerging tech meets society”, the various bits and pieces have started linking up for real, and a clearer shape is emerging.
We know (most of) the major actors: The large tech companies (GAFAM & co) and their positions; which governments embrace—or shy away from—various technologies, approaches, and players; even civil society has been re-arranging itself.
We see old alliances break apart, and new ones emerge. It’s still very, very fluid. But the opening moves are now behind us, and the game is afoot.
The next few years will be interesting, that much is certain. Let’s get them right. There’s lots to be done.
At their Pixel 2 event at the beginning of the month, Google released a whole slew of new products. Besides new phones, there was an updated version of their smart home hub, Google Home, and some entirely new types of products.
I don’t usually write about product launches, but this event has me excited about new tech for the first time in a long time. Why? Because some aspects stood out as signs of a larger shift in the industry: the new role of artificial intelligence (AI) as it seeps into consumer goods.
Google has been reframing itself from a mobile-first to an AI-first company for the last year or so. (For full transparency I should add that I’ve worked with Google occasionally in the recent past, but everything discussed here is of course publicly available.)
We now see this shift of focus play out as it manifests in products.
Here’s Google CEO Sundar Pichai at the opening of Google’s Pixel 2 event:
We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work. (…) We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts with AI.
AI as a platform: Google has it.
First things first: I fully agree – there’s currently no other company that’s as well positioned to drive the development of AI, or to benefit from it. In fact, back in May 2017 I wrote that “Google just won the next 10 years.” That was when Google had just hinted at their capabilities in terms of new features, but also announced they were building AI infrastructure for third parties to use.
Before diving into some structural thoughts, let’s look at two specific products they launched:
Google Clips is a camera you can clip somewhere, and it’ll automatically take photos when certain conditions are met: a particular person’s face is in the picture, or they are smiling. It’s an odd product for sure, but here’s the thing: it’s fully machine-learning-powered facial recognition, and the computing happens on the device. This is remarkable both as a technical achievement and for its approach. Google has become a highly centralized company (the bane of cloud computing, I’d lament). Google Clips works at the edge, decentralized. This is powerful, and I hope it inspires a new generation of IoT products that embrace decentralization.
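For intuition, here’s a minimal sketch of what a Clips-style on-device decision loop could look like. The `Face` fields stand in for the output of a hypothetical on-device recognition model; nothing here is a real Google API, and the point is that all of this logic can run locally:

```python
from dataclasses import dataclass

@dataclass
class Face:
    identity: str   # stand-in for on-device face recognition output
    smiling: bool   # stand-in for on-device expression detection

def should_capture(faces, known_faces):
    """Decide locally whether a frame is worth keeping.

    In a Clips-style device this runs entirely on the hardware:
    no image data ever has to leave the camera.
    """
    return any(f.identity in known_faces or f.smiling for f in faces)

# A frame showing a familiar (if unsmiling) face gets kept:
print(should_capture([Face("alice", False)], {"alice"}))  # True
```

The design choice worth noting is that the decision function only ever sees locally computed features, never raw frames leaving the device.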
Google’s new in-ear headphones offer live translation. That’s right: these headphones should allow for multi-language, human-to-human live conversations. (This happens in the cloud, not locally.) How well this works in practice remains to be seen, and surely you wouldn’t want to run a work meeting through them. But even if it eases travel-related helplessness just a bit, it’d be a big deal.
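As a rough sketch, a cloud-based live-translation pipeline like this presumably chains three stages: speech-to-text, machine translation, and text-to-speech. The stage functions below are hypothetical placeholders, not Google’s actual APIs:

```python
def translate_speech(audio, src_lang, dst_lang,
                     transcribe, translate, synthesize):
    """Chain the three cloud stages of a live-translation pipeline.

    The stage functions are injected so the sketch stays agnostic
    about which (hypothetical) cloud services back them.
    """
    text = transcribe(audio, src_lang)                # speech-to-text
    translated = translate(text, src_lang, dst_lang)  # machine translation
    return synthesize(translated, dst_lang)           # text-to-speech

# Wiring it up with dummy stages just to show the data flow:
result = translate_speech(
    b"...audio...", "de", "en",
    transcribe=lambda audio, lang: "hallo welt",
    translate=lambda text, src, dst: "hello world",
    synthesize=lambda text, lang: f"[audio:{text}]",
)
print(result)  # [audio:hello world]
```

Each hop adds network latency, which is presumably why conversational rather than real-time simultaneous translation is the realistic use case.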
So as we see these new products roll out, the actual potential becomes much more graspable. There’s a shape emerging from the fog: Google may not really be AI first just yet, but they certainly have made good progress on AI-leveraged services.
The mental model I’m using for how Apple and Google compare is this:
Apple’s ecosystem focuses on integration: hardware (phones, laptops) and software (OSX, iOS) are both highly integrated, and services are built on top. This allows for consistent service delivery and for pushing the limits of hardware and software alike, and, most importantly for Apple’s bottom line, it allows Apple to sell hardware that’s differentiated by software and services: nobody else is allowed to make an iPhone.
Google started at the opposite side, with software (web search, then Android). Today, Google looks something like this:
Based on software (search/discovery, plus Android) now there’s also hardware that’s more integrated. Note that Android is still the biggest smartphone platform as well as basis for lots of connected products, so Google’s hardware isn’t the only game in town. How this works out with partners over time remains to be seen. That said, this new structure means Google can push its software capabilities to the limits through their own hardware (phones, smart home hubs, headphones, etc.) and then aim for the stars with AI-leveraged services in a way I don’t think we’ll see from competitors anytime soon.
What we’ve seen so far is the very tip of the iceberg: As Google keeps investing in AI and exploring the applications enabled by machine learning, this top layer should become exponentially more interesting: They develop not just the concrete services we see in action, but also use AI to build their new models, and open up AI as a service for other organizations. It’s a triple AI ecosystem play that should reinforce itself and hence gather more steam the more it’s used.
AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.
This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.
For the record: in other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street, and regulates them well! Key points include that autonomous driving is worth promoting because it causes fewer accidents, that damage to property must take precedence over personal injury (i.e. life has priority), and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!
On the other hand, when it comes to the Internet of Things (IoT), Germany has squandered opportunities: IoT is framed almost exclusively as industrial IoT under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of super interesting and promising IoT. It’s clearly the result of successful lobbying, but it comes at the expense of a more inclusive, diverse portfolio of opportunities.
So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.
Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (“learning systems platform”). Thanks to Christian Katzenbach for the pointer!
AI & the future of work
The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: It’s one of the key aspects of AI. (Some others are, I’d say, opportunity for the creation of wealth on one side and algorithmic discrimination on the other.)
How AI will impact the work force, the economy, and the role of the individual is something we can only speculate about today.
In a recent workshop with stipendiaries of the Heinrich-Böll-Foundation on the future of work we explored how digital, AI, IoT and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field also have been finding: That there are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.
The one thing I’d say is a safe bet is this: Like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.
Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: future jobs will be placed either above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course conjures up images of roboticized warehouses, like this one:
Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future, this is the well-established present.
I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.
Education is key
So where are we in Europe again? In Germany, we’re still trying to define what IoT and AI mean. In China, it’s been happening for years.
This picture shows a smart lamp in Shenzhen that we found in a maker space:
What does the lamp do? It tracks whether users are nearby, so it can switch itself off when nobody’s around. It automatically adjusts the color temperature depending on the light in the room. As smart lamps go, these features are okay: not horrible, not interesting. If it came out of Samsung or LG or Amazon, I wouldn’t be surprised.
So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: ten- and eleven-year-olds designed, programmed, and built this, because the curriculum for local students includes the skills that enable them to do it. In Europe, this is unheard of.
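The lamp’s behavior as described is simple enough to sketch in a few lines, which underlines the point: the hard part is getting these skills onto the curriculum, not the logic itself. The lux threshold and Kelvin values below are made up for illustration:

```python
def lamp_state(person_nearby: bool, ambient_lux: float):
    """Return (on, color_temp_kelvin) for a lamp like the one described.

    Switches off when nobody is around; otherwise picks a warmer
    light in dim rooms and a cooler one in bright rooms.
    The 150 lux threshold and the Kelvin values are illustrative.
    """
    if not person_nearby:
        return (False, None)
    color_temp = 2700 if ambient_lux < 150 else 4000
    return (True, color_temp)

# Someone is nearby in a dim room, so the lamp goes warm:
print(lamp_state(True, 80))  # (True, 2700)
```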
I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.
It’s about time we start getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to legalize AI, and they’re smartly kicking this off with a multi-stakeholder process.
What to do?
In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.
The White House report can provide some inspiration, especially around education policy.
We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get young people ready for the future, with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta-skills to make smart decisions (think humanities, history, deep learning).
We need to reform immigration to allow for the best talent to come to Europe more easily (and allow for voting rights, too, because nobody feels at home where they pay taxes with no representation).
Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.
This requires capacity building on the side of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.
But I know one thing: We need to kick off this process today.