
Trust Indicators for Emerging Technologies


For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.

Over the last few months, it has become clear that there’s demand for similar trust indicators beyond consumer products: in areas like smart cities, artificial intelligence, and other emerging technologies.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology—infrastructure!—is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there’s real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in the consumer IoT space, mostly for the affluent. (Meaning that if we’re talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture with vastly different threat models and potential damage. Infrastructure that not everybody can rely on—equally, and all the time—would not just be annoying, it might be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs might be different ones.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work, even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience.
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

Monthnotes for March 2018


Before we’re headed into the long Easter Holiday weekend, a quick rundown of what happened in March.

Mozilla Fellowship & an open trustmark for IoT

I’m happy to share that I’ve joined the Mozilla Fellows program (concretely, the IoT fellows group, where I’ll work with Jon Rogers and Julia Kloiber), and that Mozilla supports the development of an open trustmark for IoT under the ThingsCon umbrella.

There’s no doubt going to be a more formal announcement soon, but here’s the shortest of blog posts over on ThingsCon.

(As always, a full disclosure: My partner works for Mozilla.)

I had already shared first thoughts on the IoT trustmark. We’ll have a lot more to share on the development of the trustmark now that it’s becoming more official. You can follow along here and over on the ThingsCon blog.

By the way, this needs a catchy name. Hit me up if you have one in mind we could use!

Zephyr interviews: The Craftsman, Deutsche Welle

We were humbled and delighted that Gianfranco Chicco covered Zephyr Berlin in the recent issue of his most excellent newsletter, The Craftsman. Links and some background here.

We also had an interview with Deutsche Welle. We’ll share it once it’s available online.

It’s great that this little passion project of ours is getting this attention, and we’re truly humbled by the high-quality feedback and engagement from our customers. What a lovely crowd!

Learning about Machine Learning

I’ve started Andrew Ng’s Machine Learning Stanford course on Coursera. Due to time constraints it’s slow going for me, and as expected it’s a bit math-heavy for my personal taste. But even if you don’t necessarily aim to implement any machine learning or code to that effect, there’s a lot to take away. Two thumbs up.
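For a flavor of what the math-heavy opening of the course looks like in practice, here’s a minimal sketch (my own, not course material) of the first technique it teaches: fitting a line with batch gradient descent.

```python
# Minimal sketch: linear regression by batch gradient descent,
# the technique the course opens with. Toy data, not course code.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 50)
y = 3 * x + 1 + rng.normal(0, 0.5, 50)  # roughly y = 3x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    error = (w * x + b) - y
    # Gradient steps on the mean squared error cost
    w -= lr * np.mean(error * x)
    b -= lr * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")  # should land near w=3, b=1
```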

Notes from a couple of events on responsible tech

Aspen Institute: I was kindly invited to an event by Aspen Institute Germany about the impact of AI on society and humanity. One panel stood out to me: It was about AI in the context of autonomous weapons systems. I was positively surprised to hear that

  1. All panelists agreed that if autonomous weapons systems are used at all, it should only be with humans in the loop.
  2. There haven’t been significant cases of rogue actors deploying autonomous weapons, which strikes me as good to hear but also very surprising.
  3. A researcher from the Bundeswehr University Munich pointed out that autonomous systems introduce instability, raising the possibility of flash wars triggered by fully autonomous systems interacting with one another, much like flash crashes in stock markets (see the toy sketch after this list).
  4. In the backend of military logistics, machine learning appears to already be a big deal.
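To make the flash-war point concrete, here’s a toy model (mine, not the researcher’s) of two automated systems that each respond to the other’s posture with a slight amplification. A negligible initial provocation escalates at machine speed, with no human decision anywhere in the loop:

```python
# Toy model of runaway feedback between two automated systems.
# Each side's new posture adds a response proportional to the other's.
def step(a, b, gain=0.6):
    return a + gain * b, b + gain * a

a, b = 0.01, 0.0  # a tiny initial provocation on one side
for t in range(1, 16):
    a, b = step(a, b)
    print(f"round {t:2d}: posture A = {a:10.4f}, posture B = {b:10.4f}")
# Postures grow exponentially (roughly 1.6x per round): the same
# dynamic as a flash crash, but between weapons systems.
```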

Digital Asia Hub & HiiG: Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (abbreviated HiiG, after its German name). It was part of a fact-finding trip to various regions and tech ecosystems to figure out which items are most important from a regulatory and policy perspective, and to feed the findings from these workshops into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn’t, so that’s cool.

Both are also covered in my newsletter, so I won’t just replicate everything here. You can dig into the archives from the last few weeks.

Thinking & writing

Season 3 of my somewhat more irreverent newsletter, Connection Problem, is coming up on its 20th issue. You can sign up here to see where my head is these days.

If you’d like to work with me in the upcoming months, I have very limited availability but am happy to have a chat.

That’s it for today. Have a great Easter weekend and an excellent April!

The key challenge for the industry in the next 5 years is consumer trust


Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Here are some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.

This isn’t an exhaustive list, of course, but it gives you a hint about my headspace. Experience shows that this can serve as a solid early warning system for industry-wide debates, too. Questions we’ve had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust. Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust—and products’ trustworthiness—is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day (see the sketch after this list), we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that, but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as a no-go?
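On the point in (3) that machine learning keeps getting easier and cheaper to deploy, here’s roughly what the barrier to entry looks like today: a complete, working classifier in about ten lines of off-the-shelf scikit-learn (illustrative, not production advice).

```python
# Illustration of how low the barrier to entry has become:
# a trained, evaluated classifier in about ten lines.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```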

These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications for business, strategy, research, and policy. We’ll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • Which new technologies are emerging? Where are exciting new opportunities?
  • What will happen as autonomous vehicles, solar power, and cryptocurrencies become more ubiquitous? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and explorations of areas around tech, society, culture & business that I find relevant. If you want to watch my thinking unfold and mature, this is for you. You can subscribe here.

Focus areas over time


The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.

I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:

Focus areas over time (Image: The Waving Cat)

Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.

It’s worth noting that these timelines aren’t absolutes, either: I’ve done work around the implications of social media later than that, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.

So anyway, hope this is helpful to understand my work. As always, if you’d like to bounce ideas feel free to ping me.

AI, IoT, Robotics: We’re at an inflection point for emerging technologies


The UAE appointed a minister for AI. Saudi Arabia announced plans to build a smart city 33x the size of New York City, and granted “citizenship” to a robot.

These are just some of the emerging tech news items that crossed my screen within the last few hours. And it’s a tiny chunk. Meanwhile, a couple of weeks ago, a group of benevolent technologists gathered in Norway, on the set of Ex Machina, to discuss futures of AI.

In my work and research, AI and robotics have shot up to the top, right alongside the (all of a sudden much more tame-seeming) IoT.

This tracks completely with an impression I’ve had for the last couple of months: It seems we’re at some kind of inflection point. Out of the many parts of the puzzle called “emerging tech meets society”, the various bits and pieces have started linking up for real, and a clearer shape is emerging.

We know (most of) the major actors: The large tech companies (GAFAM & co) and their positions; which governments embrace—or shy away from—various technologies, approaches, and players; even civil society has been re-arranging itself.

We see old alliances break apart, and new ones emerging. It’s still very, very fluid. But the opening moves are now behind us, and the game is afoot.

The next few years will be interesting, that much is certain. Let’s get them right. There’s lots to be done.

Google’s new push to AI-powered services


At their Pixel 2 event at the beginning of the month, Google released a whole slew of new products. Besides new phones, there were updated versions of their smart home hub, Google Home, and some altogether new types of products.

I don’t usually write about product launches, but this event has me excited about new tech for the first time in a long time. Why? Because some aspects stood out as markers of a larger shift in the industry: the new role of artificial intelligence (AI) as it seeps into consumer goods.

Google has been reframing itself from a mobile-first to an AI-first company for the last year or so. (For full transparency I should add that I’ve worked with Google occasionally in the recent past, but everything discussed here is, of course, publicly available.)

We now see this shift of focus play out as it manifests in products.

Here’s Google CEO Sundar Pichai at the opening of Google’s Pixel 2 event:

We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work. (…) We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts with AI.
First things first: I fully agree – there’s currently no other company that’s as well positioned to drive the development of AI, or to benefit from it. In fact, back in May 2017 I wrote that “Google just won the next 10 years.” That was when Google had just hinted at their capabilities in terms of new features, but also announced building AI infrastructure for third parties to use. AI as a platform: Google has it.

Before diving into some structural thoughts, let’s look at two specific products they launched:

  1. Google Clips is a camera you can clip somewhere, and it’ll automatically take photos when certain conditions are met: a certain person’s face is in the picture, or they are smiling. It’s an odd product for sure, but here’s the thing: it’s fully machine-learning-powered facial recognition, and the computing happens on the device (see the sketch after this list). This is remarkable both as a technical achievement and for its approach. Google has become a highly centralized company—the bane of cloud computing, I’d lament. Google Clips works at the edge, decentralized. This is powerful, and I hope it inspires a new generation of IoT products that embrace decentralization.
  2. Google’s new in-ear headphones offer live translation. That’s right: these headphones should be able to allow for multi-language, human-to-human live conversations. (This happens in the cloud, not locally.) How well this works in practice remains to be seen, and surely you wouldn’t want to run a work meeting through them. But even if it eases travel-related helplessness just a bit, it’d be a big deal.
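To illustrate what “the computing happens on the device” means in practice, here’s a minimal sketch of on-device inference. It assumes a hypothetical compact model file (face_detector.tflite) and uses TensorFlow Lite’s interpreter; Clips’ actual internals aren’t public, so treat this as an analogy, not a description of the product.

```python
# Sketch of edge inference: the model runs locally, so no image
# ever leaves the device. "face_detector.tflite" is a placeholder
# for whatever compact model ships on the hardware.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="face_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=np.float32)  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # all computation happens on-device
print("face score:", interpreter.get_tensor(out["index"]))
```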

So as we see these new products roll out, the actual potential becomes much more graspable. There’s a shape emerging from the fog: Google may not really be AI-first just yet, but they’ve certainly made good progress on AI-leveraged services.

The mental model I’m using for how Apple and Google compare is this:


The Apple model

Apple’s ecosystem focuses on integration: hardware (phones, laptops) and software (OSX, iOS) are both highly integrated, and services are built on top. This allows for consistent service delivery and for pushing the limits of hardware and software alike, and, most importantly for Apple’s bottom line, allows Apple to sell hardware that’s differentiated by software and services: nobody else is allowed to make an iPhone.

Google started on the opposite side, with software (web search, then Android). Today, Google looks something like this:


The Google model

Based on software (search/discovery, plus Android), now there’s also hardware that’s more integrated. Note that Android is still the biggest smartphone platform as well as the basis for lots of connected products, so Google’s hardware isn’t the only game in town. How this works out with partners over time remains to be seen. That said, this new structure means Google can push its software capabilities to the limits through their own hardware (phones, smart home hubs, headphones, etc.) and then aim for the stars with AI-leveraged services in a way I don’t think we’ll see from competitors anytime soon.

What we’ve seen so far is the very tip of the iceberg: as Google keeps investing in AI and exploring the applications enabled by machine learning, this top layer should become exponentially more interesting. Google develops not just the concrete services we see in action, but also uses AI to build its new models, and opens up AI as a service for other organizations. It’s a triple AI ecosystem play that should reinforce itself and hence gather more steam the more it’s used.

This offers tremendous opportunities and challenges. So while it’s exciting to see this unfold, we need to get our policies ready for futures with AI.

Please note that this is cross-posted from Medium. Disclosure: I’ve worked with Google a few times in the recent past.

Getting our policies ready for AI futures


In late 2016, the White House published a report, “Artificial Intelligence, Automation, and the Economy” (PDF). It’s a solid work of research and forecasting, and proposes equally solid policy recommendations. Here’s part of the framing, from the report’s intro:

AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.

This cuts right to the chase: Artificial intelligence (AI) will create wealth, and it will replace jobs. AI will change the future of work, and the economy.


Revisiting this report made me wonder whether similar policy research exists in Germany and at the European level. A quick search online brought up bits and pieces (Merkel arguing for bundling know-how for AI and acknowledging that AI spending is low in Europe, demands for transparency in algorithms). However, there doesn’t seem to be an overarching guiding policy. (I asked federal government spokesperson Steffen Seibert on Twitter, but so far he hasn’t responded. Which is fair—why would he!)

Germany has a mixed track record of tech policy

For the record: in other areas, Germany is making good progress. Take autonomous driving, for example. Germany just adopted an action plan on automated driving that regulates key points of how autonomous vehicles should behave on the street—and regulates them well! Key points include that autonomous driving is worth promoting because it causes fewer accidents, that damage to property must take precedence over personal injury (in other words, life has priority), and that in unavoidable accident situations there may not be any discrimination between individuals based on age, gender, etc. It even includes data sovereignty for drivers. Well done!

On the other hand, for the Internet of Things (IoT), Germany has squandered opportunities in that IoT is framed almost exclusively as industrial IoT under the banner of Industrie 4.0. This is understandable given Germany’s manufacturing-focused economy, but it excludes a huge amount of super interesting and promising IoT. It’s clearly the result of successful lobbying, but at the expense of a more inclusive, diverse portfolio of opportunities.

So where do we stand with artificial intelligence in Germany? Honestly, in terms of policy I cannot tell.


Update: The Federal Ministry of Education and Research recently announced an initiative to explore AI: Plattform Lernende Systeme (“learning systems platform”). Thanks to Christian Katzenbach for the pointer!

AI & the future of work

The White House AI report talks a lot about the future of work, and of employment specifically. This makes sense: it’s one of the key aspects of AI. (Some others are, I’d say, the opportunity to create wealth on one side and algorithmic discrimination on the other.)

How AI will impact the workforce, the economy, and the role of the individual is something we can only speculate about today.

In a recent workshop with scholarship holders of the Heinrich Böll Foundation on the future of work, we explored how digital, AI, IoT and adjacent technologies impact how we work, and how we think about work. It was super interesting to see this diverse group of very, very capable students and young professionals bang their heads against the complexities in this space. Their findings mirrored what experts across the field have also been finding: that there are no simple answers, and most likely we’ll see huge gains in some areas and huge losses in others.


The one thing I’d say is a safe bet: like all automation before, depending on the context we’ll see AI either displace human workers or increase their productivity. In other words, some human workers will be super-powered by AI (and related technologies), whereas others will fall by the wayside.

Over on Ribbonfarm, Venkatesh Rao phrases this very elegantly: Future jobs will either be placed above or below the API: “You either tell robots what to do, or are told by robots what to do.” Which of course calls to mind images of roboticized warehouses, like this one:

Just to be clear, this is a contemporary warehouse in China. Amazon runs similar operations. This isn’t the future, this is the well-established present.


I’d like to stress that I don’t think a robot warehouse is inherently good or bad. It depends on the policies that make sure the humans in the picture do well.
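To make the above/below-the-API metaphor concrete, here’s a toy sketch (all names invented, not any real system’s interface): whoever writes the dispatch logic works above the API; the workers below it simply receive machine-issued instructions.

```python
# Toy illustration of "above vs. below the API". The engineer who
# writes dispatch() works above the API; the workers receive its
# instructions from below. All names here are invented.
from dataclasses import dataclass

@dataclass
class Task:
    item: str
    shelf: str

def dispatch(tasks, workers):
    # Software decides who does what, and in what order.
    for worker, task in zip(workers, sorted(tasks, key=lambda t: t.shelf)):
        print(f"{worker}: fetch {task.item} from shelf {task.shelf}")

dispatch([Task("router", "B2"), Task("toy", "A1")],
         ["worker_1", "worker_2"])
```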

Education is key

So where are we in Europe again? In Germany, we’re still trying to define what IoT and AI mean. In China, it’s been happening for years.

This picture shows a smart lamp in Shenzhen that we found in a maker space:

What does the lamp do? It tracks if users are nearby, so it can switch itself off when nobody’s around. It automatically adjusts the light’s color temperature depending on the light in the room. As smart lamps go, these features are okay: not horrible, not interesting. If it came out of Samsung or LG or Amazon, I wouldn’t be surprised.

So what makes it special? This smart lamp was built by a group of fifth graders. That’s right: ten- and eleven-year-olds designed, programmed, and built this, because the curriculum for local students includes the skills that enable them to do so. In Europe, this is unheard of.
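That claim is less surprising once you see how little logic such a lamp actually needs. Here’s a rough sketch of a control loop like the one described above (sensor functions are stubbed out; on real hardware they’d read a motion sensor and an ambient-light sensor):

```python
# Rough sketch of the lamp's control loop. The sensor reads are
# stubbed with random values; real hardware would poll a motion
# (PIR) sensor and an ambient-light sensor instead.
import random
import time

def presence_detected():
    return random.random() < 0.5  # stub: is anyone nearby?

def ambient_light():
    return random.random()        # stub: 0.0 (dark) to 1.0 (bright)

def lamp_state():
    if not presence_detected():
        return "off"              # nobody around: switch off
    # Warmer color temperature in a dark room, cooler in a bright one.
    kelvin = 2700 + int(ambient_light() * (6500 - 2700))
    return f"on at {kelvin}K"

for _ in range(5):
    print(lamp_state())
    time.sleep(0.1)
```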

I think the gap in skills regarding artificial intelligence is most likely quite similar. And I’m not just talking about the average individual: I’m talking about readiness at the government level, too. Our governments aren’t ready for AI.


It’s about time we start getting ready for AI, IoT, and robotics. Always a fast mover, Estonia is considering a law to legalize AI, and is smartly kicking off the effort with a multi-stakeholder process.

What to do?

In Germany, the whole discussion is still in its earliest stages. Let’s not fall into the same trap as we did for IoT: Both IoT and AI are more than just industry. They are both broader and deeper than the adjective industrial implies.

The White House report can provide some inspiration, especially around education policy.

We need to invest in what the OECD calls active labor market policies, i.e. training and skill development for adults. We need to update our school curricula to get young people ready for the future, with both hands-on applicable skills (coding, data analysis, etc.) and the larger contextual meta-skills to make smart decisions (think humanities, history, deep learning).

We need to reform immigration to allow for the best talent to come to Europe more easily (and allow for voting rights, too, because nobody feels at home where they pay taxes with no representation).


Zooming out to the really big picture, we need to start completely reforming our social security systems for an AI world that might not deliver full employment ever again. This could include Universal Basic Income, or maybe rather Universal Basic Services, or a different approach altogether.

This requires capacity building on the part of our government. Without capacity building, we’ll never see the digital transformation we need to get ready for the 21st century.

But I know one thing: We need to kick off this process today.

///

Please note: This is cross-posted from Medium.