Trust Indicators for Emerging Technologies

For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.

Over the last few months, it has become clear that there's demand for similar trust indicators in areas beyond consumer products: smart cities, artificial intelligence, and other areas of emerging technology.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology—infrastructure!—means that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there can be real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in the case of consumer IoT, mostly for the affluent. (Meaning that if we’re talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture, with vastly different threat models and potential damage. Infrastructure that not everybody can rely on—equally, and all the time—would not just be annoying; it could be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs may differ.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work, even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience.
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

“The world doesn’t know where it wants to go”

Image: Compass by Valentin Antonucci (Unsplash)

One of the joys of working at the intersection of emerging tech and its impact is that I get to discuss things that are, by definition, cutting edge with people from entirely different backgrounds—like recently with my dad. He’s 77 years old and has a background in business, not tech.

We chatted about IoT, and voice-enabled connected devices, and the tradeoffs they bring between convenience and privacy. How significant chunks of the internet of things are optimized for costs at the expense of privacy and security. How IoT is, by and large, a network of black boxes.

When I tried to explain why I think we need a trustmark for IoT (which I’m building with ThingsCon and as a Mozilla fellow)—especially regarding voice-enabled IoT—he listened intently, thought about it for a moment, and then said:

“We’re at a point in time where the world doesn’t know where it wants to go.”

And somehow that exactly sums it up, ever so much more eloquently than I could have phrased it.

Only I’m thinking: Even though I can’t tell where the world should be going, I think I know where to plant our first step—and that is, towards a more transparent and trustworthy IoT. I hope the trustmark can be our compass.

Monthnotes for March 2018

Before we’re headed into the long Easter Holiday weekend, a quick rundown of what happened in March.

Mozilla Fellowship & an open trustmark for IoT

I’m happy to share that I’ve joined the Mozilla Fellows program (concretely, the IoT fellows group to work with Jon Rogers and Julia Kloiber), and that Mozilla supports the development of an open trustmark for IoT under the ThingsCon umbrella.

There’s no doubt going to be a more formal announcement soon, but here’s the shortest of blog posts over on ThingsCon.

(As always, a full disclosure: My partner works for Mozilla.)

I had already shared first thoughts on the IoT trustmark. We’ll have a lot more to share on the development of the trustmark now that it’s becoming more official. You can follow along here and over on the ThingsCon blog.

By the way, this needs a catchy name. Hit me up if you have one in mind we could use!

Zephyr interviews: The Craftsman, Deutsche Welle

We were humbled and delighted that Gianfranco Chicco covered Zephyr Berlin in the recent issue of his most excellent newsletter, The Craftsman. Links and some background here.

We also had an interview with Deutsche Welle. We’ll share it once it’s available online.

It’s great that this little passion project of ours is getting this attention, and we’re truly humbled by the super high quality feedback and engagement from our customers. What a lovely crowd!

Learning about Machine Learning

I’ve started Andrew Ng’s Machine Learning Stanford course on Coursera. Due to time constraints it’s slow going for me, and as expected, it’s a bit math-heavy for my personal taste. But even if you don’t aim to implement any machine learning code yourself, there’s a lot to take away. Two thumbs up.

Notes from a couple of events on responsible tech

Aspen Institute: I was kindly invited to an event by Aspen Institute Germany about the impact of AI on society and humanity. One panel stood out to me: It was about AI in the context of autonomous weapons systems. I was positively surprised to hear that

  1. All panelists agreed that if autonomous weapons systems are deployed at all, it should only be with humans in the loop.
  2. There haven’t been significant cases of rogue actors deploying autonomous weapons, which strikes me as good to hear but also very surprising.
  3. A researcher from the Bundeswehr University Munich pointed out that autonomous systems introduce instability, raising the possibility of flash wars triggered by fully autonomous systems interacting with one another (like flash crashes in stock markets).
  4. In the backend of military logistics, machine learning appears to already be a big deal.

Digital Asia Hub & HiiG: Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (abbreviated as HiiG, from the German original). It was part of a fact-finding trip to various regions and tech ecosystems to figure out which items are most important from a regulatory and policy perspective, and to feed the findings from these workshops into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn’t—so that’s cool.

Both are also covered in my newsletter, so I won’t just replicate everything here. You can dig into the archives from the last few weeks.

Thinking & writing

Season 3 of my somewhat more irreverent newsletter, Connection Problem, is coming up on its 20th issue. You can sign up here to see where my head is these days.

If you’d like to work with me in the upcoming months, I have very limited availability but am happy to have a chat.

That’s it for today. Have a great Easter weekend and an excellent April!

The key challenge for the industry in the next 5 years is consumer trust

Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Here are some questions I’ve been pondering, and that we’ve been exploring in conversations with our peer group day in, day out.

This isn’t an exhaustive list, of course, but it gives you a hint about my headspace—experience shows that this can serve as a solid early warning system for industry-wide debates, too. Questions we’ve had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: consumer trust. Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust—and products’ trustworthiness—is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as a no-go?

These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications on business, strategy, research, policy. We’ll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • What are new emerging technologies? Where are exciting new opportunities?
  • What will happen as autonomous vehicles, solar power, and cryptocurrencies become more ubiquitous? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and where I explore areas around tech, society, culture & business that I find relevant. To watch my thinking unfolding and maturing, this is for you. You can subscribe here.

Focus areas over time

The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.

I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:

Focus areas over time (Image: The Waving Cat)

Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.

It’s worth noting that these timelines aren’t absolutes, either: I’ve done work on the implications of social media more recently than the chart suggests, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.

So anyway, hope this is helpful to understand my work. As always, if you’d like to bounce ideas feel free to ping me.

AI, IoT, Robotics: We’re at an inflection point for emerging technologies

The UAE appointed a minister for AI. Saudi Arabia announced plans to build a smart city 33x the size of New York City, and granted “citizenship” to a robot.

These are just a few of the emerging tech news items that crossed my screen within the last few hours. And it’s a tiny chunk. Meanwhile, a couple of weeks ago, a group of benevolent technologists gathered in Norway, on the set of Ex Machina, to discuss futures of AI.

In my work and research, AI and robotics have shot up to the top, right alongside the (all of a sudden much more tame-seeming) IoT.

This tracks completely with an impression I’ve had for the last couple of months: It seems we’re at some kind of inflection point. Out of the many parts of the puzzle called “emerging tech meets society”, the various bits and pieces have started linking up for real, and a clearer shape is emerging.

We know (most of) the major actors: The large tech companies (GAFAM & co) and their positions; which governments embrace—or shy away from—various technologies, approaches, and players; even civil society has been re-arranging itself.

We see old alliances break apart, and new ones emerge. It’s still very, very fluid. But the opening moves are now behind us, and the game is afoot.

The next few years will be interesting, that much is certain. Let’s get them right. There’s lots to be done.