
Cost-benefit analysis, Data-Driven Infrastructure edition


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:04.

It’s common to make (business, policy…) decisions by performing a cost-benefit analysis of some sort. Sometimes this is done via a rigorous process, sometimes it’s ballparked, and depending on the context, that’s OK.

One thing is pretty constant: In a cost-benefit analysis you traditionally work on the basis of reasonably expected costs and reasonably expected benefits. If the benefits outweigh the costs, green light.

Now, I’d argue that for data-driven infrastructure(-ish) projects, we need to set a higher bar.

By data-driven infrastructure I mean infrastructure(ish) things like digital platforms, smart city projects, and so on that collect and process data, and that feed into or serve as AI or algorithmic decision-making (ADM) systems. This may increasingly include what’s traditionally covered under the umbrella of critical infrastructure, but it extends well beyond that.

For this type of data-driven infrastructure (DDI), we need a different balance. Or, maybe even better, we need a more thorough understanding of what can be reasonably expected.

I argue that for DDI, guaranteed improvement must outweigh the worst case scenario risks.

If the last decade has shown us anything, it’s that data-driven infrastructure will be abused to its full potential.

From criminals to commercial and governmental actors, from legitimate to rogue: if there is valuable data, we’ll see strong interest in that honey pot. Hence, we need to assume at least some of those actors will get access to it. So whatever could happen when they do (which would differ dramatically depending on which type, or combination of types, of actor gets in, obviously) is what we have to factor in. We also have to factor in the opportunity cost, the expertise drain, and the newly introduced dependencies that come with vendor lock-in.

All of this — that level of failure — should be the new “reasonable” expectation on the cost side.

But in order to make semantic capture of the term “reasonable” a little bit harder, I’m proposing to be very explicit about what we mean by this:

So instead of “Let’s compare what happens if things go kinda-sorta OK on the benefit side and only go kinda-sorta wrong on the cost side”, let’s say “the absolutely guaranteed improvements on the benefit side must significantly outweigh the worst case failure modes on the costs side.”

For DDI, let’s work with aggressive-pessimistic scenarios for the costs/risk side, and conservative scenarios for the benefit side. The more critical the infrastructure, the more thorough we need to be.
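
To make that comparison concrete, here is a minimal sketch in code of how such a rule could look. This is purely illustrative: the function, the margin factor, and the numbers are assumptions of mine, not a real assessment methodology.

```python
# Minimal sketch of the proposed decision rule (illustrative assumptions only):
# compare a conservative estimate of guaranteed benefits against an
# aggressive-pessimistic, worst-case estimate of costs and risks.

def ddi_green_light(guaranteed_benefit: float,
                    worst_case_cost: float,
                    margin: float = 2.0) -> bool:
    """Green-light only if guaranteed benefits significantly outweigh
    worst-case costs (here: by an arbitrary illustrative margin)."""
    return guaranteed_benefit >= margin * worst_case_cost

# Made-up example numbers for a hypothetical smart city deployment:
benefit = 1_000_000  # conservative, essentially guaranteed yearly benefit
cost = 800_000       # worst case: breach cleanup, vendor lock-in, opportunity cost
print(ddi_green_light(benefit, cost))  # False: the worst case dominates
```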

That should make for a much more interesting debate, and certainly for more insightful scenario planning.

Nuclear disarmament but for surveillance tech?


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:03.

Surely the name Clearview AI has crossed your radar these last couple of weeks, since the NYTimes published this profile. In a nutshell, Clearview is a small company that has built a facial recognition system seeded with a ton of faces scraped from the big social media sites, and that builds from there through user uploads. Take a photo of a person, see all their matches in photos, social media profiles, and so on.

It’s clearly a surveillance machine, and just as clearly doesn’t bother with things like consent.

Maybe unsurprisingly, the main customer base is law enforcement, even though Clearview is entirely proprietary (i.e. no oversight of how it works), unstudied (there’s been no meaningful research to examine essential things like how many false positives the service generates) and quite possibly illegal (no consent from most people in that database).

The thing is: It’s now pretty simple to build something like this. So we’ll see many more just like it, unless we do something about it.

In other areas, we as a society recognized a disproportionate risk and built regulatory and enforcement mechanisms to prevent or at least manage those risks. Sometimes this works better than others, but this is how we got to nuclear disarmament, for example. Not a perfect system for sure, but it’s been kinda-sorta working. And kinda-sorta is sometimes really all you can hope for. At least we haven’t seen any rogue nuclear warheads explode in all the years since 1945, so that’s a success.

Now, facial recognition isn’t as immediately deadly as a nuclear bomb. But it’s very, very bad at scale. The potential for abuse is consistently so big that we might as well ban it outright. And while the immediate danger is lower than that of a nuclear bomb, the barrier to entry is just that much lower: The tooling and the knowledge are there, the training data is there, and it’s near-trivial to launch this type of product now. So we have to expect this to be a constant, serious threat for the rest of our lives.

So what mechanisms do we have to mitigate those risks? I argue we need an outright ban, also and especially in security contexts. GDPR and its mechanisms for requiring consent point in the right direction. The EU’s AI strategy (as per the recently leaked documents) considers such a ban, but with a (silly, really) exception for use in security contexts.

But let’s look at this and consider the real implications of facial recognition as a category; otherwise, we’ll be playing whack-a-mole for decades.

Trust Indicators for Emerging Technologies


For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.
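
As a small aside for the technically minded: here is a minimal sketch of how such an assessment could be represented in code. The field names follow the five indicators above, but the structure, the pass/fail rule, and the example values are my own illustration and not the Trustable Technology Mark’s actual questionnaire or scoring.

```python
# Illustrative sketch only: indicator names mirror the five dimensions above;
# the boolean scoring and the rule in trustworthy() are made-up assumptions.
from dataclasses import dataclass, field

@dataclass
class TrustAssessment:
    product: str
    indicators: dict = field(default_factory=lambda: {
        "privacy_and_data_practices": False,
        "transparency": False,
        "security": False,
        "stability": False,
        "openness": False,  # treated as an enabler, not a hard requirement
    })

    def trustworthy(self) -> bool:
        # Hypothetical rule: every indicator except openness must be met.
        required = {k: v for k, v in self.indicators.items() if k != "openness"}
        return all(required.values())

# Made-up example: a connected speaker that meets four of the five indicators.
assessment = TrustAssessment("connected speaker")
assessment.indicators.update(privacy_and_data_practices=True, transparency=True,
                             security=True, stability=True)
print(assessment.trustworthy())  # True
```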

Over the last few months, it has become clear that there’s demand for similar trust indicators beyond consumer products: for smart cities, artificial intelligence, and other areas of emerging technology.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology (infrastructure!) is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there can be real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality of life issue; and in the consumer IoT space, mostly for the affluent. (Meaning that if we’re talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture with vastly different threat models and potential damage. Infrastructure that not everybody can rely on—equally, and all the time—would not just be annoying, it might be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs might be different ones.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work, even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience.
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

Monthnotes for March 2018


Before we’re headed into the long Easter Holiday weekend, a quick rundown of what happened in March.

Mozilla Fellowship & an open trustmark for IoT

I’m happy to share that I’ve joined the Mozilla Fellows program (concretely, the IoT fellows group to work with Jon Rogers and Julia Kloiber), and that Mozilla supports the development of an open trustmark for IoT under the ThingsCon umbrella.

There’s no doubt going to be a more formal announcement soon, but here’s the shortest of blog posts over on ThingsCon.

(As always, a full disclosure: My partner works for Mozilla.)

I had already shared first thoughts on the IoT trustmark. We’ll have a lot more to share on the development of the trustmark now that it’s becoming more official. You can follow along here and over on the ThingsCon blog.

By the way, this needs a catchy name. Hit me up if you have one in mind we could use!

Zephyr interviews: The Craftsman, Deutsche Welle

We were humbled and delighted that Gianfranco Chicco covered Zephyr Berlin in the recent issue of his most excellent newsletter, The Craftsman. Links and some background here.

We also had an interview with Deutsche Welle. We’ll share it once it’s available online.

It’s great that this little passion project of ours is getting this attention, and we’re truly humbled by the super high quality feedback and engagement from our customers. What a lovely crowd!

Learning about Machine Learning

I’ve started Andrew Ng’s Machine Learning Stanford course on Coursera. Due to time constraints it’s slow going for me, and as expected, it’s a bit math heavy for my personal taste. But even if you don’t aim to implement any machine learning or code to that effect, there’s a lot to take away. Two thumbs up.

Notes from a couple of events on responsible tech

Aspen Institute: I was kindly invited to an event by Aspen Institute Germany about the impact of AI on society and humanity. One panel stood out to me: It was about AI in the context of autonomous weapons systems. I was positively surprised to hear that

  1. All panelists agreed that if autonomous weapons systems are deployed at all, it should only be with humans in the loop.
  2. There haven’t been significant cases of rogue actors deploying autonomous weapons, which strikes me as good to hear but also very surprising.
  3. A researcher from the Bundeswehr University Munich pointed out that introducing autonomous systems introduces instability, pointing out the possibility of flash wars triggered by fully autonomous systems interacting with one another (like flash crashes in stock markets).
  4. In the backend of military logistics, machine learning appears to already be a big deal.

Digital Asia Hub & HiiG: Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (in the German original abbreviated as HiiG). It was part of a fact finding trip to various regions and tech ecosystems to figure out which items are most important from a regulatory and policy perspective, and to feed the findings from these workshops into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn’t, so that’s cool.

Both are also covered in my newsletter, so I won’t just replicate everything here. You can dig into the archives from the last few weeks.

Thinking & writing

Season 3 of my somewhat more irreverent newsletter, Connection Problem, is coming up on its 20th issue. You can sign up here to see where my head is these days.

If you’d like to work with me in the upcoming months, I have very limited availability but am happy to have a chat.

That’s it for today. Have a great Easter weekend and an excellent April!

The key challenge for the industry in the next 5 years is consumer trust


Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.

This isn’t an exhaustive list, of course, but it gives you a hint about my headspace; experience shows that this can serve as a solid early warning system for industry wide debates, too. Questions we’ve had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust. Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust (and products’ trustworthiness) is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as no-go?

These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications for business, strategy, research, and policy. We’ll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • What are new emerging technologies? Where are exciting new opportunities?
  • What will happen due to more ubiquitous autonomous vehicles, solar power, crypto currencies? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and where I explore areas around tech, society, culture & business that I find relevant. To watch my thinking unfolding and maturing, this is for you. You can subscribe here.

Focus areas over time


The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.

I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:

Focus areas over time (Image: The Waving Cat)

Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.

It’s worth noting that these timelines aren’t absolutes, either: I’ve done work on the implications of social media later than the chart suggests, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.

So anyway, hope this is helpful to understand my work. As always, if you’d like to bounce ideas feel free to ping me.

AI, IoT, Robotics: We’re at an inflection point for emerging technologies


The UAE appointed a minister for AI. Saudi Arabia announced plans to build a smart city 33x the size of New York City, and granted “citizenship” to a robot.

These are just some of the emerging tech news items that crossed my screen within the last few hours. And it’s a tiny chunk. Meanwhile, a couple of weeks ago, a group of benevolent technologists gathered in Norway, on the set of Ex Machina, to discuss futures of AI.

In my work and research, AI and robotics have shot up to the top, right alongside the (all of a sudden much more tame-seeming) IoT.

This tracks completely with an impression I’ve had for the last couple of months: It seems we’re at some kind of inflection point. Out of the many parts of the puzzle called “emerging tech meets society”, the various bits and pieces have started linking up for real, and a clearer shape is emerging.

We know (most of) the major actors: The large tech companies (GAFAM & co) and their positions; which governments embrace—or shy away from—various technologies, approaches, and players; even civil society has been re-arranging itself.

We see old alliances break apart, and new ones emerge. It’s still very, very fluid. But the opening moves are now behind us, and the game is afoot.

The next few years will be interesting, that much is certain. Let’s get them right. There’s lots to be done.