
New Report: “Towards a European AI & Society Ecosystem”


I’m happy to share that a new report I had the joy and privilege to co-author with Leonie Beining and Stefan Heumann (both of Stiftung Neue Verantwortung) just came out. It’s titled:

“Towards a European AI & Society Ecosystem”

I’m including the executive summary below; you can find the full report here. The report is co-produced by Stiftung Neue Verantwortung and ThingsCon.

Here’s the executive summary:

Artificial Intelligence (AI) has emerged as a key technology that has gripped the attention of governments around the globe. The European Commission has made AI leadership a top priority. While seeking to strengthen research and commercial deployment of AI, Europe has also embraced the role of a global regulator of technology, and is currently the only region where a regulatory agenda on AI rooted in democratic values – as opposed to purely market or strategic terms – can be credibly formulated. And given the size of the EU’s internal market, this can be done with a reasonable potential for global impact. However, there is a gap between Europe’s lofty ambitions and its actual institutional capacity for research, analysis and policy development to define and shape the European way on AI guided by societal values and the public interest. Currently, the debate is mostly driven by industry, where most resources and capacity for technical research are located. European civil society organizations that study and address the social, political and ethical challenges of AI are not sufficiently consulted and struggle to have an impact on the policy debate. Thus, the EU’s regulatory ambition faces a serious problem: If Europe puts societal interests and values at the center of its approach towards AI, it requires robust engagement and relationships between governments and many diverse actors from civil society. Otherwise any claims regarding human-centric and trustworthy AI would come to nothing.

Therefore, EU policy-making capacity must be supported by a broader ecosystem of stakeholders and experts especially from civil society. This AI & Society Ecosystem, a subset of a broader AI Ecosystem that also includes industry actors, is essential in informing policy-making on AI, as well as holding the government to its self-proclaimed standard of promoting AI in the interest of society at large. We propose the ecosystem perspective, originating from biology and already applied in management and innovation studies (also with regard to AI). It captures the need for diversity of actors and expertise, directs the attention to synergies and connections, and puts the focus on the capacity to produce good outcomes over time. We argue that such a holistic perspective is urgently needed if the EU wants to fulfil its ambitions regarding trustworthy AI. The report aims to draw attention to the role of government actors and foundations in strengthening the AI & Society Ecosystem.

The report identifies ten core functions, or areas of expertise, that an AI & Society Ecosystem needs to be able to perform in order to contribute meaningfully to the policy debate: Policy, technology, investigation, and watchdog expertise; expertise in strategic litigation and in building public interest use cases of AI; campaign and outreach expertise, and research expertise; expertise in promoting AI literacy and education; and sector-specific expertise. In a fully flourishing ecosystem these functions need to be connected in order to complement one another and benefit from each other.

The core ingredients needed for a strong AI & Society Ecosystem already exist: Europe can build on a strong tradition of civil society expertise and advocacy, and has a diverse field of digital rights organizations that are building AI expertise. It has strong public research institutions and academia, and a diverse media system that can engage a wider public in a debate around AI. Furthermore, policy-makers have started to acknowledge the role of civil society in the development of AI, and we see new funding opportunities from foundations and governments that prioritize the intersection of AI and society.

There are also clear weaknesses and challenges that the Ecosystem has to overcome: Many organizations lack the resources to build the necessary capacity, and there is little access to independent funding. Fragmentation across Europe lowers the visibility and impact of individual actors. We see a lack of coordination between civil society organizations weakening the AI & Society Ecosystem as a whole. In policy-making there is a lack of real multi-stakeholder engagement, and civil society actors often do not have sufficient access to the relevant processes. Furthermore, the lack of transparency on where and how AI systems are being used puts an additional burden on civil society actors engaging in independent research, policy and advocacy work.

Governments and foundations play a key role in the development of a strong and impactful AI & Society Ecosystem in Europe. They not only provide important sources of funding on which AI & Society organizations depend; they are also themselves important actors within that ecosystem, and hence have other types of non-monetary support to offer. Policy-makers can, for example, lower barriers to participation and engagement for civil society. They can also create new resources for civil society, e.g. by encouraging NGOs to participate in government-funded research or by designing grants especially with small organizations in mind. Foundations shape the ecosystem through broader support, including training and professional development. Furthermore, foundations are in a position to act as conveners and to build the bridges between different actors that a healthy ecosystem needs. They are also needed to fill funding gaps for functions within the ecosystem, especially where government funding is hard or impossible to obtain. Overall, in order to strengthen the ecosystem, two approaches come into focus: managing relationships and managing resources.

Cost-benefit analysis, Data-Driven Infrastructure edition


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:04.

A common approach to making (business, policy…) decisions is to perform a cost-benefit analysis of some sort. Sometimes this is done via a rigorous process, sometimes it’s ballparked — and depending on the context, that’s OK.

One thing is pretty constant: In a cost-benefit analysis you traditionally work on the basis of reasonably expected costs and reasonably expected benefits. If the benefits outweigh the costs, green light.

Now, I’d argue that for data-driven infrastructure(-ish) projects, we need to set a higher bar.

By data-driven infrastructure I mean infrastructure(ish) things like digital platforms, smart city projects, etc. that collect data, process data, feed into or serve as AI or algorithmic decision-making (ADM) systems, and so on. This increasingly overlaps with what traditionally falls under the umbrella of critical infrastructure, but extends well beyond it.

For this type of data-driven infrastructure (DDI), we need a different balance. Or, maybe even better, we need a more thorough understanding of what can be reasonably expected.

I argue that for DDI, guaranteed improvement must outweigh the worst case scenario risks.

If the last decade has shown us anything, it’s that data-driven infrastructure will be abused to its full potential.

From criminals to commercial and governmental actors, legitimate and rogue alike: if there is valuable data, there will be strong interest in that honey pot. Hence, we need to assume that at least some of those actors will get access to it. Whatever could happen when they do — and that differs dramatically depending on which types, or which combination of types, of actors it is — is what we have to factor in. So are the opportunity costs, the expertise drain, and the new dependencies that come with vendor lock-in.

All of this — that level of failure — should be the new “reasonable” expectation on the cost side.

But in order to make semantic capture of the term “reasonable” a little bit harder, I’m proposing to be very explicit about what we mean by this:

So instead of "Let's compare what happens if things go kinda-sorta OK on the benefit side and only go kinda-sorta wrong on the cost side", let's say "the absolutely guaranteed improvements on the benefit side must significantly outweigh the worst-case failure modes on the cost side."

For DDI, let’s work with aggressive-pessimistic scenarios for the costs/risk side, and conservative scenarios for the benefit side. The more critical the infrastructure, the more thorough we need to be.
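To make that concrete, here is a minimal sketch of the proposed decision rule in Python. The numbers and the margin parameter (standing in for "significantly outweigh") are purely hypothetical placeholders, not figures from this post:

```python
# Minimal sketch of the proposed DDI decision rule. All values and the
# "margin" parameter are hypothetical placeholders, not figures from the post.

from dataclasses import dataclass

@dataclass
class Scenario:
    benefits_conservative: float  # improvements we can essentially guarantee
    costs_worst_case: float       # aggressive-pessimistic failure modes: abuse of the
                                  # data, vendor lock-in, opportunity cost, expertise drain

def green_light(s: Scenario, margin: float = 2.0) -> bool:
    """Approve only if guaranteed benefits significantly outweigh worst-case costs.

    How big "significantly" should be is left open in the post; the margin factor
    here is just one way to make that explicit.
    """
    return s.benefits_conservative >= margin * s.costs_worst_case

# A project with modest guaranteed gains but severe worst-case failure modes
# would not pass, even though a traditional cost-benefit analysis might approve it.
print(green_light(Scenario(benefits_conservative=10.0, costs_worst_case=8.0)))  # False
```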

That should make for a much more interesting debate, and certainly for more insightful scenario planning.

Nuclear disarmament but for surveillance tech?


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:03.

Surely, the name Clearview AI has crossed your radar these last couple of weeks since the NYTimes published this profile. In a nutshell, Clearview is a small company that has built a facial recognition system seeded with a ton of faces scraped from the big social media sites, and that grows from there through user uploads. Take a photo of a person and see all their matches across photos, social media sites, etc.

It’s clearly a surveillance machine, and just as clearly doesn’t bother with things like consent.

Maybe unsurprisingly, the main customer base is law enforcement, even though Clearview is entirely proprietary (i.e. no oversight of how it works), unstudied (there’s been no meaningful research to examine essential things like how many false positives the service generates) and quite possibly illegal (no consent from most people in that database).

The thing is: It’s now pretty simple to build something like this. So we’ll see many more just like it, unless we do something about it.

In other areas, we as a society recognized a disproportionate risk and built regulatory and enforcement mechanisms to prevent or at least manage those risks. Sometimes this works better than other times, but this is how we got to nuclear disarmament, for example. Not a perfect system for sure, but it's been kinda-sorta working. And kinda-sorta is sometimes really all you can hope for. At least we haven't seen any rogue nuclear warheads explode in all the years since 1945 — so that's a success.

Now, facial recognition isn't as immediately deadly as a nuclear bomb. But it's very, very bad at scale. The potential for abuse is consistently so big that we might as well ban it outright. And while the immediate danger is lower than that of a nuclear bomb, the barrier to entry is that much lower, too: The tooling and the knowledge are there, the training data is there, and it's near-trivial to launch this type of product now. So we have to expect this to be a constant, pretty bad threat for the rest of our lives.

So what mechanisms do we have to mitigate those risks? I argue we need an outright ban, also and especially in security contexts. GDPR and its mechanisms for requiring consent point in the right direction. The EU's AI strategy (as per the recently leaked documents) considers such a ban, but with a (silly, really) exception for use in security contexts.

So let's look at facial recognition as a category and consider its real implications: Otherwise, we'll be playing whack-a-mole for decades.

Trust Indicators for Emerging Technologies


For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.

Over the last few months, it has become clear that there's demand for similar trust indicators beyond consumer products: for smart cities, artificial intelligence, and other areas of emerging technology.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology—infrastructure!—is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there's real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in the consumer IoT space, mostly for the affluent. (Meaning that if we're talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture with vastly different threat models and potential damage. Infrastructure that not everybody can rely on—equally, and all the time—would not just be annoying, it might be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs may differ.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work, even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience.
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But **if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.**
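As a purely illustrative sketch: the indicator names below come from the list above, but the questions and the scoring are invented for illustration and are not the Trustable Technology Mark's actual assessment mechanism. It shows one way such a checklist could be represented:

```python
# Illustrative only: a toy representation of the five trust indicators for
# publicly deployed infrastructure. Questions and scoring are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    questions: list[str]
    answers: dict[str, bool] = field(default_factory=dict)

    def score(self) -> float:
        # Share of questions answered positively. A real assessment would have
        # to weigh conflicts and tradeoffs rather than reduce them to a number.
        return sum(self.answers.values()) / len(self.questions) if self.questions else 0.0

indicators = [
    Indicator("Privacy & Data Practices", ["Is data collection minimized and protected?"]),
    Indicator("Transparency & Accountability", ["Is it clear who operates, maintains and answers for the system?"]),
    Indicator("Security", ["Are there safeguards against leaks and tampering?"]),
    Indicator("Resilience", ["Does the system keep working even if parts of it fail?"]),
    Indicator("Openness", ["Is it built on open source, open data and open access?"]),
]

for ind in indicators:
    ind.answers = {q: True for q in ind.questions}  # placeholder answers
    print(f"{ind.name}: {ind.score():.0%}")
```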

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

Monthnotes for March 2018


Before we’re headed into the long Easter Holiday weekend, a quick rundown of what happened in March.

Mozilla Fellowship & an open trustmark for IoT

I'm happy to share that I've joined the Mozilla Fellows program (concretely, the IoT fellows group, where I'll be working with Jon Rogers and Julia Kloiber), and that Mozilla supports the development of an open trustmark for IoT under the ThingsCon umbrella.

There’s no doubt going to be a more formal announcement soon, but here’s the shortest of blog posts over on ThingsCon.

(As always, a full disclosure: My partner works for Mozilla.)

I had already shared first thoughts on the IoT trustmark. We’ll have a lot more to share on the development of the trustmark now that it’s becoming more official. You can follow along here and over on the ThingsCon blog.

By the way, this needs a catchy name. Hit me up if you have one in mind we could use!

Zephyr interviews: The Craftsman, Deutsche Welle

We were humbled and delighted that Gianfranco Chicco covered Zephyr Berlin in the recent issue of his most excellent newsletter, The Craftsman. Links and some background here.

We also had an interview with Deutsche Welle. We’ll share it once it’s available online.

It's great that this little passion project of ours is getting this attention, and we're truly humbled by the super high-quality feedback and engagement from our customers. What a lovely crowd!

Learning about Machine Learning

I've started Andrew Ng's Stanford Machine Learning course on Coursera. Due to time constraints it's slow going for me, and as expected it's a bit math-heavy for my personal taste. But even if you don't necessarily aim to implement any machine learning or code to that effect, there's a lot to take away. Two thumbs up.

Notes from a couple of events on responsible tech

Aspen Institute: I was kindly invited to an event by Aspen Institute Germany about the impact of AI on society and humanity. One panel stood out to me: It was about AI in the context of autonomous weapons systems. I was positively surprised to hear that

  1. All panelists agreed that if autonomous weapons systems are used at all, it should only be with humans in the loop.
  2. There haven’t been significant cases of rogue actors deploying autonomous weapons, which strikes me as good to hear but also very surprising.
  3. A researcher from the Bundeswehr University Munich noted that introducing autonomous systems introduces instability, pointing out the possibility of flash wars triggered by fully autonomous systems interacting with one another (like flash crashes in stock markets).
  4. In the backend of military logistics, machine learning appears to already be a big deal.

Digital Asia Hub & HiiG: Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (abbreviated as HiiG, after the German original). It was part of a fact-finding trip to various regions and tech ecosystems to figure out which items are most important from a regulatory and policy perspective, and to feed the findings from these workshops into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn't, so that's cool.

Both are also covered in my newsletter, so I won’t just replicate everything here. You can dig into the archives from the last few weeks.

Thinking & writing

Season 3 of my somewhat more irreverent newsletter, Connection Problem, is coming up on its 20th issue. You can sign up here to see where my head is these days.

If you'd like to work with me in the upcoming months, I have very limited availability but am happy to have a chat.

That’s it for today. Have a great Easter weekend and an excellent April!

The key challenge for the industry in the next 5 years is consumer trust


Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Here are some questions I've been pondering, and that we've been exploring in conversations with our peer group day in, day out.

This isn't an exhaustive list, of course, but gives you a hint about my headspace—experience shows that this can serve as a solid early warning system for industry-wide debates, too. Questions we've had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust. Between all the insecurities and data leaks and bricked devices and "sunsetted" services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I'm 100% convinced that consumer trust—and products' trustworthiness—is the key to success for the next 5 years of IoT. (We've been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka "ad tech") has officially crossed over to the dark side and is increasingly perceived as a no-go?

These are all interesting, deep topics to dig into. They're all closely interrelated, too, and have implications for business, strategy, research, and policy. We'll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • What are new emerging technologies? Where are exciting new opportunities?
  • What will happen due to more ubiquitous autonomous vehicles, solar power, crypto currencies? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and where I explore areas around tech, society, culture & business that I find relevant. If you want to watch my thinking unfold and mature, this one is for you. You can subscribe here.

Focus areas over time


The end of the year is a good time to look back and take stock, and one of the things I’ve been looking at especially is how the focus of my work has been shifting over the years.

I’ve been using the term emerging technologies to describe where my interests and expertise are, because it describes clearly that the concrete focus is (by definition!) constantly evolving. Frequently, the patterns become obvious only in hindsight. Here’s how I would describe the areas I focused on primarily over the last decade or so:

Focus areas over time (Image: The Waving Cat)

Now this isn’t a super accurate depiction, but it gives a solid idea. I expect the Internet of Things to remain a priority for the coming years, but it’s also obvious that algorithmic decision-making and its impact (labeled here as artificial intelligence) is gaining importance, and quickly. The lines are blurry to begin with.

It's worth noting that these timelines aren't absolutes, either: I've done work around the implications of social media later than that, and work on algorithms and data long before. These labels indicate priorities and focus more than anything.

So anyway, I hope this is helpful for understanding my work. As always, if you'd like to bounce ideas feel free to ping me.