
New Report: “Towards a European AI & Society Ecosystem”


I’m happy to share that a new report I had the joy and privilege to co-author with Leonie Beining and Stefan Heumann (both of Stiftung Neue Verantwortung) just came out. It’s titled:

“Towards a European AI & Society Ecosystem”

I’m including the executive summary below; you can find the full report here. The report is co-produced by Stiftung Neue Verantwortung and ThingsCon.

Here’s the executive summary:

Artificial Intelligence (AI) has emerged as a key technology that has gripped the attention of governments around the globe. The European Commission has made AI leadership a top priority. While seeking to strengthen research and commercial deployment of AI, Europe has also embraced the role of a global regulator of technology, and is currently the only region where a regulatory agenda on AI rooted in democratic values – as opposed to purely market or strategic terms – can be credibly formulated. And given the size of the EU’s internal market, this can be done with a reasonable potential for global impact. However, there is a gap between Europe’s lofty ambitions and its actual institutional capacity for research, analysis and policy development to define and shape the European way on AI guided by societal values and the public interest. Currently, the debate is mostly driven by industry, where most resources and capacity for technical research are located. European civil society organizations that study and address the social, political and ethical challenges of AI are not sufficiently consulted and struggle to have an impact on the policy debate. Thus, the EU’s regulatory ambition faces a serious problem: If Europe puts societal interests and values at the center of its approach towards AI, it requires robust engagement and relationships between governments and many diverse actors from civil society. Otherwise any claims regarding human-centric and trustworthy AI would come to nothing.

Therefore, EU policy-making capacity must be supported by a broader ecosystem of stakeholders and experts especially from civil society. This AI & Society Ecosystem, a subset of a broader AI Ecosystem that also includes industry actors, is essential in informing policy-making on AI, as well as holding the government to its self-proclaimed standard of promoting AI in the interest of society at large. We propose the ecosystem perspective, originating from biology and already applied in management and innovation studies (also with regard to AI). It captures the need for diversity of actors and expertise, directs the attention to synergies and connections, and puts the focus on the capacity to produce good outcomes over time. We argue that such a holistic perspective is urgently needed if the EU wants to fulfil its ambitions regarding trustworthy AI. The report aims to draw attention to the role of government actors and foundations in strengthening the AI & Society Ecosystem.

The report identifies ten core functions that an AI & Society Ecosystem needs to be able to perform – ten areas of expertise where the ecosystem can contribute meaningfully to the policy debate: policy, technology, investigation, and watchdog expertise; expertise in strategic litigation and in building public interest use cases of AI; campaign and outreach expertise, and research expertise; expertise in promoting AI literacy and education; and sector-specific expertise. In a fully flourishing ecosystem, these functions need to be connected so that they can complement and benefit from each other.

The core ingredients needed for a strong AI & Society Ecosystem already exist: Europe can build on strengths like a strong tradition of civil society expertise and advocacy, and has a diverse field of digital rights organizations that are building AI expertise. It has strong public research institutions and academia, and a diverse media system that can engage a wider public in a debate around AI. Furthermore, policy-makers have started to acknowledge the role of civil society for the development of AI, and we see new funding opportunities from foundations and governments that prioritize the intersection of AI and society.

There are also clear weaknesses and challenges that the Ecosystem has to overcome: Many organizations lack the resources to build the necessary capacity, and there is little access to independent funding. Fragmentation across Europe lowers the visibility and impact of individual actors. We see a lack of coordination between civil society organizations weakening the AI & Society Ecosystem as a whole. In policy-making there is a lack of real multi-stakeholder engagement, and civil society actors often do not have sufficient access to the relevant processes. Furthermore, the lack of transparency on where and how AI systems are being used puts an additional burden on civil society actors engaging in independent research, policy and advocacy work.

Governments and foundations play a key role in the development of a strong and impactful AI & Society Ecosystem in Europe. They not only provide important sources of funding on which AI & Society organizations depend; they are also themselves important actors within that ecosystem, and hence have other types of non-monetary support to offer. Policy-makers can, for example, lower barriers to participation and engagement for civil society. They can also create new resources for civil society, e.g. by encouraging NGOs to participate in government-funded research or by designing grants especially with small organizations in mind. Foundations shape the ecosystem through broader support, including aspects such as providing training and professional development. Furthermore, foundations are in a position to act as conveners and to build the bridges between different actors that a healthy ecosystem needs. They are also needed to fill funding gaps for functions within the ecosystem, especially where government funding is hard or impossible to obtain. Overall, in order to strengthen the ecosystem, two approaches come into focus: managing relationships and managing resources.

New Report: “Smart Cities: A Key to a Progressive Europe”


I’m happy to share that a report is out today that I had the honor and pleasure to co-author. It’s published jointly by the Foundation for European Progressive Studies (FEPS) and the Cooperation Committee of the Nordic Labour Movement (SAMAK).

The report is called “A Progressive Approach to Digital Tech — Taking Charge of Europe’s Digital Future.”

In FEPS’s words:

This report tries to answer the question how progressives should look at digital technology, at a time when it permeates every aspect of our lives, societies and democracies. (…)
The main message: Europe can achieve a digital transition that is both just and sustainable, but this requires a positive vision and collective action.

At its heart, it’s an attempt to outline a progressive digital agenda for Europe. Not a defensive one, but one that outlines a constructive, desirable approach.

My focus was on smart cities and what a progressive smart city policy could look like. My contribution specifically comes in the form of a stand-alone attachment titled:

“Smart Cities: A Key to a Progressive Europe”

I’d love to hear what you think. For now, enjoy the report!

Europe’s AI Strategy: Give your input today


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:12.

The European Commission has just published a White Paper outlining their AI strategy.

Politico has what looks like a solid summary, but I’m still slowly working my way through the documents, so I can’t vouch for it. If you understand German, Netzpolitik has a good one, too.

You can find the overview documents and — just as importantly, if not more so — the so-called consultation here. This is where you can give input and feedback. Please do so!

So far, this White Paper is just a basis for discussion. The real lobbying starts now that it’s published. So any counterweight to industry lobbying is probably healthy. (In fact, in a report out in a few days, we argue that strong civil society involvement in the AI space is key to a desirable future for AI in Europe; check back on this blog for updates on that paper later this week.)

Since EU surveys are notoriously unappealing, not enough people and organizations give their input here — and hence they yield the floor to others who might not have the best intentions, or who simply have more time.

This particular consultation is pretty straightforward, though: you can reply via a mix of multiple choice and free-text input, and/or upload a PDF with your organization’s positions. So really, please take the time.

You’ll need a Europa account to do so, and if you don’t have one, set one up; it’ll come in handy in other places, too. (For example, if you ever apply for EU grants.)

Please note that, for whatever reason, the consultation survey does not pop up when you click “Consultations” in the navigation, but only when you click “Read full text” on the left:

It’s confusing as hell, so here’s a screenshot to highlight what’s going on. I don’t know what CMS the European Commission uses but it seems like it produces some interesting artifacts.

I’ll try to post some thoughts on the strategy itself soon, too. But in the meantime, go have a look.

Introducing the Berlin Institute for Smart Cities and Civil Rights


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:11.


After working in this space for years, I’m convinced that smart cities are a key battleground in the fight for civil rights in the 21st century. I don’t say this lightly, either: I truly believe that smart cities — cities infused with data-driven infrastructure — are focal points where a range of technologies and issues manifest very concretely.

Why we need the Berlin Institute
Cities around the globe are exploring smart city initiatives in order to deliver better services to their citizens, and to manage resources as efficiently as possible. However, city governments operate within a network of competing pressures and pain points. The Berlin Institute aims to help relieve those pressures and pain points by providing tools and expertise that put citizens and their rights front and center.

Together with my collaborator Markus Löning, the former German human rights commissioner, and his extremely capable team, I am setting out to realize the positive potential of smart cities while avoiding potential harms — by putting civil rights first.

With our combined expertise — Markus around human rights, mine around smart cities — we hope that we can make a valuable contribution to the smart city debate.

So today, as a soft launch, I’m happy to point towards our new website for the Berlin Institute for Smart Cities and Civil Rights (BISC). Lots there is still going to change as we keep refining. In the meantime, I’d love to learn more about how we can best help you and your city navigate this space.

Nuclear disarmament but for surveillance tech?


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:03.

Surely the name Clearview AI has crossed your radar these last couple of weeks, since the NYTimes published this profile. In a nutshell, Clearview is a small company that has built a facial recognition system seeded with a ton of faces scraped from the big social media sites, growing from there through user uploads. Take a photo of a person, and see all their matches across photos, social media sites, etc.

It’s clearly a surveillance machine, and just as clearly doesn’t bother with things like consent.

Maybe unsurprisingly, the main customer base is law enforcement, even though Clearview is entirely proprietary (i.e. no oversight of how it works), unstudied (there’s been no meaningful research to examine essential things like how many false positives the service generates) and quite possibly illegal (no consent from most people in that database).

The thing is: It’s now pretty simple to build something like this. So we’ll see many more just like it, unless we do something about it.

In other areas, we as a society have recognized disproportionate risks and built regulatory and enforcement mechanisms to prevent, or at least manage, them. Sometimes this works better than others, but this is how we got to nuclear disarmament, for example. Not a perfect system, for sure, but it’s been kinda-sorta working. And kinda-sorta is sometimes really all you can hope for. At least we haven’t seen any rogue nuclear warheads explode in all the years since 1945 — so that’s a success.

Now, facial recognition isn’t as immediately deadly as a nuclear bomb. But it’s very, very bad at scale. The potential for abuse is so consistently big that we might as well ban it outright. And while the immediate danger is lower than that of a nuclear bomb, the barrier to entry is that much lower, too: The tooling and the knowledge are there, the training data is there, and it’s near-trivial to launch this type of product now. So we have to expect this to be a constant, pretty bad threat for the rest of our lives.

So what mechanisms do we have to mitigate those risks? I argue we need an outright ban, also and especially in security contexts. GDPR and its mechanisms for requiring consent point in the right direction. The EU’s AI strategy (as per the recently leaked documents) considers such a ban, but with a (silly, really) exception for use in security contexts.

So let’s look at facial recognition as a category and consider its real implications: otherwise, we’ll be playing whack-a-mole for decades.

Just enough City


In this piece, I’m advocating for a Smart City model based on restraint, and focused first and foremost on citizen needs and rights.

A little while ago, the ever-brilliant and eloquent Rachel Coldicutt wrote a piece on the role of public service internet, and why it should be a model of restraint. It’s titled Just enough Internet, and it resonated deeply with me. It was her article that inspired not just this piece’s title but also its theme: Thank you, Rachel!

Rachel argues that public service internet (broadcasters, government services) shouldn’t compete with commercial players on commercial metrics, but rather use approaches better suited to their mandate: not engagement and more data, but providing the important basics while collecting as little as possible. (This summary doesn’t do Rachel’s text justice; she makes more, and more nuanced, points there, so please read her piece. It’s time well spent.)

I’ll argue that Smart Cities, too, should use an approach better suited to their mandate—an approach based on (data) restraint, and on citizens’ needs & rights.

This restraint and reframing is important because it prevents mission creep; it also reduces the carbon footprint of all those services.

Enter the Smart City

Wherever we look on the globe, we see so-called Smart City projects popping up. Some are incremental, adding just some sensors. Others are blank slate, building whole connected cities or neighborhoods from scratch. What they have in common is that they are mostly built around a logic of data-driven management and optimization. You can’t manage what you can’t measure, management consultant Peter Drucker famously said, and so Smart Cities tend to measure… everything. Or so they try.

Of course, sensors only measure so many things, like physical movement (of people, or goods, or vehicles) through space, or the consumption and creation of energy. But thriving urban life is made up of many more things, and many of those cannot be measured as easily: Try measuring opportunity or intention or quality of life, and most Smart City management dashboards will throw an error: File not found.

The narrative of the Smart City is fundamentally that of optimizing a machine to run as efficiently as possible. It’s neoliberal market thinking in its purest form. (Greenfield and Townsend and Morozov and many other Smart City critics have made those points much more eloquently before.) But that doesn’t reflect urban life. The human side of it is missing, a glaring hole right in the center of that particular vision.

Instead of putting citizens in that spot in the center, the “traditional” Smart City model aims to deliver better (meaning: more efficient, lower-cost) services to citizens by collecting, collating and analyzing data. It’s the logic of global supply chains and predictive maintenance and telecommunications networks and data analytics applied to the public space. (It’s no coincidence that the large tech vendors in that space come from one of those backgrounds.)

The city, however, is no machine to be run at maximum efficiency. It’s a messy agora, with competing and often conflicting interests, and it needs slack in the system: Slack and friction both increase resilience in the face of larger challenges, as do empowered citizens and municipal administrations. The last thing any city needs is to be fully algorithmically managed at maximum efficiency, only to come to a grinding halt when — not if! — the first technical glitch happens, or some company ceases operations.

Most importantly, I’m convinced that depending on context, collecting data in public space can be a fundamental risk to a free society—and that it’s made even worse if the data collection regime is outside of the public’s control.

The option of anonymity plays a crucial role for the freedom of assembly, of organizing, of expressing thoughts and political speech. If sensitive data is collected in public space (even if it’s not necessarily personally identifiable information!) then the trust in the collecting entity needs to be absolute. But as we know from political science, the good king is just another straw man: given the circumstances, even the best government can turn bad quickly. History has taught us the crucial importance of checks & balances, and of data avoidance.

We need a Smart City model of restraint

Discussing publicly owned media, Rachel argues:

It’s time to renegotiate the tacit agreement between the people, the market and the state to take account of the ways that data and technology have nudged behaviours and norms and changed the underlying infrastructure of everyday life.

This holds true for the (Smart) City, too: The tacit agreement between the people, the market and the state is, roughly stated, that the government provides essential services to its citizens, often with the help of the market, and with the citizens’ interest at the core. However, when we see technology companies lobby governments to green-light data-collecting pilot projects in public space with little accountability, that tacit agreement is violated. Not the citizens’ interests but those multinationals’ business models move into the center of these considerations.

There is no opt-out in public space. So when obtaining meaningful consent to the collection of data about citizens is hard or impossible, that data must not be collected, period. Surveillance in public space is more often detrimental to free societies than not. You know this! We all know this!

Less data collected, and more options of anonymity in public space, make for a more resilient public sphere. And what data is collected should be made available to the public at little or no cost, and to commercial interests only within a framework of ethical use (and probably for a fee).

What are better metrics for living in a (Smart) City?

In order to get to better Smart Cities, we need to think about better, more complete metrics than efficiency & cost savings, and we need to determine those (and all other big decisions about public space) through a strong commitment to participation: From external experts to citizen panels to digital participation platforms, there are many tools at our disposal to make better, more democratically legitimized decisions.

In that sense I cannot offer a final set of metrics to use. However, I can offer some potential starting points for a debate. I believe that every Smart City project should be evaluated against the following aspects:

  • Would this substantially improve sustainability as laid out in the UN’s Sustainable Development Goals (SDG) framework?
  • Is meaningful participation built into the process at every step from framing to early feedback to planning to governance? Are the implications clear, and laid out in an accessible, non-jargony way?
  • Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
  • Will it solve a real issue and improve the life of citizens? If in doubt, cut it out.
  • Will participation, accountability, resilience, trust and security (P.A.R.T.S.) all improve through this project?

Obviously those can only be starting points.

The point I’m making is this: In the Smart City, less is more.

City administrations should optimize for thriving urban life and democracy; for citizens and digital rights — which also happen to be human rights; for resilience and opportunity rather than efficiency. That way we can create a canvas to be painted by citizens, administration and — yes! — the market, too.

We can only manage what we can measure? Not necessarily. Neither the population nor the urban organism needs to be managed; they just need a robust framework to thrive within. We don’t always need real-time data for every decision — we can also make good decisions based on values and trust in democratic processes, and by giving a voice to all impacted communities. We have a vast body of knowledge from decades of research in urban planning, sociology and many other areas: Often enough we already know the best decisions, and it’s only politics that keeps us from enacting them.

We can change that, and build the best public space we know to build. Our cities will be better off for it.

About the author

Just for completeness’ sake so you can see where I’m coming from, I’m basing this on years of working at least occasionally on Smart City projects. My thinking is informed by work around emerging tech and its impact on society, and a strong focus on responsible technology that puts people first. Among other things I’ve co-founded ThingsCon, a non-profit community that promotes responsible tech, and led the development of the Trustable Technology Mark. I was a Mozilla Fellow in 2018-19 and am an Edgeryders Fellow in 2019-20. You can find my bio here.