New Series: Corona Crisis — Lessons for the Future of Cities

I’m excited to pull back the curtain on a brand new project that I’m doing with Körber Stiftung, specifically with their democracy program.

We’ve started producing a series of video conversations called Corona Crisis — Lessons for the Future of Cities, which is about… you guessed it: How different cities respond to the coronavirus crisis.

We’ll talk to local leaders about their specific, local challenges, opportunities and strategies: While some strategies are universal, like washing hands and keeping a safe distance, others are more tailored to the local context. And those strategies are what we want to learn about, so others can learn from them, too.

And while overall we’re pretty broad with our interest in this, there are two focus areas that we’ll be emphasizing: The use of digital tools (and the trade-offs that inform the decisions around them) as well as how engagement with civil society works.

While every city and administration has to find their own way, I’m convinced there are lessons to be learned from others. Over time, a picture should emerge: Patterns of approaches that seem more promising than others, best practices, and maybe some surprising insights.

I expect we’ll start sharing the videos within a week or two. When we do, I’ll update this post to include links.

I hope you’ll enjoy the show. Let me know what you think!

News: Teaching at Darmstadt once more

Just the quickest of updates: I’m super happy that I’ll be back at the University of Applied Sciences Darmstadt to teach another class there, a block seminar on tech ethics and trustable technology.

At Prof. Andrea Krajewski’s kind invitation, I’d done a similar thing last year and am very much looking forward to doing it again — and this year, we get to experiment with the format, so it’s going to be video-based. Should be good!

New Report: “Towards a European AI & Society Ecosystem”

I’m happy to share that a new report I had the joy and privilege to co-author with Leonie Beining and Stefan Heumann (both of Stiftung Neue Verantwortung) just came out. It’s titled:

“Towards a European AI & Society Ecosystem”

I’m including the executive summary below; you can find the full report here. The report is co-produced by Stiftung Neue Verantwortung and ThingsCon.

Here’s the executive summary:

Artificial Intelligence (AI) has emerged as a key technology that has gripped the attention of governments around the globe. The European Commission has made AI leadership a top priority. While seeking to strengthen research and commercial deployment of AI, Europe has also embraced the role of a global regulator of technology, and is currently the only region where a regulatory agenda on AI rooted in democratic values – as opposed to purely market or strategic terms – can be credibly formulated. And given the size of the EU’s internal market, this can be done with a reasonable potential for global impact. However, there is a gap between Europe’s lofty ambitions and its actual institutional capacity for research, analysis and policy development to define and shape the European way on AI guided by societal values and the public interest. Currently, the debate is mostly driven by industry, where most resources and capacity for technical research are located. European civil society organizations that study and address the social, political and ethical challenges of AI are not sufficiently consulted and struggle to have an impact on the policy debate. Thus, the EU’s regulatory ambition faces a serious problem: If Europe puts societal interests and values at the center of its approach towards AI, it requires robust engagement and relationships between governments and many diverse actors from civil society. Otherwise, any claims regarding human-centric and trustworthy AI would come to nothing.

Therefore, EU policy-making capacity must be supported by a broader ecosystem of stakeholders and experts especially from civil society. This AI & Society Ecosystem, a subset of a broader AI Ecosystem that also includes industry actors, is essential in informing policy-making on AI, as well as holding the government to its self-proclaimed standard of promoting AI in the interest of society at large. We propose the ecosystem perspective, originating from biology and already applied in management and innovation studies (also with regard to AI). It captures the need for diversity of actors and expertise, directs the attention to synergies and connections, and puts the focus on the capacity to produce good outcomes over time. We argue that such a holistic perspective is urgently needed if the EU wants to fulfil its ambitions regarding trustworthy AI. The report aims to draw attention to the role of government actors and foundations in strengthening the AI & Society Ecosystem.

The report identifies ten core functions, or areas of expertise, that an AI & Society Ecosystem needs to be able to perform in order to contribute meaningfully to the policy debate: Policy, technology, investigation, and watchdog expertise; Expertise in strategic litigation, and in building public interest use cases of AI; Campaign and outreach, and research expertise; Expertise in promoting AI literacy and education; and sector-specific expertise. In a fully flourishing ecosystem, these functions need to be connected so that they complement and benefit from each other.

The core ingredients needed for a strong AI & Society Ecosystem already exist: Europe can build on a strong tradition of civil society expertise and advocacy, and on a diverse field of digital rights organizations that are building AI expertise. It has strong public research institutions and academia, and a diverse media system that can engage a wider public in a debate around AI. Furthermore, policy-makers have started to acknowledge the role of civil society for the development of AI, and we see new funding opportunities from foundations and governments that prioritize the intersection of AI and society.

There are also clear weaknesses and challenges that the ecosystem has to overcome: Many organizations lack the resources to build the necessary capacity, and there is little access to independent funding. Fragmentation across Europe lowers the visibility and impact of individual actors. A lack of coordination between civil society organizations weakens the AI & Society Ecosystem as a whole. In policy-making there is a lack of real multi-stakeholder engagement, and civil society actors often do not have sufficient access to the relevant processes. Furthermore, the lack of transparency on where and how AI systems are being used puts an additional burden on civil society actors engaging in independent research, policy and advocacy work.

Governments and foundations play a key role in the development of a strong and impactful AI & Society Ecosystem in Europe. Not only do they provide important sources of funding on which AI & Society organizations depend; they are also themselves important actors within that ecosystem, and hence have other types of non-monetary support to offer. Policy-makers can, for example, lower barriers to participation and engagement for civil society. They can also create new resources for civil society, e.g. by encouraging NGOs to participate in government-funded research or by designing grants especially with small organizations in mind. Foundations shape the ecosystem through broader support, including aspects such as providing training and professional development. Furthermore, foundations are in a position to act as conveners and to build bridges between the different actors that a healthy ecosystem needs. They are also needed to fill funding gaps for functions within the ecosystem, especially where government funding is hard or impossible to obtain. Overall, in order to strengthen the ecosystem, two approaches come into focus: managing relationships and managing resources.

Introducing the Berlin Institute for Smart Cities and Civil Rights

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:11.

[Image: Berlin Institute header over a backdrop of the Berlin skyline, with the TV tower front and center]

After working in this space for years, I’m convinced that smart cities are a key battleground in the fight for civil rights in the 21st century. I don’t say this lightly, either: I truly believe that smart cities — cities infused with data-driven infrastructure — are focal points where a range of technologies and issues manifest very concretely.

Why we need the Berlin Institute
Cities around the globe are exploring smart city initiatives in order to deliver better services to their citizens and to manage resources more efficiently. However, city governments operate within a network of competing pressures and pain points. The Berlin Institute aims to help relieve those pressures by providing tools and expertise that put citizens and their rights front and center.

Together with my collaborator, former German human rights commissioner Markus Löning, and his extremely capable team, I’m setting out to realize the positive potential of smart cities while avoiding potential harms — by putting civil rights first.

With our combined expertise — Markus around human rights, mine around smart cities — we hope that we can make a valuable contribution to the smart city debate.

So today, as a soft launch, I’m happy to point towards our new website for the Berlin Institute for Smart Cities and Civil Rights (BISC). A lot there is still going to change as we keep refining. In the meantime, I’d love to learn more about how we can best help you and your city navigate this space.

Slashing extracurriculars

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:09.

For years now — really, throughout my whole career — I’ve always had a whole range of side projects: Sometimes prototypes or experimental explorations, sometimes something akin to professional hobbies.

It’s fair to say that, to a degree, I’ve kind of built my career from or around those extracurricular activities. The things I learned through these, the people I’ve had the opportunity to meet — this has been priceless. And mostly a joy, too!

Over the years, many things fell into this category. Some stayed, others emerged and evolved into more serious projects. To name just a few that come to mind:

  • Events, like Atoms&Bits Festival, Cognitive Cities Conference, TEDxKreuzberg, UIKonf, Ignite Berlin, and ThingsCon.
  • Written things, like The Indie Conference Organizer Handbook (with Max Krüger), Understanding the Connected Home (with Michelle Thorne), and View Source: Shenzhen.
  • Research-y things, like The Good Home.
  • Product-ish things, like Zephyr Berlin pants, The Alpine Review, and the Dearsouvenir magazine.

Some of these, I did by myself; others I got involved in after the fact. But really, most were true collaborations that I co-founded, and those are what I tend to enjoy the most. Over the years, I’ve also been on juries and in a number of unpaid advisory situations.

Together, that’s about a decade’s worth of extracurriculars: Yes, that was my version of the infamous 20% projects, though in reality it was probably a much bigger chunk of my time.

I’ve enjoyed every minute of those, and benefitted greatly from most. Some of them are still thriving, with or without my involvement, which makes me very happy indeed.

But every now and then, it’s time for a culling, because a day’s only so long. And especially now, with a kid who takes up all remaining slack and whom I’d really rather spend time with than yet another Slack channel, that time has come. (However rewarding those Slack channels and side projects might be, they don’t hold a candle to this kid of ours, obviously.)

So it’s with a tear and a smile that I’ll be stepping away from a few more extracurriculars. I’ll be stepping back from Ignite Berlin, our local chapter of Ignite lightning talks (the project had been sitting idle for a while anyway). I’ll be stepping down from my jury duties. Probably more has to go, but what and how exactly, I’m still trying to figure out.

Time to reclaim some slack in the system, lest it all break down.

Oh right, I should add that ThingsCon is explicitly not part of that culling: I’ll be as involved in ThingsCon as ever for the foreseeable future.

So onward and upward, light as a feather.

Trustmarks, trustmarks, trustmarks

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:08.

A couple of years ago, with ThingsCon and support from Mozilla, we launched a trustmark for IoT: The Trustable Technology Mark.

While launching and growing the Trustable Technology Mark hasn’t been easy and we’re currently reviewing our setup, we learned a lot during the research and implementation phase. So occasionally, others will ping us for some input on their own research journey. And since we learned what we learned, to a large degree, from others who generously shared their insights and time with us while we did our own initial research (Alex, Laura, JP: You’re all my heroes!), we’re happy to share what we’ve learned, too. After all, we all want the same thing: Technology that’s responsibly made and respects our rights.

So I’m delighted to see that one of those inputs we had the opportunity to give led to an excellent report on trustmarks for digital technology published by NGI Forward: Digital Trustmarks (PDF).

It’s summarized well on Nesta’s website, too: A trustmark for the internet?

The report takes a comprehensive look at why a trustmark for digital technology is very much needed and where the challenges and opportunities lie, and it offers pathways worth exploring.

Special thanks to author Hessy Elliott for the generous acknowledgements, too.

Monthnotes for January 2020

January was mostly for writing, and some scheming for projects yet-to-be-unveiled.

ONGOING WORK

On a lark, I switched back to using project code names. Autonomous Antelope is in the final writing stage. Bamboozling Badger is about to be published. Colorful Caribou still needs a bit of polishing, and for Eerie Eraser, we’re drafting a concept to prototype and test soon.

WHAT’S NEXT?

Lots of writing again this month, then maybe a break, then back into the fray. Looks like in April or so, there’ll be capacity to take on new projects — let’s discuss soon.