I’m excited to pull back the curtain on a brand new project that I’m doing with Körber Stiftung, specifically with their democracy program.
We’ve started producing a series of video conversations called Corona Crisis — Lessons for the Future of Cities, which is about… you guessed it: How different cities respond to the coronavirus crisis.
We’ll talk to local leaders about their specific, local challenges, opportunities and strategies: While some strategies are universal, like washing hands and keeping a safe distance, others are more tailored to the local context. And those strategies are what we want to learn about, so others can learn from them, too.
And while overall we’re pretty broad with our interest in this, there are two focus areas that we’ll be emphasizing: The use of digital tools (and the trade-offs that inform the decisions around them) as well as how engagement with civil society works.
While every city and administration has to find their own way, I’m convinced there are lessons to be learned from others. Over time, a picture should emerge: Patterns of approaches that seem more promising than others, best practices, and maybe some surprising insights.
I expect we’ll start sharing the videos within a week or two. When we do, I’ll update this post to include links.
I hope you’ll enjoy the show. Let me know what you think!
Artificial Intelligence (AI) has emerged as a key technology that has gripped the attention of governments around the globe. The European Commission has made AI leadership a top priority. While seeking to strengthen research and commercial deployment of AI, Europe has also embraced the role of a global regulator of technology, and is currently the only region where a regulatory agenda on AI rooted in democratic values – as opposed to purely market or strategic terms – can be credibly formulated. And given the size of the EU’s internal market, this can be done with a reasonable potential for global impact.

However, there is a gap between Europe’s lofty ambitions and its actual institutional capacity for research, analysis and policy development to define and shape the European way on AI guided by societal values and the public interest. Currently, the debate is mostly driven by industry, where most resources and capacity for technical research are located. European civil society organizations that study and address the social, political and ethical challenges of AI are not sufficiently consulted and struggle to have an impact on the policy debate.

Thus, the EU’s regulatory ambition faces a serious problem: If Europe puts societal interests and values at the center of its approach to AI, it requires robust engagement and relationships between governments and many diverse actors from civil society. Otherwise, any claims regarding human-centric and trustworthy AI would come to nothing.
Therefore, EU policy-making capacity must be supported by a broader ecosystem of stakeholders and experts especially from civil society. This AI & Society Ecosystem, a subset of a broader AI Ecosystem that also includes industry actors, is essential in informing policy-making on AI, as well as holding the government to its self-proclaimed standard of promoting AI in the interest of society at large. We propose the ecosystem perspective, originating from biology and already applied in management and innovation studies (also with regard to AI). It captures the need for diversity of actors and expertise, directs the attention to synergies and connections, and puts the focus on the capacity to produce good outcomes over time. We argue that such a holistic perspective is urgently needed if the EU wants to fulfil its ambitions regarding trustworthy AI. The report aims to draw attention to the role of government actors and foundations in strengthening the AI & Society Ecosystem.
The report identifies ten core functions, or areas of expertise, that an AI & Society Ecosystem needs in order to contribute meaningfully to the policy debate: policy, technology, investigation, and watchdog expertise; expertise in strategic litigation and in building public-interest use cases of AI; campaign and outreach expertise; research expertise; expertise in promoting AI literacy and education; and sector-specific expertise. In a fully flourishing ecosystem, these functions need to be connected so that they complement and benefit from each other.
The core ingredients needed for a strong AI & Society Ecosystem already exist: Europe can build on strengths like a strong tradition of civil society expertise and advocacy, and has a diverse field of digital rights organizations that are building AI expertise. It has strong public research institutions and academia, and a diverse media system that can engage a wider public in a debate around AI. Furthermore, policy-makers have started to acknowledge the role of civil society for the development of AI, and we see new funding opportunities from foundations and governments that prioritize the intersection of AI and society.
There are also clear weaknesses and challenges that the ecosystem has to overcome: Many organizations lack the resources to build the necessary capacity, and there is little access to independent funding. Fragmentation across Europe lowers the visibility and impact of individual actors. We see a lack of coordination between civil society organizations weakening the AI & Society Ecosystem as a whole. In policy-making there is a lack of real multi-stakeholder engagement, and civil society actors often do not have sufficient access to the relevant processes. Furthermore, the lack of transparency about where and how AI systems are being used puts an additional burden on civil society actors engaging in independent research, policy and advocacy work.
Governments and foundations play a strong role in the development of a strong and impactful AI & Society Ecosystem in Europe. They not only provide important sources of funding on which AI & Society organizations depend. They are also themselves important actors within that ecosystem, and hence have other types of non-monetary support to offer. Policy-makers can, for example, lower barriers to participation and engagement for civil society. They can also create new resources for civil society, e.g. by encouraging NGOs to participate in government-funded research or by designing grants especially with small organizations in mind. Foundations shape the ecosystem through broader support, including aspects such as providing training and professional development. Furthermore, foundations are in a position to act as conveners and to build bridges between the different actors that are necessary in a healthy ecosystem. They are also needed to fill funding gaps for functions within the ecosystem, especially where government funding is hard or impossible to obtain. Overall, in order to strengthen the ecosystem, two approaches come into focus: managing relationships and managing resources.
Like I wrote in the last monthnotes, November to January are blocked out for research and writing. That said, there was a bit of other stuff going on outside heads-down writing, most notably the annual ThingsCon conference.
I also wrapped up my Edgeryders fellowship. I’m grateful for the opportunity to pursue independent research into how we can make smart cities work better for citizens.
I’ve submitted the final pieces of writing for a Brussels-based foundation. The final report should be out soon. This is roughly in the area of European digital agenda & smart city policy.
With a Berlin-based think tank, a research project is in the phase of final write-up of results and conclusions. This will likely take us well into January, then on to collect some more feedback on the final drafts. More updates when I have them. This is in the area of responsible AI development.
A more recent project around impact of smart cities on labor rights has kicked off in December. Lots of research and writing to do there well into January.
Earlier this year, Nadia E. kindly invited me to join Edgeryders (ER) as a fellow to do independent research as part of their Internet of Humans program. From June to December 2019 I was an ER fellow and had the opportunity to work with the lovely John Coate & Nadia, and the fantastic team and community there at ER.
On the ER platform, there’s a very active community of people who’re willing to invest time and energy in debate. It allowed me to gather a bunch of valuable feedback on early ideas and thoughts.
As part of my fellowship, I also had the opportunity to do a number of interviews. John interviewed me to kick things off on the Edgeryders platform, and I interviewed a few smart folks like Jon Rogers, Ester Fritsch, Marcel Schouwenaar and Michelle Thorne (disclosure: my partner), all of whom do interesting and highly relevant work around responsible emerging tech: In many ways, their work helps me frame my own thinking, and the way it’s adjacent to my work helps me find the boundaries of what I focus on. If this list seems like it’s a list of long-time collaborators, that’s no coincidence but by design: Part of how ER works is by integrating existing networks and amplifying them. So fellows are encouraged to bring in their existing networks.
Some of these interviews are online already, as are some reflections on them:
November to January have essentially been blocked out for research and writing.
My work with a Brussels-based foundation is in the final stages of editing. I expect the final report to be published within the month. Expect lots of smart city / digital agenda thinking there.
With a Berlin-based think tank, we’ve had another workshop on Europe’s capacity for developing ethical/responsible AI. This brings the research phase slowly to its end and now it’s time to synthesize and write up the findings. Probably to be published in January, or February at the latest.
A new project is ramping up with a global labor group to explore the intersection of smart cities and labor rights.
I was kindly invited to participate in Forum Offene Stadt, an event jointly hosted by Körber Stiftung and Open Knowledge Foundation. The event brings together civil society, government & administration, and the private sector. It’s good to see these gatherings as there’s still nowhere near enough exchange at those intersections, especially where topics around the impact of emerging technologies are discussed.
Kompakt Magazin (by IG BCE) did a longer interview with me to talk about technology and how we must ensure it doesn’t discriminate against people, and shouldn’t be treated as if it’s pre-determined. The interview is in German: Technologie darf Menschen nicht diskriminieren (online, e-paper).
Lots of research and writing this month and next. But I’ll be more than happy to take a quick break from it all to head on over to Rotterdam (12/13 Dec) for ThingsCon, my all time favorite community event. Learn more about ThingsCon here and join us!
I’ve been writing a newsletter for a few years now that I just rarely feature here, and usually just mention every now and then. At a recent conference, conversations with Ton Zylstra, Elmine Wijnia, Peter Rukavina and others all reminded me of the value of creating a more permanent archive that you host yourself (to a degree) rather than just relying on something as potentially impermanent as a commercial newsletter provider. (Ton blogged about it, too.) It is in that spirit that I’ll try for a bit to cross-post (most of) my newsletter here.
Please note that (for workflow and time saving reasons) this is a copy & paste of a pre-final draft; the final corrections and edits happen within Tinyletter, the email service. So there might be a few typos here that aren’t in the newsletter itself.
The preferred way to receive this (preferred by the author at least) is most certainly the newsletter, but here’s the archived version for those who prefer a different format. Also, take it as a sample/teaser. And if you think this is for you, why don’t you come along for the ride:
Ambient privacy & participation at the (smart) street level
“Sustainability always looks like underutilization when compared to resource extraction”
— Deb Chachra, Metafoundry
In Berlin, we’re coming off of the tail end of a massive heat wave with somewhere near 40C peak yesterday. A small stretch of forest burned on the city’s edge, a much larger one just south of the city. The latter included a former military training ground; ordnance was involved. There were warnings of strange smells wafting through the city. Stay calm, everyone. This is just the new normal.
Today’s pieces mostly run along the thread of privacy & how to make sure we can all have enough to see democracy thrive: From the macro level through the smart city lens down to the issue of microphones embedded in our kitchens. Enjoy!
Know someone who might enjoy this newsletter or benefit from it? Feel free to forward as you see fit, or send out a shout-out to tinyletter.com/pbihr. If you’d like to support my independent writing directly, the easiest way is to join the Brain Trust membership.
Starting a new fellowship. I mentioned it briefly before, but am happy to announce officially: Edgeryders invited me to be a fellow as part of their Internet of Humans program, exploring some questions around how to make smart cities work for citizens first and foremost (as opposed to vendors or administration first). I’m honored and grateful; this helps me dig deeper into these issues that — as you know well if you’re reading this — have been on the top of my mind for some time.
The network provides. For Zephyr Berlin, our apparel staples side project that we’ve been engaged in since 2016, I reached out to Twitter to see if anyone could hook me up with some recommendations/leads/pointers to learn more about how and where to produce something with merino wool in Europe. And lo and behold, we got so many excellent leads — thank you! (You know who you are.) I’m not sure what might come out of this, if anything, but I know it’s more than just fun to learn more and experiment with new ideas.
One of my favorite writers online — especially about travel and the internet industry — is the ever brilliant Maciej Cegłowski, founder of Pinboard and Tech Solidarity and an outspoken tech critic from within, so to speak. He just wrote a long-ish piece on what he coins “ambient privacy”, i.e. the idea that our privacy is impacted not just by the things we actively choose to share through, for example, social media, but also by the environments we move through, from other people’s social media use to sensors and GPS and the internet watching us through surveillance ads and all that jazz. It’s essentially an inversion of our traditional perspective of privacy as a default to non-privacy as a default (not a desirable outcome, to be sure!) — or the shift from individual data rights to collective data rights in the words of Martin Tisné (linked a few times before).
If you read one thing today, make it this one, I urge you. I loved it so much, I kind of want to quote the whole thing. Instead, a few snippets as teasers more than anything (highlights mine):
“This requires us to talk about a different kind of privacy, one that we haven’t needed to give a name to before. For the purposes of this essay, I’ll call it ‘ambient privacy’—the understanding that there is value in having our everyday interactions with one another remain outside the reach of monitoring, and that the small details of our daily lives should pass by unremembered. What we do at home, work, church, school, or in our leisure time does not belong in a permanent record. Not every conversation needs to be a deposition. (…) Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.”
“In the eyes of regulators, privacy still means what it did in the eighteenth century—protecting specific categories of personal data, or communications between individuals, from unauthorized disclosure. Third parties that are given access to our personal data have a duty to protect it, and to the extent that they discharge this duty, they are respecting our privacy. (…) The question we need to ask is not whether our data is safe, but why there is suddenly so much of it that needs protecting. The problem with the dragon, after all, is not its stockpile stewardship, but its appetite.”
“Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society.” (…) “Telling people that they own their data, and should decide what to do with it, is just another way of disempowering them.”
“The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.”
“When all discussion takes place under the eye of software, in a for-profit medium working to shape the participants’ behavior, it may not be possible to create the consensus and shared sense of reality that is a prerequisite for self-government. If that is true, then the move away from ambient privacy will be an irreversible change, because it will remove our ability to function as a democracy.”
And, last but not least:
“Our discourse around privacy needs to expand to address foundational questions about the role of automation: To what extent is living in a surveillance-saturated world compatible with pluralism and democracy? What are the consequences of raising a generation of children whose every action feeds into a corporate database? What does it mean to be manipulated from an early age by machine learning algorithms that adaptively learn to shape our behavior?”
Ok, so I did end up quoting at great length. But really, I think it’s that good and important.
Your blender is listening
There was fun news — for some definition of fun! — coming out of France this week. A group of hackers discovered that a connected blender had shipped with a built-in microphone and bad security practices. So this blender could be used to spy on very much unsuspecting buyers.
But let’s start at the beginning (also available on Twitter), because this is exactly the kind of irresponsible stuff that we at ThingsCon try to fight every day. Here’s the blender we’re talking about, on the right side:
[Image not embedded]
See the knobs on the blender? It’s a little hard to tell in the photo, but these are virtual buttons: it’s a touch screen. (Insert your own joke about virtual buttons emulating physical buttons.) Also note that it says “Ausverkauft” under the product — sold out.
So what’s the story here? Lidl, the big chain discounter, sold the Monsieur Cuisine Connect. The connected blender is described in some articles as a Thermomix rival/clone, sold at a fraction of the price.
“Designed in Germany and produced in China, it has a seven-inch touch screen that can be connected via wifi to download recipes for free. And like any device connected to the network, it is susceptible to being hacked. That is precisely what two techies have done, who have disemboweled the robot and discovered important security and privacy issues. The device has a small microphone and a speaker and, in addition, is equipped with Android 6.0, a version that is not updated since October 2017.”
The article quotes Lidl’s ED of marketing in France as saying: “The supermarket chain defended itself arguing that they had foreseen that ‘the device could be controlled by voice and eventually by Alexa, we left the micro, but it is totally inactive and it is impossible to activate it remotely’”.
So what we see here is an undisclosed microphone in a blender, and a machine running an outdated, long insecure OS version. On their website, the manufacturer doesn’t even acknowledge the issue, let alone address it meaningfully. Instead they just set the product to “sold out” in their online shop, which seems a dubious claim at best. It’s a really instructive case study for the field of product development for connected products and IoT in general. Should be (and might become!) mandatory reading for students.
When I first tweeted about this, I claimed — somewhat over-excitedly — that it’s shoddy practice to keep too many feature options open for the future, and that this was a main attack vector. I think that’s not totally off, but I want to thank Jeff Katz (always helpful & well informed: a rare, excellent mix of characteristics indeed!) for correcting me and keeping me honest when he pointed out that it’s normal, even good practice in hardware products to put in all the enabling technologies if you have the intention to use them, but you need to be transparent: “The fuckup was not disclosing that it was there, at all (…) Being opaque and shipping old software is far more common an attack vector.” Which is a good point, well made.
As someone who spent a lot of time and too much money on connected speakers specifically so they would not be Alexa-ready (read: we wanted microphone-free speakers), I always find it a little traumatizing to learn about all the embedded mics. But I’m not going to lie: this feels like a losing battle at the moment.
Microphones at the street level
Ok, a strained segue if ever there was one, but here you have it. Brain still in heat meltdown mode! The Globe and Mail covers Sidewalk Labs’ new development plan for the Toronto waterfront. Spoiler alert: This poster child of smart city development has become the lightning rod for opponents of smart cities, and it’s facing a lot of pushback. (For the record: Rightly so, in my opinion.)
The author identifies multiple issues, from the very concrete to the very meta: Apparently the 1,500+ page document doesn’t answer the big-picture questions of what Sidewalk Labs wants in Toronto: What do they really offer, and what do they ask for in return?
“It’s remarkable that, after 20 months of public presentations, lobbying and “consultations” by the company – a process that gave it access to public officials that other real estate companies never get – I still don’t know, really, what [Sidewalk Labs chief executive] Doctoroff means.”
Also, given that this is an Alphabet company — and I’d like to stress both Alphabet as the lead actor as well as company as the underlying economic model — the question of handling data is front and center:
“Questions of data privacy and of the economic benefits of neighbourhood-scale data are exceptionally difficult to answer here.”
Smart city scholar (and critic) Anthony Townsend takes it a step further in this direction:
“Data governance has been a lightning rod because its new and scary. Early on, Sidewalk put more energy into figuring out how the robot trash chutes would work than how to control data it and others would collect in the proposed district. As part of Alphabet, you’d think this would have been a source of unique added value versus say, a conventional development. Not so? (…)”
Zooming out, he also wonders if the old narrative of attracting big businesses to boost the local economy for all, sustainably, might have run its course:
“The kinds of companies that want to set up shop in cities, today, the flagships of “surveillance capitalism” as Shoshanna Zuboff calls it, no longer operate like the industrial engines of the past. They source talent and services from all over the world, wherever its cheapest. Being near a big population is more useful because it supports a big airport, than because it provides a big pool of workers. (…) Google, Amazon, and their ilk are more like knowledge blackholes. Ideas and talent go in and they don’t come up, at least in a form usable to others. Seen another way?— it is precisely their ability to contain knowledge spillovers that has powered their success.”
And mayors go along with it, for now, because desperation, digging their own holes deeper and deeper:
“Economic development in cities today is a lot like hunting whales. Mayors try to land big deals that promise lots of jobs. They wield extensive tools, explicitly designed to operate outside of local legislative control, to make the needed concessions to outbid other cities. It’s in many ways a race to the bottom. They all hate it, but they do it.”
I have no answers to any of this. All I can offer is a few pointers that might lead to better approaches over time:
Put citizens first, administrations second, and vendors a distant third
Participatory practices and decision making are key here, and not window dressing
Together, they just might allow us to shift perspective enough to strengthen rather than erode democracy in our cities and beyond.
Currently “reading” with minimal progress: How to Do Nothing: Resisting the Attention Economy (Jenny Odell), Exhalation (Ted Chiang), Netter is Better (Thomas Hermann)
If you’d like to work with me in the upcoming months, I have very limited availability, so let’s have a chat!
Next week will be the season finale for this newsletter before I head off on a summer break; it’ll pick back up after the summer. In the meantime, it’s a week of crunch time to get everything to a place where I can leave and the teams I’m working with have what they need from me. So, heads down, and onward.
Have a lovely end of the week!
Know someone who might enjoy this newsletter or benefit from it? A shout out to tinyletter.com/pbihr or a forward is appreciated!