
How to plan & govern a smart city?


Taking the publication of Sidewalk Labs’ Master Innovation and Development Plan for the smart city development at Toronto’s waterfront (“Toronto Tomorrow”) as an occasion to think out loud about smart cities in general, and smart city governance in particular, I took to Twitter the other day.

If you don’t want to read the whole thing there, here’s the gist: I did a close reading of a tiny (!) section of this giant data dump that is the four-volume, 1,500+ page Sidewalk Labs plan. The section I picked was the one that John Lorinc highlighted in this excellent article — a couple of tables on page 222 of the last of these four volumes, in the section “Supplemental Tables”. This is the section that gets no love from the developers; it’s also the section that deals very explicitly with governance of this proposed development. So it’s pretty interesting. This, by the way, is also roughly the area of research for my Edgeryders fellowship.

On a personal note: It’s fascinating to me how prescient our speakers at Cognitive Cities Conference were back in 2011 – eight years is a long time in this space, and it feels like we invited exactly the right folks back then!

Smart cities & governance: A thorny relationship

In this close reading I focused on exactly that: What does governance mean in a so-called smart city context? What is it that’s being governed, and how, and maybe most importantly, by whom?

Rather than re-hash the thread here, just a quick example to illustrate the kind of issues at stake: Where this plan speaks of publicly accessible spaces and decision-making that takes community input into account, I argue that we need public spaces and full citizen rights. Defaults matter, and in cities we need the default to be public space, with citizens wielding the final decision-making power over their environment. Not even the most benign or innovative company or other non-public entity is an adequate replacement for a democratically elected administration/government, and any but the worst governments — cumbersome as a government might be in some cases — are better than the alternatives.

My arguments didn’t go unnoticed, either. The Canadian newspaper The Star picked up my thread on the thorny issue of governance and put it in the context of other experts critical of privatizing urban space; the few of those experts I know personally make me think I’m in good company there.

What’s a smart city, anyway?

As a quick but worthwhile diversion I highly recommend the paper Smart cities as corporate storytelling (Ola Söderström, Till Paasche, Francisco Klauser, published in City vol. 18 (2014) issue 3). In it, the authors trace not just the origin of the term smart cities but also the deliberate framing of the term, a framing that serves mostly the vendors of technologies and services in this space, in efficient and highly predictable ways. They base their analysis on IBM’s Smarter City campaign (highlights mine):

“this story is to a large extent propelled by attempts to create an ‘obligatory passage point’ (…) in the transformation of cities into ‘smart’ ones. In other words it is conceived to channel urban development strategies through the technological solutions of IT companies.”

These stories are important and powerful:

Stories are important because they provide actors involved in planning with an understanding of what the problem they have to solve is (…). More specifically, they play a central role in planning because they “can be powerful agents or aids in the service of change, as shapers of a new imagination of alternatives.” (….) stories are the very stuff of planning, which, fundamentally, is persuasive and constitutive storytelling about the future.” (…)

The underlying logic is that of a purely data-driven, almost mechanical model of urban management, one that is overly simplistic and framed as neither political nor in need of subject-matter expertise. This logic is inherently faulty. Essentially, it does away with the messiness that humans bring with them, along with all their pesky, complex socio-cultural issues.

In this approach, cities are no longer made of different – and to a large extent incommensurable – socio-technical worlds (education, business, safety and the like) but as data within systemic processes. (…) As a result, the analysis of these ‘urban themes’ no longer seem to require thematic experts familiar with the specifics of a ‘field’ but only data-mining, data interconnectedness and software-based analysis.

So: Governance poor, underlying logic poor. What could possibly go wrong.

A better way to approach smart city planning

In order to think better, more productively about how to approach smart cities, we need to step back and look at the bigger picture.

If you follow my tweets or my newsletter, you’ll have encountered the Vision for a Shared Digital Europe before. It’s a proposed alternative for anything digital in the EU that would, if adopted, replace the EU’s Digital Single Market (DSM). Where the EU thinks about the internet, it does so through the lens of the DSM — the lens of markets first and foremost. The Vision for a Shared Digital Europe (SDE), however, proposes to replace this markets-first logic with 4 alternative pillars:

  • Cultivate the Commons
  • Decentralize infrastructure
  • Enable self-determination
  • Empower public institutions

Image: Vision for a Shared Digital Europe (shared-digital.eu)

I think these 4 pillars should hold up pretty well in the smart city planning context. Please note just how different this vision is from what Sidewalk Labs (and the many other smart city vendors) propose:

  • Instead of publicly available spaces we would see true commons, including but not limited to space.
  • Instead of centralized data collection, we might see decentralization, meaning a broader, deeper ecosystem of offerings and more resilience (as opposed to just more efficiency).
  • Instead of being solicited for “community input”, citizens would actively shape and decide over their future.
  • And finally, instead of working around (or with, to a degree) public administrations, a smart city following this school of thought would double down on public institutions and give them a strong mandate, sufficient funding, and in-house capacity to match the industry’s.

It would make for a better, more democratic and more resilient city.

So I just want to put this out there. And if you’d like to explore this further together, please don’t hesitate to ping me.

A list of resources for ethical, responsible, and public interest tech development


For an upcoming day of teaching I started compiling a list of resources relevant for the ethical, responsible development of tech, especially public interest tech. This list is very much incomplete, a starting point.

(For disclosure’s sake, I should add that I’ve started lists like this before: I’ll try, but cannot promise, to maintain this one. Just assume it’s a snapshot, useful primarily in the now and as an archive for future reference.)

I also can take only very partial credit for it since I asked Twitter for input. I love Twitter for this kind of stuff: Ask, and help shall be provided. My Twitter is a highly curated feed of smart, helpful people. (I understand that for many people Twitter feels very, very different. My experience is privileged that way.) A big thank you in particular to Sebastian Deterding, Alexandra Deschamps-Sonsino, Dr. Laura James, Iskander Smit, and to others I won’t name because they replied via DM and this might have been a privacy-related decision. You know who you are – thank you!

Here are a bunch of excellent starting points to dig deeper, ranging from books to academic papers to events to projects to full-blown reading lists. This list covers a lot of ground. You can’t really go wrong here, but choose wisely.

Projects & papers:

Organizations:

Books:

  • Everyware (Greenfield)
  • The Epic Struggle of the Internet of Things (Sterling)
  • Weapons of Math Destruction (O’Neil)
  • Smart Cities (Townsend)
  • Future Ethics (Bowles)

Libraries, reading lists & lists of lists:

Which type of Smart City do we want to live in?


Connectivity changes the nature of things. It quite literally changes what a thing is.

Add connectivity to, say, the proverbial internet fridge, and it stops being just an appliance that chills food. It becomes a device that senses; that captures, processes and shares information; that acts on this processed information. The thing-formerly-known-as-fridge becomes an extension of the network. It makes the boundaries of the apartment more permeable.

So connectivity changes the fridge. It adds features and capabilities. It adds vulnerabilities. At the same time, it also adds a whole new layer of politics to the fridge.
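To make that shift concrete, here’s a minimal sketch in Python. Everything in it (the class, the vendor endpoint, the sensor readings) is hypothetical; it only illustrates the sense, process, share, act pattern described above.

```python
# Toy sketch (all names hypothetical): connectivity turns an appliance into a
# network node that senses, decides locally, and reports to a party outside
# the home. That outward data flow is the new layer of politics.
import json
import urllib.request


class ConnectedFridge:
    def __init__(self, vendor_endpoint: str):
        # Assumption: the vendor runs a cloud service the fridge reports to.
        self.vendor_endpoint = vendor_endpoint

    def sense(self) -> dict:
        # A real device would read temperature and door sensors here.
        return {"temp_c": 4.2, "door_open": False}

    def run_once(self) -> None:
        reading = self.sense()
        # Local, data-driven decision making...
        if reading["temp_c"] > 8.0:
            print("Cooling harder.")
        # ...and the part that changes what the thing *is*: the reading leaves
        # the apartment. Who receives it, and on what terms, is a political question.
        payload = json.dumps(reading).encode("utf-8")
        urllib.request.urlopen(self.vendor_endpoint, data=payload)
```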

Power dynamics

Why do I keep rambling on about fridges? Because once we add connectivity — or rather: data-driven decision making of any kind — we need to consider power dynamics.

If you’ve seen me speak at any time throughout the last year, chances are you’ve encountered this slide that I use to illustrate this point:

The connected home and the smart city are two areas where the changing power dynamics of IoT (in the larger sense) and data-driven decision making manifest most clearly: The connected home, because it challenges our notions of privacy (as they have developed over the last 150 years in the global West). And the smart city, because there is no opting out of public space. Any sensor, any algorithm involved in governing public space impacts all citizens.

That’s what connects the fridge (or home) and the city: Both change fundamentally by adding a data layer. Both acquire a new kind of agenda.

3 potential cities of 2030

So as a thought experiment, let’s project three potential cities in the year 2030 — just over a decade from now. Which of these would you like to live in, which would you like to prevent?

In CITY A, a pedestrian crossing against a red light is captured by facial recognition cameras and publicly shamed. Their CitizenRank is downgraded to IRRESPONSIBLE, their health insurance price goes up, and they lose the permission to travel abroad.

In CITY B, wait times for the subway lines are enormous. Luckily, your Amazon Prime membership has expanded to cover priority access to this formerly public infrastructure, and now includes dedicated quick-access lines to the subway. With Amazon Prime, you are guaranteed Same Minute Access.

In CITY C, most government services are coordinated through a centralized government database that identifies all citizens by their fingerprints. This isn’t restricted to digital government services, but also covers credit card applications or buying a SIM card. However, the official fingerprint scanners often fail to scan manual laborers’ fingerprints correctly. The backup system (iris scans) doesn’t work too well for those with eye conditions like cataracts. Whenever these ID scans don’t work, the government service requests are denied.
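As a toy illustration of City C’s failure mode (this is not Aadhaar’s actual logic, just a sketch of the deny-by-default pattern described above):

```python
# Hypothetical sketch: a deny-by-default biometric ID check.

def fingerprint_matches(person: dict) -> bool:
    # Worn fingerprints (e.g. manual laborers') often fail to scan.
    return person.get("fingerprint_readable", True)

def iris_matches(person: dict) -> bool:
    # The backup fails for people with eye conditions such as cataracts.
    return person.get("iris_readable", True)

def request_service(person: dict) -> str:
    if fingerprint_matches(person) or iris_matches(person):
        return "GRANTED"
    # No human fallback, no manual override: failure to scan means denial.
    return "DENIED"

print(request_service({"fingerprint_readable": False, "iris_readable": False}))
# -> DENIED: the system design, not the citizen, produces the exclusion.
```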

Now, as you may have recognized, this is of course a trick question. (Apologies.) Two of these cities more or less exist today:

  • CITY A represents the Chinese smart city model based on surveillance and control, as piloted in Shenzhen or Beijing.
  • CITY C is based on India’s centralized government identification database, Aadhaar.
  • Only CITY B is truly, fully fictional (for now).

What model of Smart City to optimize for?

We need to decide what characteristics of a Smart City we’d like to optimize for. Do we want to optimize for efficiency, resource control, and data-driven management? Or do we want to optimize for participation & opportunity, digital citizen rights, equality and sustainability?

There are no right or wrong answers (even though I’d clearly prefer a focus on the second set of characteristics), but it’s a decision we should make deliberately. One leads to favoring monolithic centralized control structures, black box algorithms and top-down governance. The other leads to decentralized and participatory structures, openness and transparency, and more bottom-up governance built in.

Whichever we build, these are the kinds of dependencies we should keep in mind. I’d rather have an intense, participatory deliberation process that involves all stakeholders than just quickly throwing a bunch of Smart City tech into the urban fabric.

After all, this isn’t just about technology choices: It’s the question of what kind of society we want to live in.

Living in the New New Normal


Image: Unsplash (derveit)

Please note: This post veers a bit outside my usual topics for this blog, so you can read the post in full on Medium.

It’s the year 2019. What’s it like to live in the New New Normal, in a world where the once-disruptive Silicon Valley tech companies (GAFAM) have become the richest, most powerful companies in the world?

In a world in which Chinese tech giants (BAT), too, have reached a level of maturity and scale that rivals those Silicon Valley companies, and are starting to push outside of China and onto the world stage? In which these companies represent not change, innovation and improvement (of the world, or at least of the online experience) but the status quo; where they are the entrenched powers defending their positions? In a world that has left the utopian ideas of the early open web (especially openness and decentralization) in the dust, and where instead we see an internet that is more consolidated and centralized than ever?

In other words, what’s it like to live between increasingly restrictive “ecosystems” of vendor lock-in, where the main choice is between the Silicon Valley model and the Chinese model?

Read the full post on Medium.

Monthnotes for October 2018


This month: Mozfest, a Digital Rights Cities Coalition, Trustable Technology Mark updates, ThingsCon Rotterdam.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Mozfest

Mozfest came and went, and was lovely as always. It was the 9th Mozfest, 8 or so of which I participated in — all the way back to the proto (or prototyping?) Mozfest event called Drumbeat in Barcelona in, what, 2010? But no time for nostalgia, it was bustling as always. Two things were different for me this time: one, I participated as a Mozilla Fellow, which means a different quality of engagement; and two, M and I brought the little one, so we had a toddler in tow. Which, I’m delighted to say, worked a charm!

A Digital Rights Cities Coalition

At Mozfest, the smart and ever lovely Meghan McDermott (see her Mozilla Fellows profile here) hosted a small invite-only workshop to formalize a Digital Rights Cities Coalition — a coalition of cities and civil society to protect, foster, promote digital rights in cities. I was both delighted and honored to be part of this space, and we’ll continue working together on related issues. The hope is that my work with ThingsCon and the Trustable Technology Mark can inform and contribute value to that conversation.

Trustable Technology Mark

The Trustable Technology Mark is hurtling towards its official launch at a good clip. After last month’s workshop weekend at Casa Jasmina, I just hosted a Trustmark session at Mozfest. It was a good opportunity to have new folks take a look at the concept with fresh eyes. I’m happy to report that I walked away with some new contacts and leads, some solid feedback, and an overall sense that, at least for the obvious points of criticism that present themselves at first glance, there are solid answers now as to why it’s done this way and not that.

Image: Courtesy of Dietrich, a photo of me just before kicking off the session, wearing a neighboring privacy booth’s stick-on mustache.

Also, more policy and academic partners are signing on, which is a great sign, and more leads are coming in from companies that want to apply for the Trustmark.

Next steps for the coming weeks: Finalize and freeze the assessment form, launch a website, line up more academic and commercial partners, reach out to other initiatives in the space, finalize trademarks (all ongoing), reach out to press, plan launch (starting to prep these two).

The current assessment form asks a total of 48 questions across 5 dimensions, with a total of 29 required YESes. Here’s the most up-to-date presentation:
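For a rough sense of the mechanics, here is a minimal sketch of how a checklist like this could be evaluated. The dimension names and questions below are invented for illustration; only the overall structure (YES/NO questions grouped into dimensions, some of them required) reflects the assessment form described above.

```python
# Hypothetical sketch of a checklist evaluation. Dimension names and questions
# are invented; only the structure (required vs. optional YES/NO questions,
# grouped into dimensions) mirrors the Trustmark assessment described above.

ASSESSMENT = {
    "Privacy & Data Practices": [
        ("Is data collection minimized to what the product needs?", "required"),
        ("Can users export their data?", "optional"),
    ],
    "Transparency": [
        ("Is there a plain-language privacy policy?", "required"),
    ],
    # ...further dimensions and questions would follow...
}

def passes(assessment: dict, answers: dict) -> bool:
    """answers maps a question to True (YES) or False (NO).
    Every required question must be answered YES to pass."""
    for dimension, questions in assessment.items():
        for question, kind in questions:
            if kind == "required" and not answers.get(question, False):
                print(f"Failed required question in '{dimension}': {question}")
                return False
    return True

# Example: a single required NO fails the whole assessment.
print(passes(ASSESSMENT, {
    "Is data collection minimized to what the product needs?": True,
    "Is there a plain-language privacy policy?": False,
}))  # -> False
```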


ThingsCon Rotterdam

Our annual ThingsCon conference is coming up: Join us in Rotterdam Dec 6-7!

Early bird is just about to end (?), and we’re about to finalize the program. It’s going to be an absolute blast. I’ll arrive happily (if probably somewhat bleary-eyed after a 4am start that day) in Rotterdam to talk Trustable Technology and ethical tech, we’ll have a Trustmark launch party of some sort, we’ll launch a new website (before or right there and then), and we’ve been lining up a group of speakers so amazing I’m humbled even just listing them:

Alexandra Deschamps-Sonsino, Cennydd Bowles, Eric Bezzem, Laura James, Lorenzo Romanoli, Nathalie Kane, Peter Bihr, Afzal Mangal, Albrecht Kurze, Andrea Krajewski, Anthony Liekens, Chris Adams, Danielle Roberts, Dries De Roeck, Elisa Giaccardi, Ellis Bartholomeus, Gaspard Bos, Gerd Kortuem, Holly Robbins, Isabel Ordonez, Kars Alfrink, Klaas Kuitenbrouwer, Janjoost Jullens, Ko Nakatsu, Leonardo Amico, Maaike Harbers, Maria Luce Lupetti, Martijn de Waal, Martina Huynh, Max Krüger, Nazli Cila, Pieter Diepenmaat, Ron Evans, Sami Niemelä, Simon Höher, Sjef van Gaalen.

That’s only the beginning!

Here’s part of the official blurb, and more soon on thingscon.com and thingscon.nl/conference-2018

Now, 5 years into ThingsCon, the need for responsible technology has entered the mainstream debate. We need ethical technology, but how? With the lines between IoT, AI, machine learning and algorithmic decision-making increasingly blurring it’s time to offer better approaches to the challenges of the 21st century: Don’t complain, suggest what’s better! In this spirit, going forward we will focus on exploring how connected devices can be made better, more responsible and more respectful of fundamental human rights. At ThingsCon, we gather the finest practitioners; thinkers & tinkerers, thought leaders & researchers, designers & developers to discuss and show how we can make IoT work for everyone rather than a few, and build trustable and responsible connected technology.

Media, etc.

In the UK magazine NET I wrote an op-ed about Restoring Trust in Emerging Tech. It’s in the November 2018 issue, out now – alas, I believe, print only.

Reminder: Our annual ThingsCon report The State of Responsible IoT is out.

What’s next?

Trips to Brussels, Rotterdam, NYC to discuss a European digital agenda, launch a Trustmark, co-host ThingsCon, translate Trustmark principles for the smart city context, prep a US-based ThingsCon conference.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Yours truly, P.

On Business Models & Incentives


We’ve been discussing ethics & responsibility in IoT specifically, and business more generally, a lot lately. This seems more relevant than ever today, simply because we see so much damage done where wrong business models—and hence, wrong incentives—drive and promote horrible decision-making.

One blatantly obvious example is Facebook and its focus on user engagement. I’d like to make clear I pick Facebook because it is simply the best known example of an industry-wide trend.

Advertisers have been sold on “engagement” as a metric ever since the web made it possible to measure user behavior (i.e. what used to be called “Web 2.0”, now “social media”). Before that (early web), it was various flavors of page impressions as a proxy for reach. Before that (print, TV), it was calculated or assumed reach based on sampling and the size of print runs.

It’s important to keep in mind that these metrics have changed over time, and can change and be changed any time. They aren’t a divine hand-down, nor a constant in the world. They are what we, as an industry and society, make them.

Now, for a few years advertisers have been sold on, and have been overall quite happy with, having their ads’ efficiency and effectiveness measured by engagement. This term means how many people didn’t just see their ads, but interacted (“engaged”) with them in one way or another. Typically, this means clicking on them, sharing them on social media or via email, and the like. It’s a strong proxy for attention, which is what advertisers are really after: They want potential customers to notice their messages. It’s hard to argue with that; it’s their job to make sure people notice their ads.
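To make the metric shift concrete, a quick back-of-the-envelope sketch with invented numbers:

```python
# Invented numbers, purely to illustrate the shift described above:
# from impressions (was the ad served?) to engagement (did anyone interact?).

impressions = 100_000      # times the ad was served
clicks = 1_200             # interactions: clicks...
shares = 150               # ...and shares, e.g. on social media or via email

engaged = clicks + shares
engagement_rate = engaged / impressions

print(f"Reach, impression-based (old metric): {impressions}")
print(f"Engagement rate (newer metric): {engagement_rate:.2%}")  # -> 1.35%
```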

That said, the focus on engagement was driven forcefully by the platforms that profit from selling online ads as a means to differentiate themselves from print and TV media, as well as the online offerings of traditionally print/TV based media. “Look here, we can give you much more concrete numbers to measure how well your ads work”, they said. And they were, by and large, not wrong.

But.

The business model based on engagement turned out to be horrible. Damaging. Destructive.

This focus on engagement means that all incentives of the business are to get people to pay more attention to advertisements, at the expense of everything else. Incentivizing engagement means that the more you can learn about a user, by any means, the better positioned you are to get them to pay attention to your ads.

This is how we ended up with a Web that spies on us, no matter where we go. How we ended up with websites that read us more than we read them. With clickbait, “super cookies”, and fake news. Every one of these techniques is a means to drive up engagement. But at what cost?

I truly believe you can’t discuss fake news, the erosion of democracy, online harassment, and populism without discussing online surveillance (aka “ad-tech”, or “surveillance capitalism”) first.

Business models, and the behaviors they incentivize, matter. Facebook and many other online advertisement platforms picked horrible incentives, and we all have been paying the price for it. It’s killing the Web. It’s eroding our privacy, the exchange of ideas, and democracy. Because where our communications channels spy on us, and the worst and most troll-ish (“most engaging”) content floats to the top because of ill-advised and badly checked algorithmic decision-making, we can’t have discussions anymore in public, or even in the spaces and channels that appear to be private.

It doesn’t have to be that way. We can choose our own business models, and hence incentives.

For example, over at ThingsCon we were always wary of relying too much on sponsorship, because it adds another stakeholder (or client) you need to accommodate beyond participants and speakers. We mostly finance all ThingsCon events through ticket sales (even if “financing” is a big word; everything is mostly done by our own volunteer work). Our research is either done entirely in-house out of interest or occasionally as a kind of “researcher-for-hire” commission. We subsidize ThingsCon a lot through our other work. Does that mean we lose some quick cash? Absolutely. Do we regret it? Not in the very least. It allows a certain clarity of mission that wouldn’t otherwise be possible. But I admit it’s a trade-off.

(A note for the event organizers out there: Most of the sponsors we ended up taking on were more than happy to go with food sponsoring, a ticket package, or subsidizing tickets for underrepresented groups—all entirely compatible with participants’ needs.)

If we want to build sustainable businesses—businesses that will sustain themselves and not poison their ecosystem—we need to pick our business models and incentives wisely.

The key challenge for the industry in the next 5 years is consumer trust


Note: Every quarter or so I write our client newsletter. This time it touched on some aspects I figured might be useful to this larger audience, too, so I trust you’ll forgive me cross-posting this bit from the most recent newsletter.

Some questions I’ve been pondering and that we’ve been exploring in conversations with our peer group day in, day out.

This isn’t an exhaustive list, of course, but it gives you a hint about my headspace — experience shows that this can serve as a solid early warning system for industry-wide debates, too. Questions we’ve had on our collective minds:

1. What’s the relationship between (digital) technology and ethics/sustainability? There’s a major shift happening here, among consumers and industry, but I’m not yet 100% sure where we’ll end up. That’s a good thing, and makes for interesting questions. Excellent!

2. The Internet of Things (IoT) has one key challenge in the coming years: Consumer trust. Between all the insecurities and data leaks and bricked devices and “sunsetted” services and horror stories about hacked toys and routers and cameras and vibrators and what have you, I’m 100% convinced that consumer trust — and products’ trustworthiness — is the key to success for the next 5 years of IoT. (We’ve been doing lots of work in that space, and hope to continue to work on this in 2018.)

3. Artificial Intelligence (AI): What’s the killer application? Maybe more importantly, which niche applications are most interesting? It seems safe to assume that as deploying machine learning gets easier and cheaper every day we’ll see AI-like techniques thrown at every imaginable niche. Remember when everyone and their uncle had to have an app? It’s going to be like that but with AI. This is going to be interesting, and no doubt it’ll produce spectacular successes as well as fascinating failures.

4. What funding models can we build the web on, now that surveillance tech (aka “ad tech”) has officially crossed over to the dark side and is increasingly perceived as no-go?

These are all interesting, deep topics to dig into. They’re all closely interrelated, too, and have implications for business, strategy, research, and policy. We’ll continue to dig in.

But also, besides these larger, more complex questions there are smaller, more concrete things to explore:

  • What are new emerging technologies? Where are exciting new opportunities?
  • What will happen due to more ubiquitous autonomous vehicles, solar power, cryptocurrencies? What about LIDAR and Li-Fi?
  • How will the industry adapt to the European GDPR? Who will be the first players to turn data protection and scarcity into a strength, and score major wins? I’m convinced that going forward, consumer and data protection offer tremendous business opportunities.

If these themes resonate, or if you’re asking yourself “how can we get ahead in 2018 without compromising user rights”, let’s chat.

Want to work together? I’m starting the planning for 2018. If you’d like to work with me in the upcoming months, please get in touch.

PS: I write another newsletter, too, in which I share regular project updates, thoughts on the most interesting articles I come across, and where I explore areas around tech, society, culture & business that I find relevant. To watch my thinking unfolding and maturing, this is for you. You can subscribe here.