What type of smart city do we want to live in?

Warning: trick question! The right questions are, of course: What type of city do we want to live in? Which parts of our cities do we want to be smart, and in what ways?

That said, this is the topic of my talk for NEXT Conference 2019, in which I explore some basic principles for making sure that if we add so-called smart city technology to our public spaces, we'll end up with desirable results.

Connectivity changes the nature of things. It quite literally changes what a thing is.

Add connectivity to, say, the proverbial internet fridge, and it stops being just an appliance that chills food. It becomes a device that senses; that captures, processes and shares information; that acts on this processed information. The thing-formerly-known-as-fridge becomes an extension of the network. It makes the boundaries of the apartment more permeable.

So connectivity changes the fridge. It adds features and capabilities. It adds vulnerabilities. At the same time, it also adds a whole new layer of politics to the fridge.

Power dynamics

Why do I keep rambling on about fridges? Because once we add connectivity — or rather: data-driven decision making of any kind — we need to consider power dynamics.

If you’ve seen me speak at any time throughout the last year, chances are you’ve encountered this slide that I use to illustrate this point:

The connected home and the smart city are two areas where the changing power dynamics of IoT (in the larger sense) and data-driven decision making manifest most clearly: the connected home, because it challenges the notions of privacy that have developed over the last 150 years in the global West; and the smart city, because there is no opting out of public space. Any sensor, any algorithm involved in governing public space impacts all citizens.

That’s what connects the fridge (or home) and the city: Both change fundamentally by adding a data layer. Both acquire a new kind of agenda.

3 potential cities of 2030

So as a thought experiment, let’s project three potential cities in the year 2030 — just over a decade from now. Which of these would you like to live in, which would you like to prevent?

In CITY A, a pedestrian crossing a red light is captured by facial recognition cameras and publicly shamed. Their CitizenRank is downgraded to IRRESPONSIBLE, their health insurance price goes up, they lose the permission to travel abroad.

In CITY B, wait times at the subway lines are enormous. Luckily, your Amazon Prime membership has expanded to cover priority access to this formerly public infrastructure, and now includes dedicated quick-access lines to the subway. With Amazon Prime, you are guaranteed Same Minute Access.

In CITY C, most government services are coordinated through a centralized government database that identifies all citizens by their fingerprints. This isn't restricted to digital government services, but also covers credit card applications or buying a SIM card. However, the official fingerprint scanners often fail to scan manual laborers' fingerprints correctly. The backup system (iris scans) doesn't work well on those with eye conditions like cataracts. Whenever these ID scans fail, the government service requests are denied.

Now, as you may have recognized, this is of course a trick question. (Apologies.) Two of these cities more or less exist today:

  • CITY A represents the Chinese smart city model based on surveillance and control, as piloted in Shenzhen or Beijing.
  • CITY C is based on India’s centralized government identification database, Aadhaar.
  • Only CITY B is truly, fully fictional (for now).

What model of Smart City to optimize for?

We need to decide what characteristics of a Smart City we’d like to optimize for. Do we want to optimize for efficiency, resource control, and data-driven management? Or do we want to optimize for participation & opportunity, digital citizens rights, equality and sustainability?

There are no right or wrong answers (even though I'd clearly prefer a focus on the second set of characteristics), but it's a decision we should make deliberately. One leads to favoring monolithic, centralized control structures, black-box algorithms, and top-down governance. The other leads to decentralized and participatory structures, openness and transparency, with more bottom-up governance built in.

Whichever we build, these are the kinds of dependencies we should keep in mind. I’d rather have an intense, participatory deliberation process that involves all stakeholders than just quickly throwing a bunch of Smart City tech into the urban fabric.

After all, this isn't just about technology choices: it's the question of what kind of society we want to live in.

The 3 I’s: Incentives, Interests, Implications

When discussing how to make sure that tech works to enrich society — rather than extract value from many for the benefit of a few — we often see a focus on incentives. I argue that that’s not enough: We need to consider and align incentives, interests, and implications.

Incentives

Incentives are, of course, mostly thought of as economic motivators for companies: maximize profit by lowering costs, offsetting or externalizing them, or charging more (more per unit, more per customer, or simply charging more customers). Sometimes incentives can be non-economic, too, as in the case of positive PR. For individuals, incentives are conventionally thought of in the context of consumers trying to get their products as cheaply as possible.

All this, of course, is based on what economics calls rational choice theory, a framework for understanding social and economic behavior: “The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.” (Wikipedia) Rational choice theory isn't complete, though, and might simply be wrong; we know, for example, that all kinds of cognitive biases are also at play in decision making. Biases apply to individuals, of course, but organizations inherently have their own blind spots and biases, too.

So this focus on incentives, while near-ubiquitous, is myopic: incentives certainly play a role in decision making, but they are not the only factor at play. Companies don't work only towards maximizing profits (I know my own doesn't, and I daresay many take other interests into account, too). Nor do consumers only optimize their behavior towards saving money (at the expense, say, of secure connected products). So we shouldn't over-index on getting the incentives right; we should take other aspects into account, too.

Interests

When designing frameworks that aim at a better interplay of technology, society and individual, we should look beyond incentives. Interests, however vaguely we might define those, can clearly impact decision making. For example, if a company (large or small, doesn’t matter) wants to innovate in a certain area, they might willingly forgo large profits and instead invest in R&D or multi-stakeholder dialog. This could help them in their long term prospects through either new, better products (linking back to economic incentives) or by building more resilient relationships with their stakeholders (and hence reducing potential friction with external stakeholders).

Other organizations might simply be mission driven and focus on impact rather than profit, or at least balance both differently. Becoming a B-Corp for example has positive economic side effects (higher chance of retaining talent, positive PR) but more than that it allows the org to align its own interests with those of key stakeholder groups, namely not just investors but also customers and staff.

Consumers, equally, don't necessarily prioritize price over other characteristics: organic and Fairtrade food, or connected products with quality seals (like our own Trustable Technology Mark), might cost more but offer benefits that others don't. Interests, rational or not, influence behavior.

And, just as an aside, there are plenty of cases where “irrationally” responsible behavior by an organization (like investing more than legally required in data protection, or protecting privacy better than industry best practice) can offer a real advantage in the market if the regulatory framework changes. I know at least one machine learning startup that had a party when GDPR came into effect: all of a sudden, their extraordinary focus on privacy meant they were ahead of the pack while the rest of the industry was in catch-up mode.

Implications

Finally, we should consider the implications of the products coming onto the market as well as the regulatory framework they live under. What might this thing/product/policy/program do to all the stakeholders — not just the customers who pay for the product? How might it impact a vulnerable group? How will it pay dividends in the future, and for whom?

It is especially this last part that I’m interested in: The dividends something will pay in the future. Zooming in even more, the dividends that infrastructure thinking will pay in the future.

Consider Ramez Naam's analysis of decarbonization: he makes a strong point that early solar energy subsidies (first in Germany, then China and the US) helped drive development of this new technology, which in turn drove the price down and so started a virtuous circle of lower prices, more uptake, more innovation, and lower prices again.

We all know what happened next (again from Naam):

“Electricity from solar power, meanwhile, drops in cost by 25-30% for every doubling in scale. Battery costs drop around 20-30% per doubling of scale. Wind power costs drop by 15-20% for every doubling. Scale leads to learning, and learning leads to lower costs. … By scaling the clean energy industries, Germany lowered the price of solar and wind for everyone, worldwide, forever.”

Now, solar energy is not just competitive. In some parts of the world it is the cheapest, period.
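As a back-of-the-envelope illustration, the learning-curve arithmetic in the quote above can be sketched in a few lines of Python. The learning rates mirror the quoted ranges; the starting cost of 100 is a hypothetical round number in arbitrary units, not real data:

```python
def cost_after_doublings(initial_cost: float, learning_rate: float,
                         doublings: int) -> float:
    """Cost per unit after repeated doublings of cumulative scale.

    Each doubling cuts the cost by a fixed fraction (the learning
    rate), so cost decays geometrically with the number of doublings.
    """
    return initial_cost * (1 - learning_rate) ** doublings


# Solar: roughly a 25% cost drop per doubling of scale.
# Four doublings (16x the cumulative scale) from a hypothetical
# starting cost of 100 bring the per-unit cost down to about 31.64:
solar_cost = cost_after_doublings(100.0, 0.25, 4)
print(round(solar_cost, 2))  # 31.64
```

This is why scale matters so much: the cost reduction compounds, so subsidizing early deployment buys down the price for every later buyer as well.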

This type of investment in what is essentially infrastructure — or at least infrastructure-like! — pays dividends not just to the directly subsidized but to the whole larger ecosystem. This means significantly, disproportionately bigger impact. It creates and adds value rather than extracting it.

We need more infrastructure thinking, even for areas that, like solar energy and the tech we need to harvest it, are not technically infrastructure. It takes a bit of creative thinking, but it's not rocket science.

We just need to consider and align the 3 I’s: incentives, interests, and implications.

Monthnotes for November 2018

This month: Trustable Technology Mark, ThingsCon Rotterdam, a progressive European digital agenda.

If you’d like to work with me in the upcoming months, I have very limited availability but am always happy to have a chat. I’m currently doing the planning for Q2 2019.

Trustable Technology Mark

ThingsCon's trustmark for IoT, the Trustable Technology Mark, now has a website. We'll be soft-launching it with a small invite-only group of launch partners next week at ThingsCon Rotterdam. Over on trustabletech.org I wrote up some pre-launch notes on where we stand. Can't wait!

ThingsCon Rotterdam

ThingsCon is turning 5! This thought still blows my mind. We’ll be celebrating at ThingsCon Rotterdam (also with a new website) where we’ll also be launching the Trustmark (as mentioned above). This week is for tying up all the loose ends so that we can then open applications to the public.

A Progressive European Digital Agenda

Last month I mentioned that I was humbled (and delighted!) to be part of a Digital Rights Cities Coalition at the invitation of fellow Mozilla Fellow Meghan McDermott (see her Mozilla Fellows profile here). This is one of several threads where I’m trying to extend the thinking and principles behind the Trustable Technology Mark beyond the consumer space, notably into policy—with a focus on smart city policy.

Besides the Digital Rights Cities Coalition and some upcoming work in NYC around similar issues, I was kindly invited by the Foundation for Progressive European Studies (FEPS) to help outline the scope of a progressive European digital agenda. I was more than a little happy to see that this conversation will continue moving forward, and hope I can contribute some value to it. Personally I see smart cities as a focal point of many threads of emerging tech, policy, and the way we define democratic participation in the urban space.

What’s next?

Trips to Rotterdam (ThingsCon & Trustmark), NYC (smart cities), Oslo (smart cities & digital agenda).

Yours truly, P.

Trust Indicators for Emerging Technologies

For the Trustable Technology Mark, we identified 5 dimensions that indicate trustworthiness. Let’s call them trust indicators:

  • Privacy & Data Practices: Does it respect users’ privacy and protect their data rights?
  • Transparency: Is it clear to users what the device and the underlying services do and are capable of doing?
  • Security: Is the device secure and safe to use? Are there safeguards against data leaks and the like?
  • Stability: How long a life cycle can users expect from the device, and how robust are the underlying services? Will it continue to work if the company gets acquired, goes belly-up, or stops maintenance?
  • Openness: Is it built on open source or around open data, and/or contributes to open source or open data? (Note: We treat Openness not as a requirement for consumer IoT but as an enabler of trustworthiness.)

Now these 5 trust indicators—and the questions we use in the Trustable Technology Mark to assess them—are designed for the context of consumer products. Think smart home devices, fitness trackers, connected speakers or light bulbs. They work pretty well for that context.
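To make the consumer-product framing concrete, here is a deliberately simplified sketch in Python. The `TrustAssessment` class, the numeric scores, and the `meets_bar` threshold are all hypothetical illustrations of mine; the actual Trustable Technology Mark works through detailed assessment questionnaires, not numeric scores:

```python
from dataclasses import dataclass


@dataclass
class TrustAssessment:
    """Illustrative container for the five trust indicators.

    Scores run from 0.0 (poor) to 1.0 (excellent). This is a
    hypothetical sketch, not the real Mark's assessment mechanism.
    """
    privacy: float        # Privacy & Data Practices
    transparency: float   # Transparency
    security: float       # Security
    stability: float      # Stability
    openness: float       # Openness (treated as an enabler)

    def meets_bar(self, minimum: float = 0.5) -> bool:
        # Openness is excluded from the pass/fail check here,
        # mirroring its role as an enabler of trustworthiness
        # rather than a hard requirement for consumer IoT.
        required = (self.privacy, self.transparency,
                    self.security, self.stability)
        return all(score >= minimum for score in required)


device = TrustAssessment(privacy=0.8, transparency=0.7,
                         security=0.9, stability=0.6, openness=0.2)
print(device.meets_bar())  # True: low openness alone doesn't fail it
```

The design choice worth noting is the special treatment of Openness: four dimensions are requirements, while the fifth strengthens trustworthiness without being able to sink an otherwise solid product.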

Over the last few months, it has become clear that there's demand for similar trust indicators beyond consumer products: in smart cities, artificial intelligence, and other areas of emerging technology.

I’ve been invited to a number of workshops and meetings exploring those areas, often in the context of policy making. So I want to share some early thoughts on how we might be able to translate these trust indicators from a consumer product context to these other areas. Please note that the devil is in the detail: This is early stage thinking, and the real work begins at the stage where the assessment questions and mechanisms are defined.

The main difference between the consumer context and publicly deployed technology (infrastructure!) is that we need to focus even more strongly on safeguards, inclusion, and resilience. If consumer goods stop working, there can be real damage, like lost income and the like, but in the bigger picture, failing consumer goods are mostly a quality-of-life issue; and in consumer IoT, mostly for the affluent. (Meaning that if we're talking about failure to operate rather than data leaks, the damage has a high likelihood of being relatively harmless.)

For publicly deployed infrastructure, we are looking at a very different picture, with vastly different threat models and potential damage. Infrastructure that not everybody can rely on, equally and all the time, would not just be annoying; its failure could be critical.

After dozens of conversations with people in this space, and based on the research I’ve been doing both for the Trustable Technology Mark and my other work with both ThingsCon and The Waving Cat, here’s a snapshot of my current thinking. This is explicitly intended to start a debate that can inform policy decisions for a wide range of areas where emerging technologies might play a role:

  • Privacy & Data Practices: Privacy and good data protection practices are as essential in public space as in the consumer space, even though the implications and tradeoffs might be different ones.
  • Transparency & Accountability: Transparency is maybe even more relevant in this context, and I propose adding Accountability as an equally important aspect. This holds especially true where commercial enterprises install and possibly maintain large scale networked public infrastructure, like in the context of smart cities.
  • Security: Just as important, if not more so.
  • Resilience: Especially for smart cities (but I imagine the same holds true for other areas), we should optimize for Resilience. Smart city systems need to work, even if parts fail. Decentralization, openness, interoperability and participatory processes are all strategies that can increase Resilience.
  • Openness: Unlike in the consumer space, I consider openness (open source, open data, open access) essential in networked public infrastructure—especially smart city technology. This is also a foundational building block for civic tech initiatives to be effective.

There are inherent conflicts and tradeoffs between these trust indicators. But if we take them as guiding principles to discuss concrete issues in their real contexts, I believe they can be a solid starting point.

I’ll keep thinking about this, and might adjust this over time. In the meantime, I’m keen to hear what you think. If you have thoughts to share, drop me a line or hit me up on Twitter.

What’s long-term success? Outsized positive impact.

For us, success is outsized positive impact—which is why I’m happy to see our work becoming part of Brazil’s National IoT Plan.

Recently, I was asked what long-term success looked like for me. Here’s the reply I gave:

To have outsized positive impact on society by getting large organizations (companies, governments) to ask the right questions early on in their decision-making processes.

As you know, my company consists of only one person: myself. That's both the boon and the bane of my work. On one hand it means I can contribute expertise surgically in larger contexts; on the other, it means limited impact when working by myself.

So I tend (and actively aim) to work in collaborations; they allow me to build alliances for greater impact. One of those turned into ThingsCon, the global community of IoT practitioners fighting for a more responsible IoT. Another, between my company, ThingsCon and Mozilla, led to research into the potential of a consumer trustmark for the Internet of Things (IoT).

I’m very, very happy (and to be honest, a little bit proud, too) that this report just got referenced fairly extensively in Brazil’s National IoT Plan, concretely in Action Plan / Document 8B (PDF). (Here’s the post on Thingscon.com.)

To see your work and research (and hence, to a degree, agenda) inform national policy is always exciting.

This is exactly the kind of impact I’m constantly looking for.