CategoryIoT

New Report: “Smart Cities: A Key to a Progressive Europe”


I’m happy to share that a report is out today that I had the honor and pleasure to co-author. It’s published jointly by the Foundation for European Progressive Studies (FEPS) and the Cooperation Committee of the Nordic Labour Movement (SAMAK).

The report is called “A Progressive Approach to Digital Tech — Taking Charge of Europe’s Digital Future.”

In FEPS’s words:

This report tries to answer the question how progressives should look at digital technology, at a time when it permeates every aspect of our lives, societies and democracies. (…)
The main message: Europe can achieve a digital transition that is both just and sustainable, but this requires a positive vision and collective action.

At its heart, it’s an attempt to outline a progressive digital agenda for Europe. Not a defensive one, but one that outlines a constructive, desirable approach.

My focus was on smart cities and what a progressive smart city policy could look like. My contribution specifically comes in the form of a stand-alone attachment titled:

“Smart Cities: A Key to a Progressive Europe”

I’d love to hear what you think. For now, enjoy the report!

Trustmarks, trustmarks, trustmarks


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:08.

A couple of years ago, with ThingsCon and support from Mozilla, we launched a trustmark for IoT: The Trustable Technology Mark.

While launching and growing the Trustable Technology Mark hasn’t been easy and we’re currently reviewing our setup, we learned a lot during the research and implementation phase. So occasionally, others will ping us for some input on their own research journey. And since we learned what we learned, to a large degree, from others who generously shared their insights and time with us while we did our own initial research (Alex, Laura, JP: You’re all my heroes!), we’re happy to share what we’ve learned, too. After all, we all want the same thing: Technology that’s responsibly made and respects our rights.

So I’m delighted to see that one of those inputs we had the opportunity to give led to an excellent report on trustmarks for digital technology published by NGI Forward: Digital Trustmarks (PDF).

It’s summarized well on Nesta’s website, too: A trustmark for the internet?

The report gives a comprehensive look at why a trustmark for digital technology is very much needed, where the challenges and opportunities lie, and it offers pathways worth exploring.

Special thanks to author Hessy Elliott for the generous acknowledgements, too.

Cost-benefit analysis, Data-Driven Infrastructure edition


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:04.

It’s a common approach to make (business, policy…) decisions by performing a cost-benefit analysis of some sort. Sometimes this is done via a rigorous process, sometimes it’s ballparked — and depending on the context, that’s OK.

One thing is pretty constant: In a cost-benefit analysis you traditionally work on the basis of reasonably expected costs and reasonably expected benefits. If the benefits outweigh the costs, green light.

Now, I’d argue that for data-driven infrastructure(-ish) projects, we need to set a higher bar.

By data-driven infrastructure I mean infrastructure(ish) things like digital platforms, smart city projects, etc. that collect data, process data, feed into or serve as AI or algorithmic decision-making (ADM) systems, etc. This may increasingly include what traditionally falls under the umbrella of critical infrastructure, but it extends well beyond that.

For this type of data-driven infrastructure (DDI), we need a different balance. Or, maybe even better, we need a more thorough understanding of what can be reasonably expected.

I argue that for DDI, guaranteed improvement must outweigh the worst case scenario risks.

If the last decade has shown us anything, it’s that data-driven infrastructure will be abused to its full potential.

From criminals to commercial and governmental actors, legitimate and rogue alike: if there is valuable data, there will be strong interest in this honey pot. Hence, we need to assume that at least some of those actors will get access to it. So whatever could happen when they do — which would differ dramatically depending on which type, or combination of types, of actor gains access, obviously — is what we have to factor in. The same goes for the opportunity cost, the expertise drain, and the newly introduced dependencies that come with vendor lock-in.

All of this — that level of failure — should be the new “reasonable” expectation on the cost side.

But in order to make semantic capture of the term “reasonable” a little bit harder, I’m proposing to be very explicit about what we mean by this:

So instead of “Let’s compare what happens if things go kinda-sorta OK on the benefit side and only go kinda-sorta wrong on the cost side”, let’s say “the absolutely guaranteed improvements on the benefit side must significantly outweigh the worst case failure modes on the costs side.”

For DDI, let’s work with aggressive-pessimistic scenarios for the costs/risk side, and conservative scenarios for the benefit side. The more critical the infrastructure, the more thorough we need to be.
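To make that concrete, here is a minimal sketch in Python of the difference between the classic comparison and the stricter DDI bar. The numbers and names (DdiScenario, ddi_cba, the 1.5 margin) are entirely hypothetical, just to illustrate the idea: conservative estimates on the benefit side must significantly outweigh aggressive-pessimistic estimates on the cost/risk side before anything gets a green light.

```python
# Illustrative sketch only (not a real methodology): compare a conservative
# estimate of benefits against worst-case costs/risks for a data-driven
# infrastructure (DDI) project. All numbers are made up.

from dataclasses import dataclass


@dataclass
class DdiScenario:
    guaranteed_benefit: float   # benefits we are confident will materialize
    expected_benefit: float     # the usual "reasonably expected" upside
    expected_cost: float        # the usual "reasonably expected" cost
    worst_case_cost: float      # full-abuse scenario: breaches, lock-in, opportunity cost


def classic_cba(s: DdiScenario) -> bool:
    """Traditional check: expected benefits vs. expected costs."""
    return s.expected_benefit > s.expected_cost


def ddi_cba(s: DdiScenario, margin: float = 1.5) -> bool:
    """Stricter DDI check: guaranteed benefits must significantly outweigh
    the worst-case failure modes (here: by a configurable margin)."""
    return s.guaranteed_benefit > margin * s.worst_case_cost


project = DdiScenario(
    guaranteed_benefit=10.0,
    expected_benefit=40.0,
    expected_cost=25.0,
    worst_case_cost=60.0,
)

print(classic_cba(project))  # True  -> would get a green light classically
print(ddi_cba(project))      # False -> fails the stricter DDI bar
```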

That should make for a much more interesting debate, and certainly for more insightful scenario planning.

Smart Cities & Human Rights


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:02.

If you’ve followed this blog or my work at all, you’ll know that I’ve been doing a fair bit of work around Smart Cities and what they mean from a citizens’ and rights perspective, and how you can analyze Smart City projects through a lens of responsible technology (which of course has also been the main mission of our non-profit org, ThingsCon).

For years, I’ve argued that we shouldn’t look at how we can and should connect public space through tech goggles, but rather through a rights-based perspective. It’s not about what we can do but what we should do, after all.

But while I’m convinced that’s the right approach, it’s been non-trivial to figure out what to base the argument on: What’s the most appropriate foundation to build a “Smart City Rights” perspective on?

A recent conversation led me to sketch out this rough outline which I believe points in the right direction:

Image: A sketch for a basis for a Smart City and Human Rights analytical framework

There are adjacent initiatives and frameworks that can complement and flank anything based on these three (like the Vision for a Shared Digital Europe and its commons-focused approach), and of course this also goes well with the EU’s Horizon 2020 City Mission for Climate-neutral and smart cities. So this is something I’m confident can be fleshed out into something solid.

Are there any other key documents I’m missing that absolutely should be incorporated here?

Just enough City


In this piece, I’m advocating for a Smart City model based on restraint, and focused first and foremost on citizen needs and rights.

A little while ago, the ever-brilliant and eloquent Rachel Coldicutt wrote a piece on the role of public service internet, and why it should be a model of restraint. It’s titled Just enough Internet, and it resonated deeply with me. It was her article that inspired not just this piece’s title but also its theme: Thank you, Rachel!

Rachel argues that public service internet (broadcasters, government services) shouldn’t compete with commercial competitors by commercial metrics, but rather use approaches better suited to their mandate: Not engagement and more data, but providing the important basics while collecting as little as possible. (This summary doesn’t do Rachel’s text justice; she makes more, and more nuanced, points there. So please read her piece, it’s time well spent.)

I’ll argue that Smart Cities, too, should use an approach better suited to their mandate—an approach based on (data) restraint, and on citizens’ needs & rights.

This restraint and reframing are important because they prevent mission creep; they also reduce the carbon footprint of all those services.

Enter the Smart City

Wherever we look on the globe, we see so-called Smart City projects popping up. Some are incremental, and add just some sensors. Others are blank slate, building whole connected cities or neighborhoods from scratch. What they have in common is that they are mostly built around a logic of data-driven management and optimization. You can’t manage what you can’t measure, management consultant Peter Drucker famously said, and so Smart Cities tend to measure… everything. Or at least they try.

Of course, sensors only measure so many things, like physical movement (of people, or goods, or vehicles) through space, or the consumption and creation of energy. But thriving urban life is made up of many more things, and many of those cannot be measured as easily: Try measuring opportunity or intention or quality of life, and most Smart City management dashboards will throw an error: File not found.

The narrative of the Smart City is fundamentally that of optimizing a machine to run as efficiently as possible. It’s neoliberal market thinking in its purest form. (Greenfield and Townsend and Morozov and many other Smart City critics have made those points much more eloquently before.) But that doesn’t reflect urban life. The human side of it is missing, a glaring hole right in the center of that particular vision.

Instead of putting citizens in that spot in the center, the “traditional” Smart City model aims to build better (meaning: more efficient, lower cost) services for citizens by collecting, collating, and analyzing data. It’s the logic of global supply chains and predictive maintenance and telecommunications networks and data analytics applied to the public space. (It’s no coincidence that the large tech vendors in that space come from one of those backgrounds.)

The city, however, is no machine to be run at maximum efficiency. It’s a messy agora, with competing and often conflicting interests, and it needs slack in the system: Slack and friction both increase resilience in the face of larger challenges, as do empowered citizens and municipal administrations. The last thing any city needs is to be fully algorithmically managed at maximum efficiency, only to come to a grinding halt when — not if! — the first technical glitch happens, or a vendor ceases operations.

Most importantly, I’m convinced that depending on context, collecting data in public space can be a fundamental risk to a free society—and that it’s made even worse if the data collection regime is outside of the public’s control.

The option of anonymity plays a crucial role for the freedom of assembly, of organizing, of expressing thoughts and political speech. If sensitive data is collected in public space (even if it’s not necessarily personally identifiable information!), then the trust in the collecting entity needs to be absolute. But as we know from political science, the good king is just another straw man: given the circumstances, even the best government can turn bad quickly. History has taught us the crucial importance of checks & balances, and of data avoidance.

We need a Smart City model of restraint

Discussing publicly owned media, Rachel argues:

It’s time to renegotiate the tacit agreement between the people, the market and the state to take account of the ways that data and technology have nudged behaviours and norms and changed the underlying infrastructure of everyday life.

This holds true for the (Smart) City, too: The tacit agreement between the people, the market and the state is that, roughly stated, the government provides essential services to its citizens, often with the help of the market, and with the citizens’ interest at the core. However, when we see technology companies lobby governments to green-light data-collecting pilot projects with little accountability in public space, that tacit agreement is violated. Not the citizens’ interests but those multinationals’ business models move into the center of these considerations.

There is no opt-out in public space. So where obtaining meaningful consent to the collection of data about citizens is hard or impossible, that data must not be collected, period. Surveillance in public space is more often detrimental to free societies than not. You know this! We all know this!

Less data collected, and more options of anonymity in public space, make for a more resilient public sphere. And what data is collected should be made available to the public at little or no cost, and to commercial interests only within a framework of ethical use (and probably for a fee).

What are better metrics for living in a (Smart) City?

In order to get to better Smart Cities, we need to think about better, more complete metrics than efficiency & cost savings, and we need to determine those (and all other big decisions about public space) through a strong commitment to participation: From external experts to citizens’ panels to digital participation platforms, there are many tools at our disposal to make better, more democratically legitimized decisions.

In that sense I cannot offer a final set of metrics to use. However, I can offer some potential starting points for a debate. I believe that every Smart City project should be evaluated against the following aspects (see the rough sketch after this list):

  • Would this substantially improve sustainability as laid out in the UN’s Sustainable Development Goals (SDG) framework?
  • Is meaningful participation built into the process at every step from framing to early feedback to planning to governance? Are the implications clear, and laid out in an accessible, non-jargony way?
  • Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
  • Will it solve a real issue and improve the life of citizens? If in doubt, cut it out.
  • Will participation, accountability, resilience, trust and security (P.A.R.T.S.) all improve through this project?
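Purely as an illustration, and assuming one would even want to formalize such a checklist, here is a minimal Python sketch of how these starting points could be captured and checked. The criterion wording and the all-or-nothing pass/fail logic are my own simplifications, not an established methodology.

```python
# Illustrative sketch only: a formalized version of the starting-point
# checklist above. Criteria wording and scoring are simplified assumptions.

CRITERIA = [
    "Substantially improves sustainability (UN SDG framework)",
    "Meaningful participation at every step, explained without jargon",
    "Safeguards against ending up worse than before if something fails",
    "Solves a real issue and improves citizens' lives",
    "Improves participation, accountability, resilience, trust, security (P.A.R.T.S.)",
]


def evaluate_project(answers: dict[str, bool]) -> bool:
    """A project only passes if every criterion is met; if in doubt, cut it out."""
    missing = [c for c in CRITERIA if not answers.get(c, False)]
    for criterion in missing:
        print(f"Not met: {criterion}")
    return not missing


# Example: a hypothetical project that fails the participation criterion.
example = {c: True for c in CRITERIA}
example[CRITERIA[1]] = False
print("Green light" if evaluate_project(example) else "Back to the drawing board")
```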

Obviously those can only be starting points.

The point I’m making is this: In the Smart City, less is more.

City administrations should optimize for thriving urban life and democracy; for citizens and digital rights — which also happen to be human rights; for resilience and opportunity rather than efficiency. That way we can create a canvas to be painted by citizens, administration and — yes! — the market, too.

We can only manage what we can measure? Not necessarily. Neither the population nor the urban organism needs to be managed; they just need a robust framework to thrive within. We don’t always need real-time data for every decision — we can also make good decisions based on values and trust in democratic processes, and by giving a voice to all impacted communities. We have a vast body of knowledge from decades of research in urban planning, sociology, and many other areas: Often enough we already know the best decisions, and it’s only politics that keeps us from enacting them.

We can change that, and build the best public space we know to build. Our cities will be better off for it.

About the author

Just for completeness’ sake so you can see where I’m coming from, I’m basing this on years of working at least occasionally on Smart City projects. My thinking is informed by work around emerging tech and its impact on society, and a strong focus on responsible technology that puts people first. Among other things I’ve co-founded ThingsCon, a non-profit community that promotes responsible tech, and led the development of the Trustable Technology Mark. I was a Mozilla Fellow in 2018-19 and am an Edgeryders Fellow in 2019-20. You can find my bio here.

Data about me in my city


This article is a few months old (and in German), but it contains two points of view that I’ll just offer side by side, as they pretty much sum up the state of play in smart cities these days.

For context, this is about a smart city partnership in which Huawei implements its technologies in Duisburg, a mid-sized German city with a population of about 0.5 million. The (apparently non-binding) partnership agreement includes Smart Government (administration), Smart Port Logistics, Smart Education (education & schools), Smart Infrastructure, 5G and broadband, Smart Home, and the urban internet of things.

Note: The quotes and paraphrases are roughly translated from the original German article.

Jan Weidenfeld from the Mercator Institute for China Studies:

“As a city administration, I’d be extremely cautious here.” China has a fundamentally different societal system, and a legal framework that means that every Chinese company, including Huawei, is required to open data streams to the communist party. (…)

Weidenfeld points out that 5 years ago, when deliberations about the project began, China was a different country than it is today. At both federal and state levels, the thinking about China has evolved. (…)

“Huawei Smart City is a large-scale societal surveillance system, out of which Duisburg buys the parts that are legally fitting – but this context mustn’t be left out when assessing the risks.”

Anja Kopka, media spokesperson for the city of Duisburg:

The city of Duisburg doesn’t see “conclusive evidence” regarding these security concerns. The data center meets all security requirements for Germany, and is certified as such. “Also, as a municipal administration we don’t have the capacity to reliably assess claims of this nature.” Should federal authorities whose competencies include assessing such issues provide clear action guidelines for dealing with Chinese partners in IT, then Duisburg will adapt accordingly.

The translation is a bit rough around the edges, but I think you’ll get the idea.

With infrastructure, when we see the damage it’s already too late

We have experts warning, but the warnings are of such a structural nature that they’re kind of too big and obvious to prove. Predators will kill other animals to eat.

By the time abuse or any real issue can be proven, it’d be inherently too late to do anything about it. We have a small-ish city administration that knows perfectly well that they don’t have the capacity to do their due diligence, so they just take their partners’ word for it.

The third party here, of course, being a global enterprise with an operating base in a country that has a unique political and legal system that in many ways isn’t compatible with any notion of human rights, let alone data rights, that otherwise would be required in the European Union.

The asymmetries in size and speed are vast

And it’s along multiple axes — imbalance of size and speed, and incompatibility of culture — that I think we see the most interesting, and most potentially devastating conflicts:

  • A giant corporation runs circles around a small-to-mid sized city. I think it’s fair to assume that it was only because of Chinese business etiquette that the CFO of one of Huawei’s business units was even flown out to Duisburg to sign the initial memorandum of understanding with Duisburg’s mayor Sören Link. The size and power differential is so ridiculous that it might just as well have been the Head of Sales EMEA or some other mid-level manager that took that meeting. After all, by Chinese standards, a city with a population of half a million wouldn’t even be considered a third-tier city. Talk about an uneven playing field.
  • The vast differences in culture (for lack of a better word, and broadly interpreted: business realities, legal framework, strategic thinking) between a large corporation with global ambitions, backed by a highly centralized authoritarian state, on one side and the day-to-day of a German town on the other are overwhelming. So much so that I don’t think the mayor of Duisburg and his team are even aware of all the implicit assumptions and biases they bring to the table.

And at some level it’s not an easy choice: When someone comes in and offers much-needed resources that you have no other chance of getting, desperation might force you to make some sub-par decisions. But this comes at a price — the question is just how bad that price will be over the long term.

I’m not convinced that any smart city built on this traditional approach is, or even might be, worth implementing; probably not. But of all the players, the one backed by a non-democratic regime with a track record of mass surveillance and human rights violations is surely at the bottom of the list.

It’s all about the process

That’s why whenever I speak about smart cities (which I do quite frequently these days), I focus on laying better foundations: We can’t start from scratch every time we consider a smart city project. We need a framework as a point of reference and as a guideline, one that makes sure to keep us aligned with our values.

Some aspects to take into account here are transparency, accountability, privacy and security (my mnemonic device is TAPS); a foundation based on human and digital rights; and participatory processes from day one.

And just to be very clear: Transparency is not enough. Transparency without accountability is nothing.

Please note that this blog post is based on a previously published item in my newsletter Connection Problem, to which you can sign up here.