Category: digital rights

New Report: “Smart Cities: A Key to a Progressive Europe”

I’m happy to share that a report is out today that I had the honor and pleasure to co-author. It’s published jointly by the Foundation for European Progressive Studies (FEPS) and the Cooperation Committee of the Nordic Labour Movement (SAMAK).

The report is called “A Progressive Approach to Digital Tech — Taking Charge of Europe’s Digital Future.”

In FEPS’s words:

This report tries to answer the question how progressives should look at digital technology, at a time when it permeates every aspect of our lives, societies and democracies. (…)
The main message: Europe can achieve a digital transition that is both just and sustainable, but this requires a positive vision and collective action.

At its heart, it’s an attempt to outline a progressive digital agenda for Europe. Not a defensive one, but one that outlines a constructive, desirable approach.

My focus was on smart cities and what a progressive smart city policy could look like. My contribution specifically comes in the form of a stand-alone attachment titled:

“Smart Cities: A Key to a Progressive Europe”

I’d love to hear what you think. For now, enjoy the report!

Trustmarks, trustmarks, trustmarks

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:08.

A couple of years ago, with ThingsCon and support from Mozilla, we launched a trustmark for IoT: The Trustable Technology Mark.

While launching and growing the Trustable Technology Mark hasn’t been easy and we’re currently reviewing our setup, we learned a lot during the research and implementation phase. So occasionally, others will ping us for some input on their own research journey. And since we learned what we learned, to a large degree, from others who generously shared their insights and time with us while we did our own initial research (Alex, Laura, JP: You’re all my heroes!), we’re happy to share what we’ve learned, too. After all, we all want the same thing: Technology that’s responsibly made and respects our rights.

So I’m delighted to see that one of those inputs we had the opportunity to give led to an excellent report on trustmarks for digital technology published by NGI Forward: Digital Trustmarks (PDF).

It’s summarized well on Nesta’s website, too: A trustmark for the internet?

The report takes a comprehensive look at why a trustmark for digital technology is very much needed and where the challenges and opportunities lie, and it offers pathways worth exploring.

Special thanks to author Hessy Elliott for the generous acknowledgements, too.

App In, Driver Out: Why Digital Concierge Services aren’t Quite the Future

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:07.

Two things, side by side, in friction:

(1) App In, Driver Out

One of the most dominant business models of the last 5-10 years has been what I think of as app in, driver out: All kinds of services re-packaged to accept orders via an app, and deliver the service through a driver.

Uber may be the most prominent but certainly isn’t alone. We’ve seen the same for food delivery, for laundry pick-up and delivery, for really all kinds of things.

It’s essentially a concierge service, and hence a pure luxury offering: offered, in this context, with a digital component, and offered extremely cheaply.

(2) Innovation trickle-down from elites

The Varian Rule (2011) states that “A simple way to forecast the future is to look at what rich people have today; middle-income people will have something equivalent in 10 years, and poor people will have it in an additional decade.” That is, by the way, just a way of rephrasing William Gibson’s famous line “The future is already here — it’s just unevenly distributed”, with which he has long explored the ways that elites get access to innovation before the mainstream does. (Since William Gibson has been quoted saying this since the early 1990s and I like him better than Varian, I’ll stay loyal to his version.)

But there’s definitely something there, to a degree. Elites — financial or technological — have early access to things that aren’t yet available or affordable to the mainstream but might be soon. In fact, it’s not just elites. As Alipasha rightfully points out, innovation manifests not only among elites but at the edges of society more generally: subcultures, special needs, street fashion, you name it.

What shape that mainstreaming might take, if it’s the real thing or some watered-down version, is always hard to predict. Commercial air travel was certainly only affordable to elites first, then later to everybody — in this case, the essential product was the same even though the experience differed along a spectrum of convenience. Personal assistants are available to elites, yet their mainstream versions — digital assistants — are nowhere near the real deal: They’re totally different in nature and deliver a completely different value, if any.

What type of future do we want?

So where does that leave us? It turns out that this type of trickle-down only works for products that can get cheaper through scaled-up production or through automation. This is, surprisingly, exactly what you’d think intuitively. There’s no surprise here at all! So that’s great.

Unless they are fully digitized or automated through things like autonomous delivery vehicles, these on-demand services simply cannot reproduce this level of concierge service at these prices.

We can digitally model the input side: app-based input can be made convenient and powerful enough. But we can’t lower the costs on the output side enough, without massively externalizing costs to the environment: Automating delivery through, say, drones, might theoretically work but at scale would unleash its own special kind of hell into the urban landscape. And unless we want to go the route of exploitation, humans need to get paid a living wage, so there are no savings to be had there, either.

Extra services cost extra money, which is why these app in, driver out services are crumbling all over.

So if personalized, cheap laundry delivery might sound too good to be true, that might be because it is. While I’d enjoy the service, I don’t think I’m willing to pay these externalized costs. This wouldn’t be a future I want.

Cost-benefit analysis, Data-Driven Infrastructure edition

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:04.

A common approach to making (business, policy…) decisions is to perform a cost-benefit analysis of some sort. Sometimes this is done via a rigorous process, sometimes it’s ballparked — and depending on the context, that’s OK.

One thing is pretty constant: In a cost-benefit analysis you traditionally work on the basis of reasonably expected costs and reasonably expected benefits. If the benefits outweigh the costs, green light.

Now, I’d argue that for data-driven infrastructure(-ish) projects, we need to set a higher bar.

By data-driven infrastructure I mean infrastructure(-ish) things like digital platforms, smart city projects, etc. that collect data, process data, feed into or serve as AI or algorithmic decision-making (ADM) systems, and so on. This may increasingly include what’s traditionally covered by the umbrella of critical infrastructure, but it extends well beyond that.

For this type of data-driven infrastructure (DDI), we need a different balance. Or, maybe even better, we need a more thorough understanding of what can be reasonably expected.

I argue that for DDI, guaranteed improvement must outweigh the worst case scenario risks.

If the last decade has shown us anything, it’s that data-driven infrastructure will be abused to its full potential.

From criminals to commercial and governmental actors, legitimate and rogue alike: where there is valuable data, there will be strong interest in that honey pot. Hence, we need to assume that at least some of those actors will get access to it. So whatever could happen when they do — which would obviously differ dramatically depending on which type, or combination of types, of actor it is — is what we have to factor in. On top of that come the opportunity costs, the expertise drain and the newly introduced dependencies that come with vendor lock-in.

All of this — that level of failure — should be the new “reasonable” expectation on the cost side.

But in order to make semantic capture of the term “reasonable” a little bit harder, I’m proposing to be very explicit about what we mean by this:

So instead of “Let’s compare what happens if things go kinda-sorta OK on the benefit side and only go kinda-sorta wrong on the cost side”, let’s say “the absolutely guaranteed improvements on the benefit side must significantly outweigh the worst case failure modes on the cost side.”

For DDI, let’s work with aggressive-pessimistic scenarios for the costs/risk side, and conservative scenarios for the benefit side. The more critical the infrastructure, the more thorough we need to be.
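
As a rough sketch (not a full methodology), that comparison could look something like the following. The function, the safety margin parameter and all the numbers are hypothetical placeholders: only near-guaranteed improvements are counted on the benefit side, and worst-case failure modes are summed on the cost side.

    // A minimal, illustrative sketch of the proposed decision rule.
    // All values and the safetyMargin parameter are hypothetical placeholders.
    function ddiGreenLight(
      guaranteedBenefits: number[], // conservative: only improvements we can essentially guarantee
      worstCaseCosts: number[],     // aggressive-pessimistic: assume full abuse of the data
      safetyMargin = 2.0            // stands in for "significantly outweigh"
    ): boolean {
      const benefit = guaranteedBenefits.reduce((sum, b) => sum + b, 0);
      const risk = worstCaseCosts.reduce((sum, c) => sum + c, 0);
      return benefit >= safetyMargin * risk;
    }

    // Hypothetical example: modest guaranteed savings vs. worst-case breach,
    // vendor lock-in and expertise drain.
    console.log(ddiGreenLight([1.5, 0.5], [3.0, 1.0, 0.8])); // false: no green light

The exact threshold is obviously up for debate; the point is that the comparison pits a conservative benefit scenario against a pessimistic cost scenario, not two kinda-sorta-likely ones against each other.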

That should make for a much more interesting debate, and certainly for more insightful scenario planning.

Nuclear disarmament but for surveillance tech?

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:03.

Surely the name Clearview AI has crossed your radar these last couple of weeks, since the NYTimes published this profile. In a nutshell, Clearview is a small company that has built a facial recognition system seeded with a ton of faces scraped from the big social media sites, one that grows from there through user uploads. Take a photo of a person and see all their matches in photos, on social media sites, and so on.

It’s clearly a surveillance machine, and just as clearly doesn’t bother with things like consent.

Maybe unsurprisingly, the main customer base is law enforcement, even though Clearview is entirely proprietary (i.e. no oversight of how it works), unstudied (there’s been no meaningful research to examine essential things like how many false positives the service generates) and quite possibly illegal (no consent from most people in that database).

The thing is: It’s now pretty simple to build something like this. So we’ll see many more just like it, unless we do something about it.

In other areas, we as a society recognized disproportionate risks and built regulatory and enforcement mechanisms to prevent or at least manage them. Sometimes this works better than others, but this is how we got to nuclear disarmament, for example. Not a perfect system for sure, but it’s been kinda-sorta working. And kinda-sorta is sometimes really all you can hope for. At least we haven’t seen any rogue nuclear warheads explode in all the years since 1945 — so that’s a success.

Now, facial recognition isn’t as immediately deadly as a nuclear bomb. But it’s very, very harmful at scale. The potential for abuse is so consistently big that we might as well ban it outright. And while the immediate danger is lower than that of a nuclear bomb, the barrier to entry is also that much lower: the tooling and the knowledge are there, the training data is there, and it’s near-trivial to launch this type of product now. So we have to expect this to be a constant threat for the rest of our lives.

So what mechanisms do we have to mitigate those risks? I argue we need an outright ban, also and especially in security contexts. GDPR and its mechanisms for requiring consent point in the right direction. The EU’s AI strategy (as per the recently leaked documents) considers such a ban, but with a (silly, really) exception for use in security contexts.

But let’s look at facial recognition as a category and consider its real implications: otherwise, we’ll be playing whack-a-mole for decades.

Smart Cities & Human Rights

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:02.

If you’ve followed this blog or my work at all, you’ll know that I’ve been doing a fair bit of work around Smart Cities: what they mean from a citizens’ and rights perspective, and how you can analyze Smart City projects through a lens of responsible technology (which of course has also been the main mission of our non-profit org, ThingsCon).

For years, I’ve argued that we shouldn’t look at how we can and should connect public space through tech goggles, but rather from a rights-based perspective. It’s not about what we can do but what we should do, after all.

But while I’m convinced that’s the right approach, it’s been non-trivial to figure out what to base the argument on: What’s the most appropriate foundation to build a “Smart City Rights” perspective on?

A recent conversation led me to sketch out this rough outline which I believe points in the right direction:

Image: A sketch for a basis for a Smart City and Human Rights analytical framework

There are adjacent initiatives and frameworks that can complement and flank anything based on these three (like the Vision for a Shared Digital Europe and its commons-focused approach), and of course this also goes well with the EU’s Horizon 2020 City Mission for Climate-neutral and smart cities. So this is something I’m confident can be fleshed out into something solid.

Are there any other key documents I’m missing that absolutely should be incorporated here?

Category Error: Tracking Ads Are Not a Funding Mechanism

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:01.

A note to put this in perspective: This blog post doesn’t pull any punches, and there will be legitimate exceptions to the broad strokes I’ll be painting here. But I believe that this category error is real and has disastrous negative effects. So let’s get right to it.

I’ve been thinking a lot about category errors: The error of thinking of an issue as being in a certain category of problems even though it is primarily in another. Meaning that all analysis and debate will by necessity miss the intended goals until we shift this debate into a more appropriate category.

I increasingly believe that it is a category error to think of online advertising as a means to fund the creation of content. It’s not that online advertising doesn’t fund the creation of content, but this is almost a side effect, and that function is dwarfed by the negative unintended consequences it enables.

When we discuss ads online, it’s usually in the framing of funding. Like this: Ads pay for free news. Or: We want to pay our writers, and ads are how we can do that.

To be clear, these are solid arguments. Because you want to keep news and other types of journalism accessible to as many people as possible (and it doesn’t get more accessible than “free”). And you do want to pay writers, editors and all the others involved in creating media products.

However.

Those sentences make sense if we consider them in their original context (newspaper ads) as well as the original digital context (banner ads on websites).

There, the social contract was essentially this: I give my attention to a company’s advertisement, and that company gives the media outlet money. It wasn’t a terribly elegant trade in that there were still (at least) three parties involved, but it was pretty straightforward.

Fast forward a few years, and tracking of the success of individual ads gets introduced, in steps: First, how many times is this ad displayed to readers? Then, how many readers click it? Then, how many readers click it and follow through with a purchase? There’s a little more to it, but that’s the basic outline of what was going on.

Fast forward a few years again, and now we have a very different picture. Now, ads place cookies on readers’ devices, and track not just how often they’ve been displayed or clicked, or how often they convert to a purchase.

Contemporary tracking ads and their various cookies also do things like these: Track whether a reader moves the cursor over the ad. Track from which website a reader comes to the current website and where they head afterwards. Track the whole journey of any user through the web. Track the software and hardware of any reader’s devices and create unique fingerprints to match with other profiles. Match movement through the web with social media profiles and activities and likes and shares and faves. Track movement of the reader in the physical world. Build profiles of readers combined from all of this and then sell those in aggregate or individually to third parties. Allow extremely granular targeting of advertisements to any reader, down to the level of an ad personalized for just one person. And so on.
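
To make that list a bit more concrete, here is a purely hypothetical browser-side sketch of the kinds of signals such a script can gather; the endpoint, cookie name and ad-slot id are invented for illustration and don’t correspond to any specific vendor’s code.

    // Illustrative only: the kinds of signals a tracking script can collect in
    // the browser. Endpoint, cookie name and element id are made up.
    const fingerprint = {
      referrer: document.referrer,                      // where the reader came from
      userAgent: navigator.userAgent,                   // software/hardware hints
      language: navigator.language,
      screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    };

    // Combined with a long-lived cookie, this becomes a fairly stable identifier
    // that can be matched against profiles collected elsewhere.
    document.cookie = "tracker_id=abc123; max-age=31536000; SameSite=None; Secure";

    // Even cursor movement over the ad slot can be reported back.
    document.getElementById("ad-slot")?.addEventListener("mouseover", () => {
      navigator.sendBeacon("https://tracker.example/hover", JSON.stringify(fingerprint));
    });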

This is nothing like the social contract laid out above, even though the language we use is still the same. Here, the implied social contract is more like this: I get to look a little at a website without paying money, and that website and its owner and everyone the owner chooses to make deals with gets to take an in-depth look at all my online behaviors and a significant chunk of my offline behaviors, too, all without a way for me to see what’s going on, or of opting out of it.

And that’s not just a mouthful, it’s also of course not the social contract anyone has signed up for. It’s completely opaque, and there’s no real alternative when you move through the web, unless you really know about operational security in a way that no normal person should ever have to, just in order to read the news.

This micro-targeting is also at the core of what may well (even though we won’t have reliable data on it until it’s too late) undermine and seriously threaten our political discourse. It allows anti-democratic actors of all stripes to spread disinformation (aka “to lie”) without oversight, and it’s been shown to be a driver of radicalization by matching supply with demand at a whole new scale.

Even if you don’t want to go all the way to this doomsday scenario, the behavior tracking and nudging that is supposed to streamline readers into just buying more stuff all the time (you probably know how it feels to be chased around the web by an ad for a thing you looked up online once) without a reasonable chance to stop it is, at best, nasty. At worst, illegal under at least GDPR, as a recent study demonstrated. It also creates a surprising — and entirely avoidable! — carbon footprint.

So, to sum up, negative externalities of tracking ads include:

  • micro-targeting used to undermine the democratic process through disinformation and other means;
  • breaches of privacy and data rights;
  • manipulation of user behavior, and negatively impacting user agency;
  • and an insane carbon footprint.

Of course, the major tracking players benefit a great deal financially: Facebook, who make a point of not fact-checking any political ads, i.e. they willingly embrace those anti-democratic activities. Google, who are one of the biggest providers of online ad tracking and also own the biggest browser and the biggest mobile phone operating system, i.e. they collect data in more places than anyone else. And all the data brokers, the shadiest of all shadow industries.

Let me be clear, none of this is acceptable, and it is beyond me that at least parts of it are even legal.

So, where does that lead us?

I argue we need to stop talking about tracking ads as if they were part of our social contract for access to journalism. Instead, we need to name and frame this in the right category:

Tracking ads are not a funding method for online content. Tracking ads are the infrastructure for surveillance & manipulation, and a massive attack vector for undermining society and its institutions.

Funding of online content is a small side-effect. And I’d argue that while we need to fund that content, we can’t keep doing it this way no matter what. Give me dumb (i.e. privacy friendly, non-tracking) ads any day to pay for content. Or, if it helps keep the content free (both financially and free of tracking) for others, then I think we should also consider paying for more news if we’re in any financial position to do so.

(What we shouldn’t do is just pay for our own privacy and let everyone else still be spied on; that’s not really a desirable option. But even that doesn’t currently exist: If you pay for a subscription you’ll still be tracked just like everyone else, only with some other values in your profile, like “has demonstrated willingness to spend money on online content”.)

So, let’s recognize this category error for what it is. We should never again repeat the statement that ads pay for content; they do not. (Digital ad spend goes up and up, but over the last 15 years or so newspaper revenue from digital ads has stayed pretty flat, and print collapsed.) Ads online today are almost completely tracking ads, and those are just surveillance infrastructure, period.

It’s surveillance with a lot of negative impact and some positive side effects. That’s not good enough. So let’s start from there, and build on that, and figure out better models.