Trustmarks, trustmarks, trustmarks

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:08.

A couple of years ago, with ThingsCon and support from Mozilla, we launched a trustmark for IoT: The Trustable Technology Mark.

While launching and growing the Trustable Technology Mark hasn’t been easy and we’re currently reviewing our setup, we learned a lot during the research and implementation phase. So occasionally, others will ping us for some input on their own research journey. And since we learned what we learned, to a large degree, from others who generously shared their insights and time with us while we did our own initial research (Alex, Laura, JP: You’re all my heroes!), we’re happy to share what we’ve learned, too. After all, we all want the same thing: Technology that’s responsibly made and respects our rights.

So I’m delighted to see that one of those inputs we had the opportunity to give led to an excellent report on trustmarks for digital technology published by NGI Forward: Digital Trustmarks (PDF).

It’s summarized well on Nesta’s website, too: A trustmark for the internet?

The report gives a comprehensive look at why a trustmark for digital technology is very much needed, where the challenges and opportunities lie, and it offers pathways worth exploring.

Special thanks to author Hessy Elliott for the generous acknowledgements, too.

App In, Driver Out: Why Digital Concierge Services aren’t Quite the Future

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:07.

Two things, side by side, in friction:

(1) App In, Driver Out

One of the most dominant business models of the last 5-10 years has been what I think of as app in, driver out: All kinds of services re-packaged to accept orders via an app, and deliver the service through a driver.

Uber may be the most prominent but certainly isn’t alone. We’ve seen the same for food delivery, for laundry pick-up and delivery, for really all kinds of things.

It’s essentially a concierge service, and hence a pure luxury offering: offered, in this context, with a digital component and at an extremely low price.

(2) Innovation trickle-down from elites

The Varian Rule (2011) states that “A simple way to forecast the future is to look at what rich people have today; middle-income people will have something equivalent in 10 years, and poor people will have it in an additional decade.” Which is, by the way, just a way of rephrasing William Gibson’s famous line “The future is already here — it’s just unevenly distributed”, which makes the same point: elites have access to innovation before the mainstream does. (Since Gibson has been quoted saying this since the early 1990s, and I like him better than Varian, I’ll stay loyal to his version.)

But there’s definitely something there, to a degree. Elites — financial or technological — have early access to things that aren’t yet available or affordable to the mainstream but might be soon. In fact, not just elites. As Alipasha rightly points out, innovation doesn’t manifest only among elites, but at the edges of society more generally: subcultures, special needs, street fashion, you name it.

What shape that mainstreaming might take, if it’s the real thing or some watered-down version, is always hard to predict. Commercial air travel was certainly only affordable to elites first, then later to everybody — in this case, the essential product was the same even though the experience differed along a spectrum of convenience. Personal assistants are available to elites, yet their mainstream versions — digital assistants — are nowhere near the real deal: They’re totally different in nature and deliver a completely different value, if any.

What type of future do we want?

So where does that leave us? It turns out that this type of trickle-down only works for products that can get cheaper through mass production or automation. Which is, admittedly, exactly what you’d expect intuitively: no surprise here at all. So that’s great.

Unless those services are fully digitized or automated through things like autonomous delivery vehicles, these on-demand services simply cannot reproduce this level of concierge service.

We can digitally model the input side: app-based input can be made convenient and powerful enough. But we can’t lower the costs on the output side enough, without massively externalizing costs to the environment: Automating delivery through, say, drones, might theoretically work but at scale would unleash its own special kind of hell into the urban landscape. And unless we want to go the route of exploitation, humans need to get paid a living wage, so there are no savings to be had there, either.

Extra services cost extra money, which is why these app in, driver out services are crumbling all over the place.

So if personalized, cheap laundry delivery might sound too good to be true, that might be because it is. While I’d enjoy the service, I don’t think I’m willing to pay these externalized costs. This wouldn’t be a future I want.

Car-free cities

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:06.

Berlin, like many cities, is somewhat crippled by cars. Less so than, say, New York, London, or Los Angeles, historically. But there are too many cars, and too many of them blocking the bike lanes, where those meaningfully exist. Not a week goes by without me almost getting run over by a car on my way to work, because the bike lanes are crap and usually serve primarily as illegal-but-unenforced car parking. But I digress.

Over the last year or so, we’ve seen a refreshing, delightful surge of experiments with car-free streets or neighborhoods.

Oslo banned private cars from parts of the inner city and “closed off certain streets in the centre to cars entirely. They have also removed almost all parking spots and replaced them with cycling lanes, benches and miniature parks.”

San Francisco has closed Market Street for private cars: “Only buses, streetcars, traditional taxis, ambulances, and freight drop-offs are still allowed.”

And even in Manhattan, 14th street is blocked now for cars. The NYC rule applies – if you can make it there, you can make it anywhere.

Berlin, of course, hasn’t even started that conversation, so I keep passing ghost bikes in our neighborhood with dramatic frequency. Accidents involving bikes have gone up dramatically over the last 20 years in Berlin (here’s the official statistic, in a basically unreadable PDF), but this number doesn’t even tell half the story: Most accidents are only barely avoided, and hence never show up in those statistics. My personal weekly near-death experience biking to work? It will never show up there as long as it stays “near-death”.

Most of this would be completely avoidable: Fewer cars, better bike and pedestrian infrastructure, better public transport. Instead, Berlin is building a highway into the city as if it were 1950. We truly have the least progressive leftist government of all time here.

By the way, since I work a lot in the area of Smart Cities: I think Smart Cities have a lot to contribute to urban mobility — but if I’m totally honest, probably a lot less than good old-fashioned intelligent urban planning. We have decades of scientific research in this field, plus thousands of years of history to look at. Yet, somehow we built everything as if the early 20th century model is the default for urban living rather than the total exception that it will likely turn out to be. So let’s get that low-hanging fruit first, and learn to walk before we run.

CityLab argues that car-free cities will soon be the norm. I tend to agree. And I hope we get there sooner rather than later.

Engesser’s Law

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:05.

A good friend of mine, back when we were students, studied film, among other things. We’d go to the movie theater frequently. At some point, he jokingly pointed out a useful guideline to me, that I found can be usefully applied way outside of movie theater visits.

So today I present this back to you, paraphrased, as Engesser’s Law:

“If you notice you’ll need to go to the bathroom before the movie ends, the best time to go is when you notice, as the movie will be more interesting at any later stage going forward.”

Because movies are built, obviously, with a dramatic arc that goes up and up and up. So, if you know it’s necessary, now’s better than later. Any later interruption will be disproportionately worse. (If it’s a good movie, that is.)

So I’m taking the liberty to add two corollaries and expand this from the world of movie theaters into everyday life, both work and personal:

First corollary: If you know any task will become urgent later, the best time to finish it is right now.

Second corollary: If you delay finishing a task until it becomes too urgent to delay further, you reduce your own agency and may create avoidable additional damage to yourself and possibly others.

Cost-benefit analysis, Data-Driven Infrastructure edition

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:04.

Making (business, policy…) decisions by performing a cost-benefit analysis of some sort is a common approach. Sometimes this is done via a rigorous process, sometimes it’s ballparked — and depending on the context, that’s OK.

One thing is pretty constant: In a cost-benefit analysis you traditionally work on the basis of reasonably expected costs and reasonably expected benefits. If the benefits outweigh the costs, green light.

Now, I’d argue that for data-driven infrastructure(-ish) projects, we need to set a higher bar.

By data-driven infrastructure I mean infrastructure(ish) things like digital platforms, smart city projects, etc. that collect data, process data, feed into or serve as AI or algorithmic decision-making (ADM) systems, etc. This may increasingly include what’s traditionally included under the umbrella of critical infrastructure but extends well beyond.

For this type of data-driven infrastructure (DDI), we need a different balance. Or, maybe even better, we need a more thorough understanding of what can be reasonably expected.

I argue that for DDI, guaranteed improvement must outweigh the worst case scenario risks.

If the last decade has shown us anything, it’s that data-driven infrastructure will be abused to its full potential.

From criminals to commercial and governmental actors, from legitimate to rogue: if there is valuable data, we’ll see strong interest in that honey pot. Hence, we need to assume at least some of those actors will get access to it. So whatever could happen when they do — which differs dramatically depending on which type, or which combination of types, of actors that is — is what we have to factor in. The same goes for the opportunity costs, expertise drain, and newly introduced dependencies that come with vendor lock-in.

All of this — that level of failure — should be the new “reasonable” expectation on the cost side.

But in order to make semantic capture of the term “reasonable” a little bit harder, I’m proposing to be very explicit about what we mean by this:

So instead of “Let’s compare what happens if things go kinda-sorta OK on the benefit side and only go kinda-sorta wrong on the cost side”, let’s say “the absolutely guaranteed improvements on the benefit side must significantly outweigh the worst case failure modes on the costs side.”

For DDI, let’s work with aggressive-pessimistic scenarios for the costs/risk side, and conservative scenarios for the benefit side. The more critical the infrastructure, the more thorough we need to be.
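To make the asymmetry of this rule concrete, here’s a minimal sketch. Everything in it is hypothetical — the function name, the numbers, and the idea of expressing “significantly outweigh” as a safety factor are all illustrative assumptions, not a real appraisal method:

```python
def ddi_green_light(guaranteed_benefit, worst_case_cost, safety_factor=2.0):
    """Sketch of the proposed DDI decision rule: the conservative
    (guaranteed) benefit estimate must significantly outweigh the
    aggressive-pessimistic (worst-case) cost estimate.

    'Significantly' is modeled here, purely for illustration, as a
    multiplicative safety factor."""
    return guaranteed_benefit >= safety_factor * worst_case_cost

# A traditional cost-benefit analysis would compare expected values.
# Under this rule, the same project can fail even when the expected
# benefit exceeds the expected cost, because we compare the
# conservative benefit against the worst-case cost instead.
print(ddi_green_light(guaranteed_benefit=10, worst_case_cost=8))  # no green light
print(ddi_green_light(guaranteed_benefit=20, worst_case_cost=8))  # green light
```

The point of the sketch is only the shape of the comparison: which scenario goes on which side of the inequality, and that the bar (the safety factor) should rise with how critical the infrastructure is.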

That should make for a much more interesting debate, and certainly for more insightful scenario planning.

Monthnotes for January 2020

January was mostly for writing, and some scheming for projects yet-to-be-unveiled.

ONGOING WORK

On a lark, I switched back to using project code names. Autonomous Antelope is in the final writing stage. Bamboozling Badger is about to be published. Colorful Caribou still needs a bit of polishing, and for Eerie Eraser, we’re drafting a concept to prototype and test soon.

WHAT’S NEXT?

Lots of writing again this month, then maybe a break, then back into the fray. Looks like in April or so, there’ll be capacity to take on new projects — let’s discuss soon.

Nuclear disarmament but for surveillance tech?

N

This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:03.

Surely, the name Clearview AI has crossed your radar these last couple of weeks, since the NYTimes published this profile. In a nutshell, Clearview is a small company that has built a facial recognition system that is seeded with a ton of faces scraped from the big social media sites, and grows from there through user uploads. Take a photo of a person, see all their matches across photos, social media sites, etc.

It’s clearly a surveillance machine, and just as clearly doesn’t bother with things like consent.

Maybe unsurprisingly, the main customer base is law enforcement, even though Clearview is entirely proprietary (i.e. no oversight of how it works), unstudied (there’s been no meaningful research to examine essential things like how many false positives the service generates) and quite possibly illegal (no consent from most people in that database).

The thing is: It’s now pretty simple to build something like this. So we’ll see many more just like it, unless we do something about it.

In other areas, as a society we recognized a disproportionate risk and built regulatory and enforcement mechanisms to prevent, or at least manage, those risks. Sometimes this works better than others, but this is how we got to nuclear disarmament, for example. Not a perfect system for sure, but it’s been kinda-sorta working. And kinda-sorta is sometimes really all you can hope for. At least we haven’t seen any rogue nuclear warheads explode in all the years since 1945 — so that’s a success.

Now, facial recognition isn’t as immediately deadly as a nuclear bomb. But it’s very, very bad at scale. The potential for abuse is consistently so big that we might as well ban it outright. And while the immediate danger is lower than that of a nuclear bomb, the barrier to entry is that much lower, too: The tooling and the knowledge are there, the training data is there, and it’s near-trivial to launch this type of product now. So we have to expect this to remain a constant, serious threat for the rest of our lives.

So what mechanisms do we have to mitigate those risks? I argue we need an outright ban, also and especially in security contexts. GDPR and its mechanisms for requiring consent point in the right direction. The EU’s AI strategy (as per the recently leaked documents) considers such a ban, but with a (silly, really) exception for security contexts.

So let’s look at facial recognition as a category and consider its real implications: Otherwise, we’ll be playing whack-a-mole for decades.