Nuclear disarmament but for surveillance tech?


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:03.

Surely, the name Clearview AI has crossed your radar in the last couple of weeks, since the NYTimes published this profile. In a nutshell, Clearview is a small company that has built a facial recognition system seeded with a ton of faces scraped from the big social media sites, and that grows from there through user uploads. Take a photo of a person, and see all their matches across photos, social media sites, and so on.

It’s clearly a surveillance machine, and just as clearly doesn’t bother with things like consent.

Maybe unsurprisingly, the main customer base is law enforcement, even though Clearview is entirely proprietary (i.e. no oversight of how it works), unstudied (there’s been no meaningful research to examine essential things like how many false positives the service generates) and quite possibly illegal (no consent from most people in that database).

The thing is: It’s now pretty simple to build something like this. So we’ll see many more just like it, unless we do something about it.
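Just how simple? As an illustrative sketch only (the 128-dimensional embeddings, the random stand-in database, and the 0.6 distance threshold are assumptions for the demo, not anyone's actual pipeline): once faces have been turned into embedding vectors by any off-the-shelf model, the matching core of such a system is a nearest-neighbor lookup that fits in a dozen lines of Python:

```python
import numpy as np

# Toy database of face "embeddings": in a real system these vectors would
# come from a pretrained face-recognition model; here we use random stand-ins.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))   # 1000 known faces, 128-dim each
labels = [f"person_{i}" for i in range(1000)]

def identify(query, database, labels, threshold=0.6):
    """Return the label of the closest face, or None if nothing is close enough."""
    distances = np.linalg.norm(database - query, axis=1)
    best = int(np.argmin(distances))
    return labels[best] if distances[best] < threshold else None

# A query embedding that is a slightly noisy copy of entry 42 matches person_42.
query = database[42] + rng.normal(scale=0.001, size=128)
print(identify(query, database, labels))  # → person_42
```

Everything hard — the embedding model, the scraped training data — is freely available, which is exactly the point: the barrier to entry is gone.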

In other areas, as a society we recognized a disproportionate risk and built regulatory and enforcement mechanisms to prevent, or at least manage, that risk. Sometimes this works better than at other times, but this is how we got to nuclear disarmament, for example. Not a perfect system for sure, but it's been kinda-sorta working. And kinda-sorta is sometimes really all you can hope for. At least we haven't seen any rogue nuclear warheads explode in all the years since 1945, so that's a success.

Now, facial recognition isn't as immediately deadly as a nuclear bomb. But it's very, very bad at scale. The potential for abuse is so big that we might as well ban it outright. And while the immediate danger is lower than that of a nuclear bomb, the barrier to entry is also much lower: the tooling and the knowledge are out there, the training data is there, and it's near-trivial to launch this type of product now. So we have to expect this to be a constant threat for the rest of our lives.

So what mechanisms do we have to mitigate those risks? I argue we need an outright ban, including, and especially, in security contexts. GDPR and its consent requirements point in the right direction. The EU's AI strategy (as per the recently leaked documents) considers such a ban, but with a (silly, really) exception for use in security contexts.

So let's look at the real implications of facial recognition as a category, and regulate it as one: otherwise, we'll be playing whack-a-mole for decades.
