There are two major threat models to consider when discussing algorithmic decision-making (ADM) systems; I’ll get to them in a moment.
The other day, when I spoke to a few students of Utrecht University’s Data-Driven Design master’s program (which looks ace, by the way), I noticed once more: One of these threat models gets so much more attention than the other.
What threat models are there? Mainly, I believe, these two can serve as “buckets” to sort more concrete concerns:
- What if this ADM system doesn’t work as intended? In other words, what are the unintended consequences of a system failure? This is the bucket that includes algorithmic bias, biased training data and the like: A system that isn’t perfect, but is considered valid overall. The obvious conclusion that tends to come up here is to fix the system. This is important: The system itself gets validated, and it’s just (more or less minor) details that get “fixed”.
The second threat model gets less attention, but I’m convinced it’s both more interesting and more relevant:

- What if this ADM system works perfectly as intended? What if, in other words, the system does exactly what it’s intended to do but still causes harm? This is the bucket of (hopefully) unintended consequences. Who might be exposed to additional risks or burdens if the system works, who might lose agency, who might get disempowered? This question allows a much bigger framing of the issue.
Both of these threat models are important, and both are interesting. Both hint at power dynamics, and at how those dynamics might shift when a technological system is introduced.
The first seems easier to tackle at first glance, because (famous last words!) how hard can it be to get better training data, fine-tune the algorithm, and so on? Alas, the risk here is looking at only one symptom at a time, thus failing to address the underlying issues, which tend to be much, much larger. After all, biased training data usually is a stand-in for systemic discrimination. But cleaning up a data set seems easier than solving discrimination, of course.
The second is much more interesting to me, though, simply because it allows us to ask bigger questions. If an ADM system creates serious issues, even if just for a few people or groups, or maybe for a great many, then maybe the system simply has no legitimate reason to exist and be implemented. Thus, this gives us a real choice: Should we really use this, or drop it altogether? This is the question that gives us agency, a societal choice. It’s a potential answer to techno-determinism.
An example, however oversimplified it might be: Let’s look at facial recognition systems in public space. Currently, these systems tend to trigger all kinds of false positives depending on skin color and other factors. Their training data is, by and large, considered biased. So option 1 would be to fix the training data and the algorithms, to patch the tech. But even if that led to a perfectly fair algorithm (which by definition doesn’t exist), it wouldn’t matter, because we know that mass surveillance carries its own costs for society. A healthy democracy needs freedoms, and surveillance creates chilling effects that undermine those freedoms: People change their behavior when they assume they are under surveillance. So the real question here would be (threat model 2!): Should facial recognition systems be allowed in public space?
That is the type of question we should be asking more often. And we need to be open to the possible answers: No is as legitimate an answer as yes, and frequently we might end up with something more complex. Maybe it depends on the context. This isn’t always satisfying, but it opens the door to real debate. And that debate might take a while. In the meantime, it’s perfectly okay not to implement a questionable ADM system.
Big questions sometimes need time to deliberate. Space to breathe. Let’s make sure we make that time and space.
Image: Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / CC-BY 4.0