The spread of AI systems effectively acts as a forcing function for societal debates: underlying conflicts are pushed to the foreground because these systems have immediate impact, raising the salience and urgency of addressing those conflicts.
Please note that I’m using “AI” here in the colloquial sense, i.e. as an umbrella term for machine learning and similar data-driven systems.
AI systems are usually trained on data sets. Data sets, in turn, are a type of historical record. Like any historical record, they can be more or less accurate, more or less a record of what's relevant, and so on; in other words, they can be of varying quality. But what's recorded and included in those records, and to a degree also how that data is interpreted, is by no means reliably a universal or universally fair account of history. It's shaped by those in positions of power: History is written by the winners, as the saying goes.
So: Data sets represent both an incomplete slice of history and an interpretation of it.
Meaning: Data sets tend to replicate historically grown power structures. Thus, by implementing AI systems trained on those data sets, there’s a very real risk of reinforcing existing power structures even if they’re fundamentally unfair and unjust.
Which is why, when AI systems are introduced, societal conflicts that might otherwise simmer quietly are suddenly catapulted to a whole new level of urgency: The best time to stop an injustice is before it happens. Once AI systems become part of the infrastructure, part of the fabric of society, other systems adapt to them, build on top of them, and so on. Negative impacts are best avoided through prevention, not fixed after the fact.
Long story short: I think the idea of AI systems as a forcing function for societal debate can be a very helpful lens for analysis.