So… we’re heading towards a major election year. More specifically, a year with something like 50 elections globally, including the US presidential election and the European Parliament elections. That’s a lot of democracy at work. Looking at it from a slightly more sinister angle, it’s also a lot of high-value targets for bad actors.
The Center for Democracy and Technology (CDT) has a recent report out (Seismic Shifts, PDF) that looks at the state of counter-election influence measures. I’ll include the full title of the report since it sets the tone well:
Seismic Shifts: How Economic, Technological, and Political Trends are Challenging Independent Counter-Election-Disinformation Initiatives in the United States
More simply put: Where are we in terms of defending democracy against attacks, specifically on social media platforms? The answer is sobering. After a terrible 2016 US election cycle, awareness and salience of disinformation online shot up, with the effect that during the 2020 US elections there were pretty strong measures in place. The information environment was much better understood, we kinda-sorta knew where attacks came from and what form they might take. There was a learning curve, but insights were gained and applied. Of course the same holds true for the bad actors.
Fast forward to 2024. Paraphrasing some of the key insights of the CDT report, a number of things happened after 2020. Musk bought Twitter and slashed the trust & safety team in what appears to be an ideological move. Tech firms laid off a ton of people, including from trust & safety and election integrity teams. The political climate became even more charged, especially in the US. Researchers and activists and staffers working on these issues face a lot of pressure, including threats (legal and otherwise). And Musk’s dismantling of Twitter’s teams and policies gave the other platforms some level of cover to also weaken their own policies regarding misinfo/disinfo.
All in all, this means that readiness in terms of election integrity hasn’t continued to improve the way it did from 2016 to 2020, but has instead somewhat regressed. We’re going into 2024 less ready, with less protection, with fewer resources, with weaker policies. The report suggests a few things funders specifically can do to support those researchers at the frontlines, including working with research institutions and nonprofits to create “shared resources and practices for researchers under attack. These might include pools for legal defense, cybersecurity assistance, and proactively developed communications plans for responding to coordinated attacks.”
So… we’re experiencing some turbulence. Please return to your seats and fasten your seatbelts.
- Yoel Roth, former Twitter head of Trust & Safety and expert extraordinaire, put together and published online a syllabus for a course titled “Governing the internet: Critical perspectives on online trust and safety,” and it’s so, so good to see these types of resources shared online.
- Rafael Behr has a great piece contextualizing the UK’s AI Safety Summit: https://www.theguardian.com/commentisfree/2023/nov/01/rishi-sunaks-ai-safety-britains-brexit-dilemmas-elon-musk
- Speaking of AI regulation news, the US hasn’t been sleeping on this either and is (maybe for the first time?) competing on regulation, which is unusual. Biden just signed an executive order on AI (fact sheet), and VP Harris announced a new initiative to advance safe and responsible use of AI (fact sheet). This initiative also includes a philanthropic spend of $200 million to fund work at the intersection of AI and democracy, workers’ rights, international rules, and more from the likes of Mozilla, Ford, Open Society, MacArthur, Packard and others (Mozilla press release).
Note: This is cross-posted from my newsletter in an attempt both to make it easier to read via RSS feed and to have it in my own independent archives. You can subscribe here to get it delivered to your inbox.
Header image “election integrity” generated by DALL-E 3.