Platforms — the big tech, social media type of platforms — have become a central battleground for how public opinion is formed. Which also means: they have become both a battleground for political campaigning and a focus area of politics.
For a few years now, there has been a consensus of sorts across the political spectrum that platforms specifically, and Big Tech more generally, needed to be reined in. It was, however, always a bit of an oddball consensus, since all parties got to that same conclusion via wildly different trajectories and with frankly very little overlap in their analysis or politics.
Both sides genuinely agreed (I think? Mostly?) that social media can pose a threat to national security by serving as a vector for foreign influence and election meddling, and that the monopolistic tendencies of the internet — through network effects, lock-in, and winner-takes-all dynamics — are anti-competitive and hence bad for innovation.
Beyond that, there wasn’t much agreement. In the broadest of strokes, the left didn’t like the highly neo-liberal, globalist, populist, “we’re untouchable by national governments” tendencies associated with global tech companies. The right didn’t like impediments to free speech, and perceived tech companies to be too left-leaning and woke.
So. For a short while, everyone worked together on “reining in” Big Tech, whatever that meant. Both in the US and in Europe, we saw major regulatory pushes. Especially when it came to protecting (US and European) elections, platforms stepped up their game in content moderation and trust & safety for a while. That may be about to change.
For one, since Musk has drastically cut back content moderation at X (formerly Twitter), other platforms are learning that there are barely any legal consequences for doing so. (The advertising market is a different story.) There’s not much reason to believe that the lessons of the US 2016 and 2020 elections are going to lead to much better election protection in 2024, and the rest of the world never got that level of attention from the platforms anyway. Here’s what analyst Katie Harbath says about the current state of play at the platforms (highlights mine):
This is a volatile time, and much will change over the next 18 months. Here’s a look at where things stand:
- Legacy tech companies (Meta, Microsoft, Google/YouTube): This time period reminds me of the first half of 2017. Platforms and the rest of the world were still reeling from the electoral shocks of 2016 – especially Brexit and Trump – and trying to figure out how they would pivot. Today, these platforms are adjusting to the layoffs from the last eight months, the rapid rise and attention to artificial intelligence, and new regulatory pressures. Many have rolled back the policies enacted around COVID and the 2020 elections. Some are just looking ahead to 2024, though many haven’t yet.
So the platforms may be neither as ready for 2024, nor as engaged on the issue of protecting elections. On top of that, the broad political agreement that social media in their current form are a potential risk to democracy is crumbling as Musk steers X more explicitly to the right (by, for example, sharing conspiracy theories, reinstating formerly blocked accounts including Trump’s, and personally hosting GOP election campaign events).
Harbath again (highlights mine):
One thing I’ll point out is Morning Consult’s State of Technology Report. I’ll call out this part in particular:
Even in this moment of respite, however, the road ahead for tech will only become more complicated, not less. Generative AI emerged as a new industry and could bring new opportunities but is also fueling concerns among consumers. Politically, a rising share of U.S. Republicans are finding that tech has a positive impact on discourse as X, formerly known as Twitter, cozies up to conservatives. Tech, AI and online discourse are sure to become key areas of focus of the 2024 elections.
So just as we better understand the impact social media platforms have and how adversaries use them, and just as the body of experience in the Trust & Safety space (incl. content moderation) has grown to new levels, we see teams slashed and support for broader regulation at risk of crumbling. How generative AI might or might not impact social media is not yet fully understood, and I assume it won’t be for a while. At the same time, there are (as always) a slew of major elections coming up, including but not limited to the US and Europe.
So strap in, I suspect we’re in for a wild ride over the next two years.