Here’s a story about how democracy, our very way of organizing our society and our lives, is under existential threat — because our public information sphere, our media environment, the way we collectively, in aggregate, make sense of the world is in poor health, and under attack from without and from within. Let’s go!
(I)
PREMISES
To understand how our digital information systems are undermining democracy, we first need to establish some common ground. Let’s begin with three premises that underpin my arguments:
- Democracy is the most desirable governance model.
- Democracy requires rights, rule of law, free discourse, informed citizens (among other things), and for our purposes today: a healthy public information sphere, i.e. a healthy media environment. I’ll be using both terms interchangeably to include media + social media + the internet. This public info sphere is how we make sense of the world. We don’t have to agree on all positions, but we need to navigate a shared understanding of things, establish a shared reality.
- A healthy public info sphere requires a diverse, healthy media ecosystem, including investigative journalism. Investigative journalism does not have a business model of its own; it needs to be financed through other means. It requires venues for public discourse where you can reasonably expect to have free speech, and where you can reasonably expect not to be harassed. These two can be in tension, but not as much as it might seem; we'll get to that later. It also requires trained journalists who get paid for their work, which sounds banal, but it's important.
If the public info sphere is unhealthy, democracy is unhealthy.
(II)
WHAT CHANGED: A NEW MEDIA WORLD
With those premises in place, the next step is to understand what changed, and why our digital public sphere is so dysfunctional today. A quick, simplified history to set the scene and introduce the important characters.
Spreading information at scale used to not be a thing, with the possible exception of the Catholic Church and maybe kings or queens. This started changing with Gutenberg's printing press in the mid-15th century.[^1]
The printing press democratized (to a degree) spreading information at scale (to a degree): People could hire printers to make leaflets and distribute them. This took some time to have impact because things (like printing presses) and information (like how and why to make printing presses) took time to spread, and because of other limiting factors: not that many people were literate, or had any idea what to use a printing press for. Probably the most notable early use of the printing press was Martin Luther spreading his 95 theses, and this was 77 years after Gutenberg invented the printing press. (And that happened within just a few hundred kilometers of where the printing press was invented, so the technology didn't have to spread very far!) So, things moved slower then. Still fast, but much, much slower than today. Newspapers soon started spreading, marking the birth of the first mass medium.
450 years or so later, radio was invented: the first broadcast medium and the beginning of a new era. Mass media could now be live, and they could be audio. It took some time for radios to become cheap enough to be in most households, but by the 1920s or so radio had become the first mass medium other than newspapers. Voices in your home from far away? Music without the physical presence of a band? Exciting! TV followed just a few decades later.
Both broadcast media were transformative. They also required the kind of infrastructure — and hence investment — that meant there were only so many parties able to participate. So it was pretty centralized, and few participants meant: It was easy to regulate. Governments knew who to talk to.
Fast forward to the early (website-based) internet of the late 1990s and early 2000s. It was highly decentralized, and everybody could publish easily and cheaply. Distribution and discovery were, however, quite limited: Publishing was easy, but finding people to read your content (it was mostly text-based due to bandwidth limitations) was not. There was no algorithm or feed to push content to users: They needed to actively seek out a website to read it. (A note to younger readers: This did not seem so outrageous at the time. People were used to having to go to stores to pick things up, because Amazon was barely getting started. You also could only access the internet from stationary computers.) For context: Around this time, Google was born (1998). Wikipedia, too (2001).
Then things started to pick up steam with the arrival of social media. (Sidenote: I believe we're seeing the end of a chapter of that era, but that's a conversation for another day.) Social media as a type of many-to-many conversation, of disintermediated discourse, without gatekeepers, out in public, online: this conceptually started in the early internet but really coalesced into something real with what was then called Web 2.0. Concretely, with a creative explosion, a wave of new services of which only a few are still household names today: YouTube (2005), Facebook (open to the wider public in 2006), Twitter (2006), Instagram (2010).
At first, social media was fairly decentralized, but connected through protocols and APIs. Lots of tech-savvy people launched new services, and thanks to these interoperability features users could move (themselves, their data, their networks and friend graphs) pretty freely among them. Content feeds were born. But these were simple feeds: unsorted, non-algorithmic, chronological. Around this time, iPhones launched (2007); Android and other smartphones followed quickly. The internet now lived in your pants pocket and came along wherever you went. It also now knew where you were, gaining context about you: The so-called local-social-mobile internet was born.
Personalized ads emerged as the dominant business model. And what a business model it was. It grew so much that it subsumed everything else. It grew like a cancer. Because there was infinite money to be made, Big Tech companies wanted to serve more personalized ads.
To serve more and better personalized ads, you need:
- better tracking of users’ interests and behavior, so we got tracking infrastructure that follows users even outside the services they’re using (a rough sketch of what that looks like follows after this list)
- more space to serve ads on, so people needed to be kept on services for as long as possible
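To make that first requirement a bit more tangible, here is a minimal, purely illustrative sketch of the classic "tracking pixel" mechanism that lets an ad company follow a user across third-party sites. The domain, parameters, and details are invented for illustration; real tracking stacks are far more elaborate (scripts, cookies, fingerprinting, server-side matching).

```typescript
// Illustrative sketch of cross-site tracking via a third-party pixel.
// "tracker.example" and the parameters are invented for illustration.
// Any site embedding a snippet like this reports the visit to the ad
// company, which can tie it to the user via a cookie set on its domain.
const img = new Image(1, 1); // invisible 1x1 image
img.src =
  "https://tracker.example/pixel?" +
  new URLSearchParams({
    page: location.href,          // which page the user is on right now
    referrer: document.referrer,  // where they came from
    t: Date.now().toString(),     // when they were there
  }).toString();
document.body.appendChild(img);
```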
And just like that, we were in the Era of Engagement Over Everything Else. This changes everything.
A few years later, the ad, search and social media companies are the most valuable companies in the history of the world. Very much against modern-day tradition, however, when they IPO’d, the voting rights were not distributed along with the ownership: Buying stock would not get shareholders an equal say in how the company is run. These companies used dual-class share structures to keep the voting rights concentrated in the hands of the founders and key executives. Shareholders bought into the value, but not into the governance. This will turn out to be important.
(III)
STATE OF THE UNION: A QUICK LOOK AT THE POWER DISTRIBUTION TODAY
So where does that leave us now? Here’s a look at the current information landscape—and how it’s affecting democratic processes.
A few pieces of evidence, submitted in no particular order:
With the exception of TikTok, all consumer-facing Big Tech companies are based in the US: most notably Google (owner of Google Search, YouTube, Google Maps, Gmail, and the AI tool Gemini, among many others) and Meta (owner of Facebook, Instagram, WhatsApp). Looking at AI companies (because it looks like there’s increasing overlap between AI and search/social, and, I suspect, personalized ads), there’s also OpenAI (maker of ChatGPT, with Microsoft as a major investor). So: “Tech power” is concentrated in the US. Which will become relevant in a moment.
But first, a side note: A court found that Google illegally monopolized certain ad tech markets and used its dominance in those markets to restrict competition and harm consumers. Meta is currently on trial to determine whether it is or was a social media monopolist that engaged in harmful monopolistic behavior. Even if they manage to lawyer their way out of serious consequences, it’s obvious that Big Tech is extremely dominant and has engaged in some pretty sketchy behavior.
Speaking of convictions, back to the previous point about tech power being so concentrated in the US. Big Tech executives have opportunistically rolled over and exposed their bellies after the recent US elections. They’ve been caving to any pressure from the US president and rolling back policies they had previously sworn were bedrocks of their convictions, including around content moderation, trust and safety, information integrity, DEI, and more. This was so easily possible because, as established before, the founders and executives have near-total control over these companies. If voting rights and control were spread out over a wide range of shareholders, this would at least have led to debates.
Without this moderating factor, these executives were rewarded with access to political power. The blurring lines between tech monopoly and state influence erode democratic checks and balances. When unelected, unaccountable CEOs act as government decision-makers, the democratic contract is diluted.
The most notable case is of course Elon Musk, who was for a while a fixture in the Oval Office, ran a quite possibly illegal but most definitely ideological purging campaign across the administration, and walked off with laptops full of highly sensitive citizen data. I probably don’t need to explain why this is both highly irregular and pretty worrying. Within days, the global landscape of tech, and the calculus of which Big Tech platforms can or cannot be trusted and for what types of interactions, changed dramatically. TikTok is not based in the US. Its parent company is, however, based in China, where the state has the right to access user data. Either way, it cannot exactly be considered a safe haven for user data.
All of these platforms now rank, structure, filter, amplify, target and disseminate their respective types of information (search results, social media posts, comments) algorithmically. How exactly these algorithms work is privileged information, so from the outside they are black boxes. We do know for sure, however, that the money is in the ads, so keeping users engaged is at least one of the top priorities. You keep users engaged by triggering intense emotion: Typical shortcuts are favoring content that outrages, or reinforces belonging to an ingroup, or reinforces others being an adversarial outgroup. In other words, the financial incentives are aligned so that companies financially benefit from making their content algorithms prioritize populist and divisive content (as well as the other end of the spectrum: generic, TV-like passive-consumption content like cooking videos).
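To make that incentive structure concrete, here is a deliberately over-simplified sketch of what engagement-optimized ranking boils down to. The real systems are proprietary, ML-driven, and vastly more complex; every field name and weight below is my assumption for illustration, not any platform's actual code.

```typescript
// Illustrative sketch only: not any platform's real ranking code.
// Assumption: each candidate post carries model-predicted probabilities
// of the interactions that keep users on the platform.
interface Candidate {
  id: string;
  predictedWatchSeconds: number; // hypothetical model output
  predictedShareProb: number;    // hypothetical model output
  predictedCommentProb: number;  // hypothetical model output
  emotionalIntensity: number;    // 0..1; outrage/conflict scores high
}

// Score by expected engagement. Note what is absent: accuracy,
// civic value, and user well-being never enter the objective.
function engagementScore(c: Candidate): number {
  return (
    0.4 * c.predictedWatchSeconds +
    3.0 * c.predictedShareProb +
    2.0 * c.predictedCommentProb +
    1.5 * c.emotionalIntensity // stand-in for the tilt toward charged content
  );
}

// Rank the feed purely by that score, highest first.
function rankFeed(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => engagementScore(b) - engagementScore(a));
}
```

The point of the sketch is the objective function: whatever the models predict will keep you interacting gets amplified, and nothing else appears in the formula.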
I will point to one exception where serving ads does not appear to be the focus, because making money does not seem to matter much, and that is X: Musk bought Twitter (now X) for a ridiculous $44b at the time, and it seems to have been a disastrous financial investment. However, considering how intensely political (and radicalized) he has become over the last few years, and that he is the richest person alive, this seems to have been a political play for him. I assume (but of course cannot know for sure) that he didn’t just want a platform for himself but mostly wanted to destroy the extremely lively and unusual discourse happening on Twitter. Which had lots of problems, of course, but the only problem he seemed to care about was that he perceived it as a woke platform and he didn’t like that.
What Twitter was, more than anything, was a highly unusual forum where different powerful, fairly elite groups interacted in ways they don’t anywhere else: journalists, VIPs, politicians, technologists all had discussions and mini-feuds and quips there. As far as I’m aware, nothing like it had existed before, and nothing has since. Personally, I loved it because it connected me with absolutely amazing people. Twitter enabled a relatively brief (2006-2022) phase of aggregating a bunch of communities that usually don’t mingle that publicly and to that degree, and that phase is now over. Because Musk bought and killed it, I believe on purpose and as a political project. And it earned him a spot in the Oval Office and access to a lot of US government resources, so I guess it was successful? So not ads in this case, but rather an old-school power move. But I digress.
The main takeaway here is this: Tech power is felt around the globe, but it is massively concentrated in the US. And US tech power has submitted to the new US government without a fight in exchange for short-term political gains. As far as I can tell from the outside, they did not fight for principles, their staff, or their users.
(IV)
THE ERA OF ENGAGEMENT OVER EVERYTHING ELSE, EXPANDED
The situation may be dire, but it’s not hopeless. Before we get to what we can do about it, though, we need to look more closely at the core dynamic: As mentioned before, optimizing for engagement changed everything.
To recap:
Because there was infinite money to be made, Big Tech companies wanted to serve more personalized ads. To serve more and better personalized ads, you need:
- better tracking of users’ interests and behavior, so we got tracking infrastructure that follows users even outside the services they’re using
- more places to serve ads on, so people needed to be kept on services for as long as possible
Social platforms, and frankly every type of scrolly content media, require users to stay on the site/app/platform. That is how they make money. If you have ever wondered why the infinite scroll exists, this is the reason: When content “ended”, some users might have left. Others might have refreshed, but you can’t risk anyone leaving. You need to capture all those eyeballs!
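Mechanically, the infinite scroll is simple. Here is a minimal sketch in browser TypeScript using the standard IntersectionObserver API; the endpoint, element IDs, and response shape are made up for illustration.

```typescript
// Illustrative sketch of infinite scroll: the feed never "ends",
// so there is never a natural moment to leave.
// "/api/feed", the element IDs, and the page size are hypothetical.
const sentinel = document.querySelector("#feed-end")!; // invisible marker at the bottom
const feed = document.querySelector("#feed")!;
let cursor: string | null = null;

async function loadMore(): Promise<void> {
  const res = await fetch(`/api/feed?after=${cursor ?? ""}&limit=20`);
  const { items, nextCursor } = await res.json();
  for (const item of items) {
    const el = document.createElement("article");
    el.textContent = item.text;
    feed.appendChild(el);
  }
  cursor = nextCursor; // there is always a next page
}

// Whenever the user scrolls near the bottom, quietly append more content.
new IntersectionObserver((entries) => {
  if (entries.some((e) => e.isIntersecting)) void loadMore();
}).observe(sentinel);
```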
How do you keep users engaged, and thereby on the platform, and thereby be able to keep serving them ads forever? By serving content they keep watching. Yes, part of the transition has been a migration first from text to images, then to video. Partially this is because now we can do this (there’s enough bandwidth and processing power on phones), partially because video is more engaging to the human brain.
So, we get new formats. Videos, nicely short, one seamlessly after the other: Perfectly optimized for quick dopamine hits. Also, algorithms that optimize for strong emotional reactions, as laid out before. A lot of emotion can be easily wrangled from populist, divisive, high-conflict type content. So, we get lots of strong opinions. Finally, the platforms track what you do online (because it indicates interest), in many cases even what you do elsewhere on the internet. Which is objectively weird and invasive, but stay with me. And they track how engaged a user is at any point.
Now, this isn’t an exact science, but there are strong indicators: Are they actively scrolling or otherwise interacting with a piece of content, like clicking, sharing, commenting? Is the phone upright in their hand or flat on the table? Whatever it takes, they try to track it, and whenever it looks like attention is fading, they serve a slightly different mix of content with the intent of pulling your attention back in. Depending on a user’s personality or mood on any given day, this might mean more cooking videos (soothing), fitness videos (aspirational), or more politicized, high-conflict content. The latter is the most problematic one, as we’ll see. In short, the video watches you as much as you watch the video.
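To illustrate that feedback loop (again, not any platform's actual code): a deliberately crude sketch in which invented interaction signals feed an attention estimate, and a dip in that estimate shifts the content mix toward whatever tends to re-hook that particular user. All signal names, weights, and thresholds are assumptions of mine.

```typescript
// Illustrative sketch of the attention feedback loop described above.
// Every signal, weight, and threshold here is invented for illustration.
interface AttentionSignals {
  secondsSinceLastScroll: number;
  recentTapsOrLikes: number;
  deviceFaceUp: boolean; // e.g. phone in hand vs. flat on the table
}

type MixLabel = "soothing" | "aspirational" | "high-conflict";

// Crude attention estimate from recent interaction signals.
function estimateAttention(s: AttentionSignals): number {
  let score = 1.0;
  if (s.secondsSinceLastScroll > 10) score -= 0.5;
  if (s.recentTapsOrLikes === 0) score -= 0.2;
  if (!s.deviceFaceUp) score -= 0.3;
  return Math.max(0, score);
}

// When attention drops, shift the content mix toward whatever has
// historically re-hooked this particular user.
function nextContentMix(s: AttentionSignals, userHook: MixLabel): MixLabel[] {
  if (estimateAttention(s) < 0.5) {
    return [userHook, userHook, "high-conflict"]; // escalate
  }
  return ["soothing", "aspirational", userHook]; // keep the usual blend
}
```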
All of this is bad for our collective attention and attention spans, but this isn’t the only problem.
Social media platforms got so good at engaging users, and ad and search companies got so good at serving and selling targeted advertising, that two things happened: One, they became the dominant players for advertising. Two, they reshaped what we expect as content formats to mostly mean short-form video.
By becoming the dominant players for advertising, they redirected money streams significantly: Not only does more money flow through them, a larger chunk of it remains with them. They capture a bigger slice of the cake, and less gets channeled to media outlets. Some of it also goes to a new class, the people we currently call content creators and/or influencers. Who, footnote, are for the most part not journalists, which will become important in a bit. The important takeaway here is that as more ad money flows through these Big Tech companies, less money gets channeled to media, especially to journalism.
By reshaping (through their sheer success!) expectations about what format content should take, they kinda-sorta forced media outlets to adapt their formats, primarily towards short-form video. Facebook was pretty blatant about this for a while: They would simply tell media outlets that short video formats would be algorithmically ranked higher than text-based content, so newsrooms rushed to switch over to short videos in order to preserve at least some income from their ever-shrinking slice of the ad-money cake. Switching to new formats in an already understaffed newsroom means more tradeoffs: There’s a real risk that other important formats or depth get lost because there simply aren’t enough hours in the day.
Then at some point Facebook decided that they’d rather keep all the traffic, and they didn’t want to promote news content anymore anyway, thank you very much. And just like that, journalism was worse off. A lot worse.
(V)
SO WHAT DOES THAT MEAN? HERE’S THE PROBLEM. PROBLEMS. WHATEVER.
Media, especially social media, now runs on the optimize-for-engagement logic and uses behavioral tracking to do so. This leads to massive second-order (systemic) issues.
The first second-order problem is that journalism is getting starved. There’s no nicer way to say this: We’re not currently providing the funding for journalism to happen at the scale we need it to happen. Media happens, yes. Social media happens, lots of it. But journalism, both the news-reporting type and the investigative type, is not happening anywhere near the scale we desperately need. AI-generated search results will speed this up and amplify it even more, as users have less reason to leave the search app, be it Google or ChatGPT or Perplexity.
Too much ad revenue is captured by tech and ad platforms, and too little is passed on. What is passed on does not sustain the journalism ecosystem; it flows instead to influencers, and to either non-political content or the opposite, pretty radical political content.
In a desperate move, news publishers have been retreating behind paywalls to preserve a modicum of revenue. Which is understandable, but it means that less (much, much less) of the general public gets to read their news. Public broadcasters are the exception, but these days they are often slashed or pressured by populist or right-wing governments, too. For example, in the US, federal funding for National Public Radio (NPR) and the Public Broadcasting Service (PBS) was just cut.
The second second-order problem is that we create perfect conditions for disinformation. Where trustworthy, fact-checked news isn’t easily and freely available, we basically roll out a red carpet for disinformation and charlatans to fill the void. Which is exactly what has been happening.
The third second-order problem is that we create new attack vectors for weakening democracy, and democracy’s ability to defend itself. Of course it’s not just that the financials happen to lead to more disinformation: Populists from within have a lot to gain from a weakening of journalism, which is why they frame “traditional” media and journalism as “fake news” and try to discredit them. It’s a communications strategy.
This ecosystem is also under attack from the outside. It’s been well documented how much Russia engages in hybrid warfare against Western democracies including the US and Europe through disinformation campaigns and by using social media to stoke pre-existing conflicts. Anything that distracts, divides and weakens resolve in these countries weakens them relative to the attackers. I personally wouldn’t be surprised if the majority of the culture wars we see on absolutely every issue under the sun could be traced back to those external attacks, but who knows. No matter the origin, we now live through these culture wars, much to everyone’s detriment. (Except the detriment of populists and the attackers, that is.)
The fourth second-order problem is that ad tech companies built a massive infrastructure to track behavior online in order to serve ads. And while it does serve ads, it is also the very infrastructure that powers all the rage-bait and divisive and radicalizing content we discussed above. So the infrastructure that was intended to serve better and more personalized ads really powers the attacks on our media environment, weakens democracy, and also fails at funding journalism. It’s a lose-lose, and in my view it has to go. Let’s kill it with fire.
The fifth and maybe most insidious second-order problem is that this type of engagement optimization turns media and social media sites into adversarial systems. Consider an old-school TV in the 1990s: It didn’t know how much attention you were paying; it just ran. Chances are that at some point it got too boring even without many other distractions, and your attention would wander. You might even switch it off. Online media, especially social media, do not work like that at all. As mentioned earlier, these apps constantly track whether you’re paying attention. And if they detect a drop in your attention, they actively change things up to regain it.
Apps watch you more intently than you watch them! This is not healthy. It’s completely insane and not OK at all. Especially for kids and teenagers this seems highly problematic to me, but let’s face it, even as an adult I find it hard not to get sucked into the scroll.
As generative AI gets more powerful and more ingrained in our lives, we should also expect AI-generated content to flood our channels even more: Meta literally plans for chatbots to fill Facebook users’ feeds with content. As populists openly talk about “flooding the zone with shit” as a strategy, generative AI will make it even easier for them to attack our attention.
(VI)
SO, WHAT NOW?
So where does all that leave us? I think we’re currently in a valley of tears, so to speak. I believe that currently, our public information sphere is in pretty bad shape. That adds another layer of stress onto our democratic systems that are already under pressure — around the world, from populism and right wing extremism and other stressors. I also believe that things can and will get better.
We are facing monumental global challenges: Climate change, protecting rights and democracy… that’s a tall order!
Currently, our tools for addressing them are kind of broken. A big part of our problem is that our information systems, our ways of debating, of making sense of the world, of arguing (constructively) and of informing ourselves, are not working. And as long as that is the case, we cannot move any closer to solving the big challenges. We need to fix this, and quickly, so that we can then move on to the bigger issues.
All these problems are large and intimidating. They are, however, all man-made, and I believe we can solve them. We can solve them by first and foremost focusing our attention. Starting with small things like walking away from fake conflicts others create to distract us — walking away from conflict is a thing we can all just, y’know, do! It’s a choice we can make.
There are other (many other) factors that contribute to our global challenges, but this part, the part about media, information, the web: these are the issues I have the best grasp on. This is where I feel I personally can make some small contributions to nudge things in the right direction. So this is where I’ll continue to focus my attention for the foreseeable future. Maybe you’ll join in, too?
Notes: I deliberately stayed away from deep-linking out from the essay to keep it as clean as possible. That said, a lot of my thinking is informed and shaped by smarter folks including Cory Doctorow, Zeynep Tufekci, Hank Green, Ryan Broderick, Ezra Klein, Ethan Zuckerman, danah boyd, Charlie Warzel, as well as classics like Marshall McLuhan and Neil Postman, and many others. Thank you!
In the spirit of learning out in the open, I’ve been exploring various pieces of this not just in my work, but also on my blog for a while. This is the first time I’ve put them together into a coherent picture (I hope). So for a glimpse at the thoughts that led me here, I’m including a few links to my own old blog posts: one, two, three, four, five, six, seven.
[^1]: Hank Green does a wonderful job laying this out, and I’m paraphrasing his work in this chapter. I cannot recommend watching his Vlogbrothers video enough.