Thoughts on Tracking and Targeting

Over on Threads I had an exchange about targeted ads: are they better than generic ads? Is ad tracking really bad?

I see some real, relevant, and significant implications of behavioral tracking and targeted ads that are hugely problematic yet may not be obvious. So I’d like to lay them out for you to consider.

First, a few premises:

  • Are personalized ads more interesting than non-personalized ads? Often that is certainly the case. But I argue the ad itself isn’t the right level of analysis; my arguments are more systemic. I’ll explain below, but that’s important to keep in mind.
  • I subscribe to the political science school of thought that says that safeguards should, where possible and appropriate, be built in at the systemic level rather than relying on everyone behaving their best at all times. To give an example, I am for giving everyone health insurance rather than making it a personal choice, because public health systems don’t work as well if too many people fall through the cracks. I believe requiring drivers to have functioning blinkers and brakes makes sense because not having them puts others at risk.
  • In other words, and a little zoomed out, I believe that regulation is a key function of government: By setting a framework and installing guard rails against possible abuse, you protect the freedom and rights of citizens (as well as non-citizens inside your borders).

Now that that’s out of the way, let’s dive in.

I believe that the machinery used to track behavior online and to target personalized ads is directly connected to real, concrete harms and risks. Ad tracking & targeting is not a neutral technology.

That said, I acknowledge that these risks and harms take different forms, and ad tracking is often one out of several factors at play. Removing ad tracking & targeting would not single-handedly solve the issues I’m about to bring up, but it is one important factor.

So what are these risks and harms, you might ask? Let me go through a few, in more or less random order.

Ad tracking tied to engagement-based business models incentivizes bad behavior online.

The vaguest, if maybe most societally relevant, argument on this list. The current dominant business model around advertising is based on high engagement: The more engagement (clicks, shares, comments, etc.), the more impressions are generated. Each impression is one more spot of advertising inventory. In other words, the more people interact with a piece of content, the more ads can be served around it, and the more people are exposed to ads. That’s great for the platforms and data brokers. It’s also the reason we see so much outrage on social media: This is quite literally where platforms make their money. It doesn’t even matter much where exactly a platform draws the line: The content that gets closest to crossing that line without being removed will attract the most engagement. Think I’m just playing armchair philosopher? Think again – Facebook has recognized this as a big enough problem to warrant a dedicated policy (“demote borderline content”) around this:

In a 5000-word letter by Mark Zuckerberg published today, he explained how there’s a “basic incentive problem” that “when left unchecked, people will engage disproportionately with more sensationalist and provocative content. Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.”
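
To make the incentive problem concrete, here is a minimal, purely illustrative Python sketch of what an engagement-optimized feed ranker boils down to. All names, weights, and fields are hypothetical and not taken from any real platform; the point is only that the objective contains engagement terms and nothing about accuracy or harm.

```python
# Purely illustrative sketch of the incentive structure described above.
# All names, weights, and data are hypothetical; this does not reproduce
# any real platform's ranking code.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float    # model's estimate of clicks per impression
    predicted_shares: float    # estimate of shares per impression
    predicted_comments: float  # estimate of comments per impression

def engagement_score(post: Post) -> float:
    """Rank purely by predicted engagement: every interaction creates
    more impressions, and every impression is another ad slot."""
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing in this objective asks whether a post is accurate or decent;
    # whatever drives the most interactions rises to the top.
    return sorted(posts, key=engagement_score, reverse=True)
```

In this toy objective, borderline, provocative content wins by construction, because it maximizes the only terms the ranker knows about: interactions, and thus ad impressions.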

So, ok, bad behavior. What else?

Tracking ads play a significant role in misinformation campaigns.

Misinformation campaigns and (to a lesser degree) political micro-targeting weaken public opinion, political processes and, in consequence, democracies. How bad is this really?

Here I’d like to quote Alexandre Alaphilippe of the EU DisinfoLab, who builds on Camille François’ well-respected ABC framework (Actors, Behaviors, Content: A Disinformation ABC). Highlights are mine:

The role and importance of distribution has grown in recent years, as recommendation systems and paid advertising have promoted conspiracy theories and falsehoods on YouTube, fed filter bubbles based on racial biases on TikTok, and encouraged emotional changes in Facebook users.
Disinformation actors have benefitted from abusing these vulnerabilities to gain online visibility and financial sustenance. Research from the EU DisinfoLab found that Facebook’s recommendation algorithm, for example, promoted pages posting disinformation authored by a French white supremacist group, allowing the group to boost its number of followers to nearly a half million.
Online political ads in electoral campaigns play a similar role, allowing voters to be directly microtargeted with disinformation. In one instance, what appears to have been a Ukrainian individual with ties to Russia impersonated prominent French politicians to target French audiences on Facebook with disinformation amplified by advertisements. The example clearly demonstrates that foreign stakeholders can exploit paid advertising to significantly extend the reach and influence of their disinformation campaigns.
My takeaway here is that influence and disinformation campaigns increasingly focus on content distribution rather than creation, and that algorithmic recommender systems play an increasingly important role in their work. These recommender systems are, in turn, powered by behavioral tracking. They use the infrastructure that is intended to serve tracking ads because, at least from a technical point of view, misinformation is indeed indistinguishable from other ads: Just another piece of content delivered to exactly the right person at exactly the right time.

Some more thoughts and pointers

The two points above are concrete harms we see today. I’d argue that it’s safe to assume that all of this is a real contributor to the rise of populism and, in consequence, a contributing factor in the demise of democracy, period. I’d also like to point you to this older blog post where I look at some research data about the impact of recommender algorithms on social media and news sites that focus on eliciting strong emotions because… engagement, and how that really took off the year Facebook started to lean into algorithmic content recommendations based on… you know where I’m going with this… behavioral tracking.

A big problem in even discussing these issues is that we still use the same language we used to describe advertising in TV or newspapers: A simple social contract in which we traded some share of our attention for cheaper access to news and entertainment. But this hasn’t been true for a long time. We don’t trade just attention anymore. Instead, our behavior online — and not just within one site or app! — is monitored heavily pretty much at all times. There’s no meaningful opting out of this for anyone.
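
To illustrate what “not just within one site or app” means mechanically, here is a small, hypothetical Python sketch of cross-site tracking. The sites, identifiers, and events are all invented; the only point is that a single shared identifier (a third-party cookie, a device fingerprint, or similar) is enough to stitch visits to unrelated sites into one behavioral profile.

```python
# Toy illustration of cross-site behavioral tracking. Sites, identifiers,
# and events are invented; the point is only that one shared identifier
# links visits to unrelated sites into a single profile.

from collections import defaultdict

# The same tracker is embedded on many unrelated sites (via pixels, SDKs,
# ad tags). Each embed reports the visit together with a shared identifier.
events = [
    {"tracker_id": "u-4821", "site": "news.example",   "page": "/politics/article-123"},
    {"tracker_id": "u-4821", "site": "shop.example",   "page": "/running-shoes"},
    {"tracker_id": "u-4821", "site": "health.example", "page": "/symptoms/insomnia"},
    {"tracker_id": "u-9044", "site": "news.example",   "page": "/sports/match-report"},
]

def build_profiles(events):
    """Group events by the shared identifier to form per-person profiles."""
    profiles = defaultdict(list)
    for event in events:
        profiles[event["tracker_id"]].append((event["site"], event["page"]))
    return profiles

if __name__ == "__main__":
    for person, visits in build_profiles(events).items():
        # One identifier, many unrelated sites: a behavioral profile the
        # visitor never handed to any single one of those sites.
        print(person, visits)
```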

Which leads me to believe that ad tracking and targeting, as it is commonly practiced today, should not even be considered a legitimate funding mechanism: I think we might be committing a category error. There is a version of ad targeting that I would consider entirely harmless and legit: Not personalized, tied to the content you’re looking at, without any surveillance attached. But this isn’t what the web looks like today. And I think that’s a big problem.
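
For contrast, here is a sketch, again with entirely hypothetical ads, keywords, and profile fields, of the difference between the contextual targeting I’d consider harmless and the behavioral targeting I’m criticizing: the first only needs the page being viewed, the second needs a person-level profile that can only exist because of tracking.

```python
# Sketch contrasting the two targeting models discussed above.
# All ads, keywords, and profile fields here are hypothetical.

def contextual_ads(page_keywords: set[str], ads: dict[str, set[str]]) -> list[str]:
    """Contextual targeting: match ads to the page being viewed.
    Needs no information about the person at all."""
    return [ad for ad, topics in ads.items() if topics & page_keywords]

def behavioral_ads(user_profile: dict, ads: dict[str, set[str]]) -> list[str]:
    """Behavioral targeting: match ads to a profile of the person,
    built from tracking their activity across many sites."""
    interests = set(user_profile.get("inferred_interests", []))
    return [ad for ad, topics in ads.items() if topics & interests]

ads = {
    "hiking-boots-ad": {"hiking", "outdoors"},
    "mortgage-ad": {"home", "finance"},
}

# Contextual: the page is about hiking, so show the hiking ad.
print(contextual_ads({"hiking", "trails"}, ads))

# Behavioral: the ad follows the person rather than the page, and the
# profile only exists because their behavior elsewhere was tracked.
print(behavioral_ads({"inferred_interests": ["finance", "travel"]}, ads))
```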

There is also a simpler, more principles-based argument here: You shouldn’t put infrastructure in place that enables large-scale problematic behavior unless its benefits far outweigh its risks. Should we have a global information ecosystem (the internet), even though there will be problematic behavior? Yes, because it has incredible benefits. Should we have an elaborate surveillance system that tracks our behavior across the web, that nobody can opt out of, and let that infrastructure replace previous funding mechanisms for news and entertainment? Hell no.
