Blog

Category Error: Tracking Ads Are Not a Funding Mechanism


This article is part of 20in20, a series of 20 blog posts in 20 days to kick off the blogging year 2020. This is 20in20:01.

A note to put this in perspective: This blog post doesn’t pull any punches, and there will be legitimate exceptions to the broad strokes I’ll be painting here. But I believe that this category error is real and has disastrous negative effects. So let’s get right to it.

I’ve been thinking a lot about category errors: the error of treating an issue as belonging to a certain category of problems even though it primarily belongs to another, meaning that all analysis and debate will by necessity miss the intended goals until we shift the debate into a more appropriate category.

I increasingly believe that it is a category error to think of online advertising as a means to fund the creation of content. It’s not that online advertising doesn’t fund the creation of content, but this is almost a side effect, and that function is dwarfed by the negative unintended consequences it enables.

When we discuss ads online, it’s usually in the framing of funding. Like this: Ads pay for free news. Or: We want to pay our writers, and ads are how we can do that.

To be clear, these are solid arguments. Because you want to keep news and other types of journalism as accessible to as many people as possible (and it doesn’t get more accessible than “free”). And you do want to pay writers, editors and all the others involved in creating media products.

However.

Those sentences make sense if we consider them in their original context (newspaper ads) as well as the original digital context (banner ads on websites).

There, the social contract was essentially this: I give my attention to a company’s advertisement, and that company gives the media outlet money. It wasn’t a terribly elegant trade in that there are still (at least) three parties involved, but it was pretty straightforward.

Fast forward a few years, and tracking of individual ads’ success gets introduced, which happens in steps: First, how many times is this ad displayed to readers? Then, how many readers click it? Then, how many readers click it and follow through with a purchase? There’s a little more to it, but that’s the basic outline of what was going on.
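Those three steps boil down to simple arithmetic. Here's an illustrative sketch (the numbers and function are hypothetical, not any ad platform's actual API), just to make the metrics concrete:

```python
# Illustrative ad-performance metrics. The three ratios mirror the
# historical steps: impressions -> clicks -> purchases.

def ad_metrics(impressions: int, clicks: int, purchases: int) -> dict:
    """Compute the basic success metrics early ad tracking introduced."""
    return {
        "click_through_rate": clicks / impressions,       # step 2: who clicks?
        "conversion_rate": purchases / clicks,            # step 3: who buys after clicking?
        "purchases_per_impression": purchases / impressions,
    }

# Hypothetical campaign numbers, purely for illustration:
metrics = ad_metrics(impressions=100_000, clicks=1_200, purchases=36)
print(metrics["click_through_rate"])  # 0.012
print(metrics["conversion_rate"])     # 0.03
```

Nothing in this sketch requires knowing anything about the individual reader, which is exactly what changes in the next step.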

Fast forward a few years again, and now we have a very different picture. Now, ads place cookies on readers’ devices, and track not just how often they’ve been displayed or clicked, or how often they convert to a purchase.

Contemporary tracking ads and their various cookies also do things like these:

  • Track whether a reader moves the cursor over the ad.
  • Track which website a reader comes from and where they head afterwards.
  • Track the whole journey of any user through the web.
  • Track the software and hardware of any reader’s devices and create unique fingerprints to match with other profiles.
  • Match movement through the web with social media profiles and activities and likes and shares and faves.
  • Track movement of the reader in the physical world.
  • Build profiles of readers combined from all of this and then sell those in aggregate or individually to third parties.
  • Allow extremely granular targeting of advertisements to any reader, down to the level of an ad personalized for just one person.

And so on.
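To make the fingerprinting point concrete, here is a deliberately simplified sketch of the idea: hashing a handful of device properties into a stable identifier. The property names are hypothetical, and real fingerprinting scripts collect far more signals (canvas rendering, installed fonts, audio stack, and so on), but the mechanism is the same:

```python
import hashlib

def device_fingerprint(properties: dict) -> str:
    """Hash a set of device/browser properties into a stable pseudo-ID.

    No cookie needed: as long as the properties stay the same, the same
    device produces the same fingerprint on every site running this,
    which is what makes cross-site tracking possible.
    """
    # Sort keys so the same properties always hash to the same ID.
    canonical = "|".join(f"{k}={properties[k]}" for k in sorted(properties))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical example properties:
fp = device_fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
})
print(fp)  # same inputs -> same 16-hex-character ID, on any site
```

The point of the sketch: the identifier is derived, not stored, so clearing cookies does nothing against it.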

This is nothing like the social contract laid out above, even though the language we use is still the same. Here, the implied social contract is more like this: I get to look a little at a website without paying money, and that website and its owner and everyone the owner chooses to make deals with gets to get an in-depth look at all my online behaviors and a significant chunk of my offline behaviors, too, all without a way for me to see what’s going on, or of opting out of it.

And that’s not just a mouthful, it’s also of course not the social contract anyone has signed up for. It’s completely opaque, and there’s no real alternative when you move through the web, unless you really know about operational security in a way that no normal person should ever have to master just in order to read the news.

This micro-targeting is also at the core of what might undermine and seriously threaten our political discourse (even though we won’t have reliable data on it until it’s too late). It allows anti-democratic actors of all stripes to spread disinformation (aka “to lie”) without oversight, and it’s been shown to be a driver of radicalization by matching supply with demand at a whole new scale.

Even if you don’t want to go all the way to this doomsday scenario, the behavior tracking and nudging that is supposed to streamline readers into just buying more stuff all the time (you probably know how it feels to be chased around the web by an ad for a thing you looked up online once) without a reasonable chance to stop it is, at best, nasty. At worst, illegal under at least GDPR, as a recent study demonstrated. It also creates a surprising — and entirely avoidable! — carbon footprint.

So, to sum up, negative externalities of tracking ads include:

  • micro-targeting used to undermine the democratic process through disinformation and other means;
  • breaches of privacy and data rights;
  • manipulation of user behavior, and negatively impacting user agency;
  • and an insane carbon footprint.

Of course, the major tracking players benefit a great deal financially: Facebook, who make a point of not fact-checking any political ads, i.e. willingly embrace those anti-democratic activities. Google, who are one of the biggest providers of online ad tracking and also own the biggest browser and the biggest mobile phone operating system, i.e. they collect data in more places than anyone else. And all the data brokers, the shadowiest of all shadow industries.

Let me be clear, none of this is acceptable, and it is beyond me that at least parts of it are even legal.

So, where does that lead us?

I argue we need to stop talking about tracking ads as if they were part of our social contract for access to journalism. Instead, we need to name and frame this in the right category:

Tracking ads are not a funding method for online content. Tracking ads are the infrastructure for surveillance & manipulation, and a massive attack vector for undermining society and its institutions.

Funding of online content is a small side effect. And I’d argue that while we need to fund that content, we can’t keep doing it this way, no matter what. Give me dumb (i.e. privacy-friendly, non-tracking) ads any day to pay for content. Or, if it helps keep the content free for others (both financially and free of tracking), then I think we should also consider paying for more news if we’re in any financial position to do so.

(What we shouldn’t do is just pay for our own privacy and let the rest still be spied on, that’s not really a desirable option. But even that doesn’t currently exist: If you pay for a subscription you’ll still be tracked just like everyone else, only with some other values in your profile, like “has demonstrated willingness to spend money on online content”.)

So, let’s recognize this category error for what it is. We should never repeat the statement that ads pay for content; they do not. (Digital ad spend goes up and up, but over the last 15 years or so newspaper revenue through digital ads has stayed pretty flat, and print collapsed.) Ads online today are almost completely tracking ads, and those are just surveillance infrastructure, period.

It’s surveillance with a lot of negative impact and a tiny bit of positive impact. That’s not good enough. So let’s start from there, and build on that, and figure out better models.

Monthnotes for December 2019


Like I wrote in the last monthnotes, November to January are blocked out for research and writing. That said, there was a bit of other stuff going on outside heads-down writing, most notably the annual ThingsCon conference.

I also wrapped up my Edgeryders fellowship. I’m grateful for the opportunity to pursue independent research into how we can make smart cities work better for citizens.

ONGOING WORK

I’ve submitted the final pieces of writing for a Brussels-based foundation. The final report should be out soon. This is roughly in the area of European digital agenda & smart city policy.

With a Berlin-based think tank, a research project is in the phase of final write-up of results and conclusions. This will likely take us well into January, then on to collect some more feedback on the final drafts. More updates when I have them. This is in the area of responsible AI development.

A more recent project around impact of smart cities on labor rights has kicked off in December. Lots of research and writing to do there well into January.

EVENTS

We held the annual ThingsCon conference in Rotterdam. There we also launched our report on the State of Responsible IoT (RIoT). The keynotes are available online, too. This event is one of my dearest; it’s a fantastic community. Thanks so much to our local team who makes this possible year after year.

WRITING & MEDIA

I’m mostly focused on writing my contributions to the research reports mentioned above. In the meantime, I was happy that Bruce Sterling mentioned the RIoT report on his blog, and Coda Story interviewed me for their list of Authoritarian Tech Trends 2020.

WHAT’S NEXT?

Lots of research and writing this month and next, so that means heads-down focus time for a little bit, splitting my time between writing and meetings to finalize those reports.

Edgeryders: My fellowship is a wrap


Earlier this year, Nadia E. kindly invited me to join Edgeryders (ER) as a fellow to do independent research as part of their Internet of Humans program. From June to December 2019 I was an ER fellow and had the opportunity to work with the lovely John Coate & Nadia, and the fantastic team and community there at ER.

This fellowship gave me that little extra wriggle room and a mandate to do independent research into smart cities and policy — read: how to approach smart city policy so that it works better for citizens. This allowed me to do a lot of extra reading, writing (for example, about smart city governance and a smart city model founded on restraint) and speaking, and it informed my work with foundations, think tanks, and policy makers during that time, too.

On the ER platform, there’s a very active community of people who’re willing to invest time and energy in debate. It allowed me to gather a bunch of valuable feedback on early ideas and thoughts.

As part of my fellowship, I also had the opportunity to do a number of interviews. John interviewed me to kick things off on the Edgeryders platform, and I interviewed a few smart folks like Jon Rogers, Ester Fritsch, Marcel Schouwenaar and Michelle Thorne (disclosure: my partner), all of whom do interesting and highly relevant work around responsible emerging tech: In many ways, their work helps me frame my own thinking, and the way it’s adjacent to my work helps me find the boundaries of what I focus on. If this list seems like it’s a list of long-time collaborators, that’s no coincidence but by design: Part of how ER works is by integrating existing networks and amplifying them. So fellows are encouraged to bring in their existing networks.

Some of these interviews are online already, as are some reflections on them. The others will be cleaned up and put online soon.

My fellowship technically ended a couple of days ago, but I’m planning to stay part of the ER community. Huge thanks to the whole community there, to the team, and especially to Nadia and John.

Thanks and Happy Holidays: That was 2019


This is end-of-year post #12 (all prior ones here).

What happened in 2019?

It was another intense year, but nothing compared to 2018. Just the new reality of having a kid combined with the old reality of two partners working in demanding and interesting jobs: There’s bound to be some friction, and so there is, and everyone in that boat knows how that can feel. That said, we’re lucky and privileged and so there’s really no reason to complain. If the world wasn’t literally burning right now there wouldn’t be any reason to worry.

The theme for 2019

Last year I wrote:

the theme was first and foremost impact. Impact through large partners, through policy work, through investments into research.

This essentially has stayed true for the third year in a row. I spent more time still on policy-related work — in fact I’d say my work has continued a pretty sharp turn upstream towards policy.

The space of emerging tech, responsible tech, and public policy has been maturing a lot, and those are all areas I’ve been working in for a long time. To see these three previously separate circles move closer together into a neat Venn diagram, so to speak, has been amazing. And it’s also given me a point of leverage for putting my expertise and experience to work, which I’m grateful for.

Friends & Family

A few close friends got married, some babies were born: In terms of friends and family it’s been a pretty good year. I’m ever grateful for the support of a handful of very close friends who’re there whenever I need them; thank you, you know who you are.
Now if only there were a little more time to spend with friends and family. Making that extra time goes straight onto the priority list for 2020.

Travel

For a few years now, I’ve been trying to cut down on travel, specifically on flights. It’s been working so-so: As best as I can figure out from Tripit and my calendar, I’ve gone on 17 trips for a total of 85 days of travel, which is about par for the course. Around 20 cities in 8 countries. It was still 20 flights. However — small victories! — only about 30,000 km total. Which sounds… a lot less? The year before it was more like 90,000 km. Either my accounting is way off, or it was the fact that I didn’t go to the West Coast a few times this year. Either way, I’ll take it.


Speaking & Media

Punditry time! There was a bit of that. The website has lists of talks and media mentions; the top-level stats are: 12 talks & panels (including the occasional chairing/hosting) and 16 media mentions, contributions, or profiles about me. An interesting mix of publications, too, including Fast Company, a WEF publication, CHI, and Tagesspiegel.

What’s been interesting for me is that a lot of my talks and panels were at policy events. So that’s a bit of a new world for me, in a sense.

Health

Been trying to get back to a healthier routine of regular workouts, with mixed success. Tried my hand at CrossFit, which I’ll revisit eventually but have shelved for now, and Pilates, which seems pretty promising as a way to ease back into it. Otherwise all good. Especially, we’re back to eating healthier after the hectic chaos that was 2018.

Work

Lots of reading and writing, lots of working at the intersection with the public policy work. I’ve been working with multiple foundations and think tanks on issues surrounding smart cities, AI, governance, tech policy. Some of this happens at the German level, much at the European or global level.

I’ve started my role as an industry supervisor in the OpenDoTT PhD program for responsible tech, and was a fellow at Edgeryders, where my research focused on smart city governance. Also, I went back on the jury for Prototype Fund.

ThingsCon is still going strong, we just had our 6th anniversary. As is my semi-personal newsletter Connection Problem, which I’ve been enjoying writing a lot.

Books read

29 books on my list, including some contemporary sci-fi (Kim Stanley Robinson, Eliot Peper, Tim Maughan) but also classics like Vernor Vinge. Some fiction, like Ted Chiang’s Exhalation and John Hodgman’s Medallion Status. Some more random stuff like So Many Books by Gabriel Zaid, Lost Japan by Alex Kerr, A Pattern Language, Fewer Better Things by Glenn Adamson and some Dalai Lama; How to Do Nothing by Jenny Odell, essay collections like McSweeney’s The End of Trust and — a bit surprising to me! — I found myself ripping through the full 8-book arc of The Expanse. Overall a quite enjoyable mix.

Also, still lots of unfinished ones in the “ongoing” folder that might or might not ever be finished.

Firsts & some things I learned along the way

Firsts: Indoor Skydiving. Waterball (Water zorb?). Learned some Ukulele basics. Baked bread. First time in Valencia. Took a boat to travel for business. Reconciled an old awkwardness. Applied for a job I was genuinely interested in. Looked at 20,000 year old cave paintings. Rode in two autonomous buses.

Learned: Prioritizing a lot more harshly. Identifying opportunities for leverage/impact better (I think).

So what’s next?

2020 will start with wrapping up a number of ongoing projects, and a new collaboration which feels pretty exciting. I look forward to continuing my work at the intersection of responsible tech & public policy, and finding the most effective points of leverage to really make sure my expertise can be meaningfully applied.

I also still would be pretty interested in helping a foundation partner set up an impact investment program at that intersection of emerging tech & public policy. Any leads would be highly welcome!

Otherwise it might be time again to spend a month or two working from abroad, but nothing’s set in stone yet.

Also, I’m always up for discussing interesting new projects. If you’re pondering one, get in touch.

For now, I hope you get to relax and enjoy the holidays.

Just enough City


In this piece, I’m advocating for a Smart City model based on restraint, and focused first and foremost on citizen needs and rights.

A little while ago, the ever-brilliant and eloquent Rachel Coldicutt wrote a piece on the role of public service internet, and why it should be a model of restraint. It’s titled Just enough Internet, and it resonated deeply with me. It was her article that inspired not just this piece’s title but also its theme: Thank you, Rachel!

Rachel argues that public service internet (broadcasters, government services) shouldn’t compete with commercial competitors by commercial metrics, but rather use approaches better suited to their mandate: Not engagement and more data, but providing the important basics while collecting as little as possible. (This summary doesn’t do Rachel’s text justice; she makes more, and more nuanced, points there, so please read her piece; it’s time well spent.)

I’ll argue that Smart Cities, too, should use an approach better suited to their mandate—an approach based on (data) restraint, and on citizens’ needs & rights.

This restraint and reframing is important because it prevents mission creep; it also alleviates the carbon footprint of all those services.

Enter the Smart City

Wherever we look on the globe, we see so-called Smart City projects popping up. Some are incremental, and add just some sensors. Others are blank slate, building whole connected cities or neighborhoods from scratch. What they have in common is that they are mostly built around a logic of data-driven management and optimization. You can’t manage what you can’t measure, management consultant Peter Drucker famously said, and so Smart Cities tend to measure… everything. Or so they try.

Of course, sensors only measure so many things, like physical movement (of people, or goods, or vehicles) through space, or the consumption and creation of energy. But thriving urban life is made up of many more things, and many of those cannot be measured as easily: Try measuring opportunity or intention or quality of life, and most Smart City management dashboards will throw an error: File not found.

The narrative of the Smart City is fundamentally based on that of optimizing a machine to run as efficiently as possible. It’s neoliberal market thinking in its purest form. (Greenfield and Townsend and Morozov and many other Smart City critics have made those points much more eloquently before.) But that doesn’t reflect urban life. The human side of it is missing, a glaring hole right in the center of that particular vision.

Instead of putting citizens in that spot in the center, the “traditional” Smart City model aims to build better (meaning: more efficient, lower cost) services to citizens by collecting, collating, analyzing data. It’s the logic of global supply chains and predictive maintenance and telecommunications networks and data analytics applied to the public space. (It’s no coincidence that the large tech vendors in that space come from one of those backgrounds.)

The city, however, is no machine to be run at maximum efficiency. It’s a messy agora, with competing and often conflicting interests, and it needs slack in the system: Slack and friction both increase resilience in the face of larger challenges, as do empowered citizens and municipal administrations. The last thing any city needs is to be fully algorithmically managed at maximum efficiency just to come to a grinding halt when — not if! — the first technical glitch happens, or some company ceases their business.

Most importantly, I’m convinced that depending on context, collecting data in public space can be a fundamental risk to a free society—and that it’s made even worse if the data collection regime is outside of the public’s control.

The option of anonymity plays a crucial role for the freedom of assembly, of organizing, of expressing thoughts and political speech. If sensitive data is collected in public space (even if it’s not necessarily personally identifiable information!) then the trust in the collecting entity needs to be absolute. But as we know from political science, the good king is just another straw man, and given the circumstances even the best government can turn bad quickly. History has taught us the crucial importance of checks & balances, and of data avoidance.

We need a Smart City model of restraint

Discussing publicly owned media, Rachel argues:

It’s time to renegotiate the tacit agreement between the people, the market and the state to take account of the ways that data and technology have nudged behaviours and norms and changed the underlying infrastructure of everyday life.

This holds true for the (Smart) City, too: The tacit agreement between the people, the market and the state is that, roughly stated, the government provides essential services to its citizens, often with the help of the market, and with the citizens’ interest at the core. However, when we see technology companies lobby governments to green-light data-collecting pilot projects with little accountability in public space, that tacit agreement is violated. Not the citizens’ interests but those multinationals’ business models move into the center of these considerations.

There is no opt-out in public space. So when collecting meaningful consent to the collection of data about citizens is hard or impossible, that data must not be collected, period. Surveillance in public space is more often detrimental to free societies than not. You know this! We all know this!

Less data collected, and more options of anonymity in public space, make for a more resilient public sphere. And what data is collected should be made available to the public at little or no cost, and to commercial interests only within a framework of ethical use (and probably for a fee).

What are better metrics for living in a (Smart) City?

In order to get to better Smart Cities, we need to think about better, more complete metrics than efficiency & cost savings, and we need to determine those (and all other big decisions about public space) through a strong commitment to participation: From external experts to citizen panels to digital participation platforms, there are many tools at our disposal to make better, more democratically legitimized decisions.

In that sense I cannot offer a final set of metrics to use. However, I can offer some potential starting points for a debate. I believe that every Smart City project should be evaluated against the following aspects:

  • Would this substantially improve sustainability as laid out in the UN’s Sustainable Development Goals (SDG) framework?
  • Is meaningful participation built into the process at every step from framing to early feedback to planning to governance? Are the implications clear, and laid out in an accessible, non-jargony way?
  • Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
  • Will it solve a real issue and improve the life of citizens? If in doubt, cut it out.
  • Will participation, accountability, resilience, trust and security (P.A.R.T.S.) all improve through this project?

Obviously those can only be starting points.

The point I’m making is this: In the Smart City, less is more.

City administrations should optimize for thriving urban life and democracy; for citizens and digital rights — which also happen to be human rights; for resilience and opportunity rather than efficiency. That way we can create a canvas to be painted by citizens, administration and — yes! — the market, too.

We can only manage what we can measure? Not necessarily. Neither the population nor the urban organism needs to be managed; they just need a robust framework to thrive within. We don’t always need real-time data for every decision — we can also make good decisions based on values and trust in democratic processes, and by giving a voice to all impacted communities. We have a vast body of knowledge from decades of research in urban planning, sociology, and many other areas: Often enough we know the best decisions, and it’s only politics that keeps us from enacting them.

We can change that, and build the best public space we know to build. Our cities will be better off for it.

About the author

Just for completeness’ sake so you can see where I’m coming from, I’m basing this on years of working at least occasionally on Smart City projects. My thinking is informed by work around emerging tech and its impact on society, and a strong focus on responsible technology that puts people first. Among other things I’ve co-founded ThingsCon, a non-profit community that promotes responsible tech, and led the development of the Trustable Technology Mark. I was a Mozilla Fellow in 2018-19 and am an Edgeryders Fellow in 2019-20. You can find my bio here.

Monthnotes for November 2019


November to January have essentially been blocked out for research and writing.

ONGOING WORK

My work with a Brussels-based foundation is in the final stages of editing. I expect the final report to be published within the month. Expect lots of smart city / digital agenda thinking there.

With a Berlin-based think tank, we’ve had another workshop on Europe’s capacity for developing ethical/responsible AI. This brings the research phase slowly to its end and now it’s time to synthesize and write up the findings. Probably to be published in January, or February at the latest.

A new project is ramping up with a global labor group to explore the intersection of smart cities and labor rights.

EVENTS

I was kindly invited to participate in Forum Offene Stadt, an event jointly hosted by Körber Stiftung and Open Knowledge Foundation. The event brings together civil society, government & administration, and the private sector. It’s good to see these gatherings as there’s still nowhere near enough exchange at those intersections, especially where topics around the impact of emerging technologies are discussed.

WRITING

Most of my writing these next couple of months goes into reports. But occasionally I whip up a blog post like The Tragedy of Future Commons, and also I think out loud over in my weekly newsletter, Connection Problem, as well as of course on Twitter.

MEDIA

Kompakt Magazin (by IG BCE) did a longer interview with me to talk about technology and how we must ensure it doesn’t discriminate against people, and shouldn’t be treated as if it’s pre-determined. The interview is in German: Technologie darf Menschen nicht diskriminieren (online, e-paper).

WHAT’S NEXT?

Lots of research and writing this month and next. But I’ll be more than happy to take a quick break from it all to head on over to Rotterdam (12/13 Dec) for ThingsCon, my all time favorite community event. Learn more about ThingsCon here and join us!

The Tragedy of Future Commons


I can’t help thinking that so many of today’s debates – from climate change to smart city governance and AI ethics – are so much more connected than we give them credit for. I might be projecting, but in my mind they’re just variations of one simple theme:

Do we narrow or broaden the future options space? In other words, will we leave the next generation, the public sector, or the other people around us more options or fewer? Do we give them agency or take it away? And how can it ever be OK to act in a way that takes away future generations’ options? That strips governments of their chances to deliver services to their citizens?

It’s essentially the Tragedy of the Commons as applied to the time axis: The Tragedy of Future Commons. And we can choose very deliberately to strengthen the commons (now and for the future), to strengthen future generations in the face of climate change (where we might have hit another tipping point), to strengthen city governments in their ability to govern and deliver services by not hollowing them out, and so on.

What actions that requires of us depends heavily on context of course: AI to be made with more participation and civil society involved so as to mitigate risks. Smart cities to prioritize public ownership and accountability so the city doesn’t lose its influence to the private sector. Climate change to be at the top of all our priority lists in order to give our future selves and future generations more and better options to shape their world and thrive in it.

Too often we’re stuck in debates that are based, essentially, in yesterday’s world. We need to realize the situation we’re in so as to avoid false choices. It’s not “climate or business”, it’s “climate or no business”. It’s not “climate or civil rights”, but “climate or no civil rights”. Radical changes are coming our way, and I’d rather shape them with intention and some buffer to spare rather than see them imposed on us like gravity imposed on Newton’s fabled apple.

So let’s aim for the opposite of the Tragedy of the Commons, whatever that might be called. The Thriving of the Commons?

And if you need a framework that’s decidedly not made for this purpose but has been holding up nicely for me, look to the Vision for a Shared Digital Europe (SDE) for inspiration. It lays out 4 pillars that I find pretty appealing: Cultivate the Commons; Decentralize Infrastructure; Enable Self-Determination; Empower Public Institutions. The authors drafted it with the EU’s digital agenda in mind (I was a very minor contributor, joining at a later stage). But I think it can apply meaningfully to smart cities just as much as it does to AI development and climate change and other areas. (Feel free to hit up the team to see how they might apply to your context, or reach out to me and I’ll be happy to put you in touch.) Those are good principles!

Note: This piece is cross-posted from my weekly newsletter Connection Problem, to which you can sign up here.