Is it time to re-start building the internet?

Note: This is cross-posted from my newsletter, to which you can subscribe here.


Here are some thoughts that have been kicking around my head for some time. They’re definitely related, but I’m not 100% confident about the conclusions just yet. So I figured I’d share them with you, and I would very much enjoy your thoughts. Just hit reply!


To set the stage: As you may or may not know, I believe that online behavioral tracking is at the heart of many societal issues, from platform dominance, to disinformation and election interference, to the undermining of trust in democratic institutions, to the crisis of journalism funding, just to name a few. I also believe that what today is called the “advertisement-based” funding model has very little, if anything, to do with what those terms meant in the early days of the internet. So that’s where I’m coming from.

At the moment — and really for the last few years — we’ve been hearing constant discussions about how to make sure the web doesn’t break: near-monopolistic platforms, aggressive recommender algorithms, toxic social media debates, you name it.


But — thought experiment time! — what if the internet is not at risk of breaking, or about to break, or in the process of breaking? What if the internet is already completely and utterly broken? What if this isn’t the time to save, but the time to rebuild?


One, a fundamental thesis that the internet was built around is basically that things are free and ad-supported. This has quietly and gradually been replaced by a new thesis, namely that every bit of our attention is tracked and redirected to facilitate commercial transactions. Both theses use very similar language (“you get free access to media and you will see ads”), but they are fundamentally different. They have very little in common besides the fact that there is an ad visible at the very last step; everything before that is completely and utterly different. The first model kinda-sorta worked, for a while, for many (but not all). The other model broke the web.

Two, I’d argue that openness isn’t gone from the internet. But the internet/web has been split into a profoundly open infrastructure level, on top of which profoundly closed systems have been built. Some of them are closed in a traditional proprietary model. Others are technically a little bit open but for all intents and purposes still closed, due to network lock-in or other mechanisms. What that means (I think?) is that from a user standpoint, the internet isn’t open at all anymore. But from a rebuilding standpoint, it’s actually quite open.


Since generative AI has exploded in popularity over the last year or so, it has been dominating the tech discourse in a way I hadn’t experienced in quite some time, if ever. I have very complicated feelings about generative AI, but most of them appear to be so common I don’t want to repeat them here. Instead, a few thoughts I’m trying to make sense of. To give credit where credit is due, Nilay Patel (of The Verge) has been on a roll with some excellent conversations with Hank Green and with Ezra Klein that sparked a bunch of this.

One, in the Marshall McLuhan sense, if the medium is the message, what is the message of generative AI? I think there are quite a few possible answers that point in different directions. Let’s try on a few:

  • Hypothesis: Generative AI fills the gap left behind when we remove the silly busywork often attributed to the bureaucracies of larger organizations.
  • Hypothesis: Generative AI is essentially middle management cosplay (“you deserve not to do the grunt work yourself, you too can delegate”).
  • Hypothesis: Generative AI does not (cannot?) evoke emotions; the awe it inspires is instead deeply post-rationalized, a sense of feeling obliged to marvel at technological progress.

Two, I genuinely think that there are perfectly fine use cases for generative AI. I think there are — or rather, will be — some real productivity use cases. More obviously, though, genAI can be useful for low-stakes uses: illustrations for invitations or blog posts or other things where you likely wouldn’t spend any money or time on imagery. Some text improvements on social media posts. Summaries of documents that aren’t really important. Or, as someone pointed out, illustrating your gaming group’s fantasy maps to make them more fun. Anything hobbyist or low-stakes. This is where genAI can be useful now, and maybe more useful soon.
That said, I think that for just about anything else, we’re barking up the wrong tree. If you’ve ever had AI summarize, or worse, write something in an area of your expertise, I bet you noticed how much worse it does. It doesn’t matter if factual errors (the famous hallucinations) can be reduced a bit over time: if the text is important and in your area of expertise, one error is one too many; five in a text can easily be a disaster. I don’t believe that we can solve these things through incremental improvements: as long as we’re talking about generative AI built inside the current school of thought, hallucinations are part of the feature set, not bugs that can be stamped out. (I might be wrong here; feel free to weigh in!)

Three, I think it’s an inherent problem that generative AI in its current form follows a Software-as-a-Service model, and as such is subject to constant change. Generative AI is infrastructure that constantly changes. That, plus the way genAI inherently produces different, somewhat unpredictable outcomes, makes it really hard to incorporate into workflows in a way that actually reduces the amount of time or attention you need to spend. It just shifts attention from the creative stage to the quality management stage. Which, to me personally, is deeply unattractive.


This last point takes us off on a slightly different tangent: I believe that AI agents (AI that can act on users’ behalf) are obviously coming. And just like with the rest of the things we’ve talked about above, there are two ends to the spectrum of the thinkable, right?
On one hand, they might give us powerful tools to take annoying tasks off our hands. On the other, they might just lead to a kind of AI agent arms race in which one agent tries to counteract the other. As a simple example — one of the classics — think travel bookings. User A instructs their AI agent to find a train and hotel using certain parameters as cheaply as possible. User B does the same, but uses a different AI agent. The travel sites have their own set of AI agents to optimize their own margins. You have two user agents battling it out with one another over a scarce resource (the cheapest options within the desired option space) and with the suppliers’ agents, which try to optimize for a different set of outcomes.
I could imagine a future where these things work, and work well, and it’s all nice. But frankly, very little in recent tech history makes me believe in this model. Instead, I can much more vividly imagine a slightly enshittified version in which we see aggressively commercialized variants of these AI agents that aim to be extremely transactional. In this case, I’d expect a pretty classist situation to emerge: the free, mass-market version is shitty, essentially paid for by (and in service to) big corporations, while expensive, maybe custom-tailored elite versions actually serve their high-spending clients. I’m using pretty exaggerated, cliché-ridden, almost comical language here on purpose, to illustrate the point that the incentives for those agents that are made to scale — and hence have to be cheaply if not freely available — would be completely off. They would be incentivized to nudge you toward those offers paid for by, well, whoever is paying for the placements. It would be like Google’s old “I’m feeling lucky” button, except it would jump straight to a paid-for transaction. It would be the worst of a tracking ad combined with the worst of frictionless e-commerce.


Now, taken together, this paints a bit of a dark picture, or at least I see how it might seem that way. But honestly, and I really mean that, I think it’s the opposite. The internet has had a good run for the last 20-30 years. That’s a full human generation! For a new technology, that’s huge! Maybe we’ve reached the end of one chapter and the beginning of a new one.

Maybe that’s decentralized in the direction of the less-sleazy parts of the Web 3 world.
Maybe it’s as simple as banning behavioral tracking and then rebuilding from there.
Maybe it’s something else entirely.

Whatever it is, building new things with the knowledge and experience gained from previous successes and failures is exciting and offers great opportunities. It might help to let go of the notion that there’s anything left over from Web 1 and Web 2 that needs defending. Maybe it’s ok to just start building something new, something that’s not dependent on legacy platforms. Something that, while it will undoubtedly have and bring problems of its own, would at least face new and more interesting challenges. Something better, healthier.

Anyway — some thoughts to mull over.