There’s a German figure of speech: plague or cholera? It’s the equivalent of the English phrase between a rock and a hard place, but isn’t this just so much richer? Anyway, you’ll see where this goes.
OpenAI, almost certainly the most prominent AI company in the world right now, is in shambles. Or maybe not! It’s hard to tell, really. They’re the Schrödinger’s cat of AI companies. Within a few days they fired, then re-hired, their CEO Sam Altman in what appears to be a power struggle between different ideological groups within the AI scene.
And while I usually don’t share internet hot takes, two very internet newsletters, Garbage Day and Today in Tabs, just posted takes that made me chuckle and that might actually be on to something deeper: the growing cult-like philosophical rift between Effective Altruism (EA) and Effective Accelerationism (e/acc), both of which appear hugely influential in Silicon Valley right now, especially among the AI folks.
From Garbage Day (22 Nov 2023, highlights mine):
This is where you ask if any of this matters. And the answer is, unfortunately, yes. Groups like Remilia and BRG are one half of an extremely stupid ideological battle tearing apart Silicon Valley — and OpenAI, specifically — right now. And before last Friday, the idea that the biggest names in tech could read too many blog posts and end up developing what are essentially two competing religions and could then be willing to blow up billion-dollar companies in defense of those religions was, honestly, too stupid to believe. And yet, here we are…
Which means it’s worth understanding what these two sides want. And it essentially comes down to speed. On one side is effective altruism, or EA, and on the other is effective accelerationism, or e/acc, which is mainly what BRG is promoting on TikTok right now.
The altruists, which include folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately. Of course, as is the case with everything in Silicon Valley, all of this is predicated on the unwavering belief in its own importance. So it’s very possible that if we were to take the actually longtermist view of all of this, we’d actually end up looking back at this whole thing as a bunch of weird nerds fighting over Reddit threads.
From Today in Tabs (23 Nov 2023, highlights mine):
Imagine I awoke one morning from troubled dreams to find myself transformed in my bed into someone with an idea for how, if a global mega-corporation gave me tens of billions of dollars, and if I could collect a thousand of the smartest scientists, engineers, and coders who have ever lived, and if I had access to quantities of energy and computing power that would stretch the limits of what is technically imaginable in the next decade, I could build a machine that would relentlessly turn every atom of matter in the universe into paper clips. I feel like I would… simply not do that. Should I devote a huge amount of expense and effort to accomplish something objectively terrible? It’s not even a tough call for me, a person with what I flatter myself is a normal working brain.¹
But however implausible it is, imagine that instead of thinking “lol no?” someone had that idea and thought “lol, I better build this machine so that rather than paper clips, it will turn every atom of the universe into something positive for humanity, like ice cream.” This is incredibly stupid and abstrusely metaphysical, but also a pretty accurate overview of today’s business news. Yes, regrettably, I’m talking about OpenAI.
(…)
Altman has had a career that any Silicon Valley exec would absolutely kill for, and he hasn’t accomplished a single worthwhile thing. He founded one failed company, ran a factory that bought predatory amounts of equity for virtually nothing from every Stanford dropout with an idea for software to replace something Mommy used to do, then founded a company to build a product that he himself believes could eventually destroy humanity. He managed to do such a bad job organizing that company that he nearly got it taken away from him by the guy who runs the Christian babies Q&A site and Joseph Gordon-Levitt’s wife before he was rescued by a nightmare blunt rotation of Microsoft, Marc Benioff’s Number One Boy, and Larry Fucking Summers. And the Quora guy even got to stick around!
Ultimately, as Ryan Broderick explained today, all of this nonsense is an epiphenomenon of a dispute between two offshoots of the LessWrong “rationalist movement” which have become AI-focused religions known as EA and e/acc. EAs (“effective altruists,” and yes these are the same nitwits who ran FTX into the ground) believe that AI is existentially dangerous and they must carefully build the infinite ice cream machine lest bad actors build the infinite paperclip machine instead, while e/acc (“effective accelerationists”) believe that who knows, maybe paperclips are cooler than everyone thinks and furthermore lmao, yolo. Both sides are deeply insane, and from a strict longtermism pov the best option is to abandon all of them on a remote island and see how they’re getting along in a thousand years or so.
Now it’s easy to dunk on the OpenAI folks as well as the EA and e/acc folks in general. Also, since they’re hugely powerful and by and large wealthy enough not to be the victims here, it seems fair game to dunk on them. But that’s not the point.
The point is that it’s helpful not to think of OpenAI specifically, and the prominent faces of EA and e/acc generally, as just disconnected dots that exist in a vacuum. Rather, there are guiding philosophies in play – strongly held (as far as we can tell) beliefs shared by large parts of an industry that is itself becoming extremely influential in the world. (I say in the world to mean it’s not a self-contained industry but has real, meaningful impact societally, politically, economically.)
All of this is noteworthy because the absolute number of people involved here is comparatively small! A handful, or maybe a few dozen, investors and founders is all it takes right now to significantly influence the direction tech and Silicon Valley take: There’s an almost ridiculous concentration of wealth and power among just a few people who struck gold early on and leveraged that gold into influence.
Think of Musk, Andreessen, Zuckerberg, Altman, Thiel: They’re influential, well-resourced, well-connected and increasingly political. They also appear increasingly untethered.
All of this is of course a false choice. Surely, a better way of doing AI is possible.
Anyway: EA, e/acc, very internet hot takes. Enjoy.