AI has an alignment issue, but it’s not what you think.
Alignment is famously tricky with AI, and it is one of the most-discussed topics among AI critics, especially those critics who are also heavily involved in creating AI. In that context, it refers to getting AI models to not just solve a task, but to solve it in a way that is consistent with your (the company’s) values, like, for example, basic safety for humans. (IBM has an accessible definition.) To use an extreme and blunt example: if an AI were tasked to stop global warming, it might suggest simply removing all the humans; that would get the job done (solve the task), but not in a way that is aligned with the intent (solve climate change while preserving human life).
Whenever money runs low, tech gets us into trouble
As I see it, there’s a much more obvious alignment issue with AI as we see it today, and it kicks in way before we even get to the technology: ill-aligned business models. AI today, especially generative AI, is pretty much a money-losing proposition: lots of potential, but not enough money to be made, at least for now.
(Sidetrack: To get a better impression of the scale, see Exponential View’s notes on the $500 billion annual revenue gap between infrastructure investment and earnings. OpenAI CEO Sam Altman once described the compute costs for ChatGPT as “eye-watering”; every time someone actually uses a service, it incurs costs for the company, so the more users, the higher the losses. And this is just for running the service. Training new models is getting ridiculously expensive, and we don’t yet know how much it will cost to acquire/license/generate training data going forward, as copyright lawsuits are ongoing. From the outside, it doesn’t currently look like there’ll be a break-even point anytime soon.)
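To see why “the more users, the higher the losses” holds, here’s a back-of-the-envelope sketch in Python. Every number in it is an invented placeholder (the real figures aren’t public); the only point is the shape of the math: when the per-user margin is negative, scale multiplies the loss.

```python
# Back-of-the-envelope unit economics for a generative AI service.
# All numbers are invented placeholders for illustration, not real figures.

subscription_price = 20.00  # hypothetical monthly subscription, in $
queries_per_user = 1_000    # hypothetical queries per subscriber per month
cost_per_query = 0.03       # hypothetical inference (compute) cost per query, in $

revenue_per_user = subscription_price                # $20.00
cost_per_user = queries_per_user * cost_per_query    # $30.00
margin_per_user = revenue_per_user - cost_per_user   # -$10.00

print(f"Margin per subscriber per month: ${margin_per_user:.2f}")

# With a negative per-user margin, growth makes things worse, not better:
for users in (10_000, 100_000, 1_000_000):
    loss = users * margin_per_user
    print(f"{users:>9,} users -> {loss:,.0f} $/month")
```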
Once the venture capital runs low (which tends to happen first slowly, then very quickly, as follow-on investments fail to materialize), AI companies will be squeezed to monetize hard and fast. And the only obvious business models are based on incentives that point towards quite undesirable outcomes.
It’s the business models, stupid!
Let’s assume investors turn up the heat on AI companies to monetize, to aim for profitability. What options are there, especially if we look at the most-hyped area right now (generative AI) and the one I personally think might be the next hype right after (AI agents)? For the former, the options are fairly obvious; for the latter, things are a little more speculative.
Generative AI’s potential business models
- Subscription models for generative AI tools like Midjourney and ChatGPT are pretty straightforward. They are, however, estimated not to be profitable. There may be some areas where professional tools could run at a profit, say a copywriting tool for agencies or the like: enterprise pricing might work, and wouldn’t be too problematic. But it’s a niche. Possibly a large niche, but still.
- Integration of AI tools into existing tool suites like Microsoft Office or Google Docs might generate some licensing fees, but it’s entirely unclear whether this would be a sustainable business. I doubt it; the big companies will just absorb the tools at some point (and in the case of Microsoft, it already holds a large stake in OpenAI anyway).
- Take social/companion chats, one of the hot areas for applying generative AI. Besides potentially being a bit sleazy, these services would most likely make money from subscriptions, which requires them to keep users engaged. Engagement-based business models were at the root of many of the issues that got us into trouble with social media, as was running targeted ads against the content discussed on the platform. So we might be about to repeat the same mistakes we made with social media platforms. (Let’s not.)
AI agents’ potential business models
AI agents, or more correctly, agentic AI tools, refer to AI tools that can understand complex instructions and solve complex problems that require taking action on the user’s behalf outside of a controlled environment. Think of a travel booking AI agent that you task with booking trains from A to B in second class and a boutique hotel near train station B for under $200, or something similarly complex.
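For the technically minded, the basic mechanics can be sketched in a few lines of Python. Everything here is hypothetical (stubbed tools, a stand-in for the model call); it only illustrates the control loop: the model plans, calls tools on the user’s behalf, observes the results, and repeats until the task is done.

```python
# Minimal sketch of an "agentic" loop. The tools are made-up stubs and
# `llm_step` stands in for a call to a language model; real agent frameworks
# are far more elaborate, but the shape of the loop is the same.

def search_trains(origin: str, destination: str, travel_class: str) -> list[dict]:
    """Hypothetical tool: query train connections (stubbed for illustration)."""
    return [{"train": "ICE 123", "price": 49.00, "class": travel_class}]

def search_hotels(city: str, max_price: float) -> list[dict]:
    """Hypothetical tool: query hotels near the station (stubbed)."""
    return [{"hotel": "Hotel am Bahnhof", "price": 180.00}]

TOOLS = {"search_trains": search_trains, "search_hotels": search_hotels}

def run_agent(task: str, llm_step) -> dict:
    """Loop until the model declares the task done.

    `llm_step` takes the task and the observations so far and returns either
    a tool call ({"tool": ..., "args": {...}}) or a final answer
    ({"done": True, "result": {...}}).
    """
    observations = []
    while True:
        action = llm_step(task, observations)
        if action.get("done"):
            return action["result"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[action["tool"]](**action["args"])
        observations.append({"tool": action["tool"], "result": result})
```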
Now, AI agents are still in their very early stages, so we’re in a much more speculative range here. Still, I’ll share some considerations, based on two assumptions:
- AI companies will try/need to monetize aggressively;
- If in doubt, we’ll be presented with the Budget Airline Deployment (B.A.D.) version of any product online, which, while technically fulfilling the job at hand, will do so in the shabbiest way possible in order to get users to spend more money on the better version.
This would lead us, almost unavoidably I think, to scenarios like the following, which for simplicity’s sake I’ll illustrate with the travel agent example:
Scenario 1: Freemium AI agents. The user’s free or freemium travel booking agent searches travel and accommodation options across a number of pre-listed (i.e. paying) partner platforms and “negotiates” the lowest possible deal. “Lowest possible” here means a choice among pre-listed options controlled by the partner companies, i.e. possibly artificially inflated prices if the service running the AI agent is large enough. It would then also aggressively upsell the user on top of these booking options. It would be like booking with a budget airline, just for everything. The worst of both worlds. The interests of the user and the AI agent (or its parent company) aren’t aligned at all; it just happens to be a new interface, one that obscures the underlying transactions even more than other booking systems.
Scenario 2: Premium AI agents. Users in the upper range of premium services buy access to higher-end, more independent AI agents. If you drive a Mercedes, this is the AI you’re likely to use. That AI agent accesses a wider range of booking services and isn’t limited to pre-selected commercial partners, or at least less so. While it costs more upfront, it can negotiate across a wider range of options, and negotiate better. It’s also largely opaque, but it is more closely aligned with the user’s interest. Here we see less of an alignment issue and more of a social/class issue: the better-aligned agent is accessible only to a much smaller subset of the population. AI reinforces societal inequalities.
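To make the misalignment concrete, here is a toy Python sketch of the two scenarios. The scoring function, weights, and offers are all invented; the point is simply that the same “find me a hotel” request produces different bookings depending on whose interests the agent’s objective actually encodes.

```python
# Toy sketch of the two scenarios, assuming a hypothetical booking agent
# that ranks offers with a scoring function. All values are invented.

def score_offer(price: float, commission: float, commission_weight: float) -> float:
    """Higher is better from the operator's point of view.

    commission_weight = 0.0 means the agent optimizes purely for the user
    (lowest price); higher values mean partner commissions tilt the ranking.
    """
    return -price + commission_weight * commission

offers = [
    {"hotel": "Independent Inn", "price": 140.00, "commission": 0.00},
    {"hotel": "Partner Palace",  "price": 185.00, "commission": 30.00},
]

# Scenario 1: freemium agent, monetized through partner commissions.
freemium = max(offers, key=lambda o: score_offer(o["price"], o["commission"], 3.0))

# Scenario 2: premium agent, paid by the user, so it optimizes for the user.
premium = max(offers, key=lambda o: score_offer(o["price"], o["commission"], 0.0))

print("Freemium agent books:", freemium["hotel"])  # Partner Palace
print("Premium agent books: ", premium["hotel"])   # Independent Inn
```

In the freemium case, the commission term outweighs the user’s price, so the agent books the pricier partner hotel; strip that term out, as the premium agent does, and the cheaper independent option wins.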
These examples are a little blunt and oversimplified, but I think the underlying dynamics are likely to play out along these lines.
And frankly, this isn’t the AI utopia we were promised, nor is it the world I’d like to see. We can do better, and we must do better.
In order to do so, we need to cut through the hype and be honest about what AI can do: what it can do today, and what it plausibly can do in one year, or in five. As long as the “real” version keeps moving just beyond the horizon, we cannot have a debate about the real implications of AI. But we know that venture capital will run low eventually, because this isn’t new; it comes in cycles, and this time will be no different. Until then, we should be wary of putting all our proverbial eggs in one basket and rather work with what we know to be true: That today, AI is a money-losing proposition. That AI tools can work beautifully in some areas and pretty badly in others: good for generating certain types of written and image content, bad for the environment, most likely bad for journalism, and so far pretty mediocre, if not bad, for search.
Until we solve the business model alignment issue, the other pieces cannot fall into place.