Is The AGI Hype Simply A Delaying Tactic?
In a few years' time, what follows may come back to haunt me (I doubt it, but maybe), yet it seems blindingly obvious to me that we are nowhere near AGI. The people running the likes of OpenAI and Anthropic are far smarter than I am, and they work with the technology every day, so why don't they see that?
Then it hit me: they know full well that AGI is probably not achievable. Even if it is possible, it will not materialise for many years to come. The problem is they need a blue-sky vision to keep the funding flowing while they search for the killer use case.
AI has been incorporated into systems for years; Google RankBrain (2015) is just one example. But it was not until LLMs reached the general public's consciousness that the hype about rapid progress towards AGI started to grow.
Now, I am not suggesting the motives behind those CEOs promoting AGI are sinister. I think they truly believe in the technology but are acutely aware their window of opportunity is short. There is no killer use case, and until there is, their numbers don’t stack up.
We are accelerating towards the end of the AI bubble far faster than the DotCom bubble. The internet boom started around 1995; the bubble burst around 2000. The release of ChatGPT (built on GPT-3.5) in late 2022 started the AI boom (in reality, the LLM boom), and here we are in late 2024 with rumblings already of the bubble bursting.
The internet was foundational; LLMs are not. They offer significant benefits and advancements but operate atop the foundational layer provided by the internet. The fundamental use case of the internet was obvious, although later developments were not.
To be clear, I use LLMs a lot; they save me significant time on many tasks. They have made some of the apps I previously used obsolete. I use NotebookLM (fantastic), currently for free. I pay a massive monthly fee (£20) to use ChatGPT (again, fantastic). I pay low monthly fees for a couple of other deep-learning-driven tools.
The ChatGPT user base appears significant, but how many are paid subscribers? I don’t know. What’s the churn? Again, I don’t know, but I suspect it is high. There are API fees and enterprise applications. The total projected revenue for ChatGPT this FY is around $4 billion.
Where’s the growth? Sorry, but unless something else comes along, I don’t see it. Let’s say ChatGPT gains another 10 million users; that’s roughly another $2.5 billion in revenue, per annum - not bad. They could ramp up subscription costs, but I doubt that would fly.
There are API fees, which I suspect will be a major growth area, but their current revenue appears to lag that of subscriptions. Let's be generous and add a further $2.5 billion in revenue there. Enterprise, if Copilot is a valid example, is not going well. Let's stick a finger in the air and show growth of $1 billion there. The total projected revenue for next FY then is $10 billion.
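The back-of-envelope arithmetic above can be sketched out explicitly. All figures here are the rough guesses from this article, not real OpenAI financials:

```python
# Back-of-envelope projection of next-FY revenue, in billions of USD.
# Every number below is this article's rough estimate, not reported data.
current_revenue = 4.0       # projected ChatGPT revenue this FY

subscription_growth = 2.5   # ~10M new subscribers at roughly $20-25/month
api_growth = 2.5            # generous guess for API fee growth
enterprise_growth = 1.0     # finger-in-the-air enterprise growth

projected = current_revenue + subscription_growth + api_growth + enterprise_growth
print(f"Projected next-FY revenue: ${projected:.1f}B")  # → $10.0B
```

Even under these generous assumptions, the total barely reaches $10 billion, which is the figure the cost comparison below works against.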
In 2024, it is estimated OpenAI will spend over $5 billion on training and inference alone. Then there is the spending on infrastructure expected to be in the low billions of dollars. Around $10 billion in revenue is not going to work.
AI agents won’t cut it; I think the market has seen through that one already. Given a few years, they might be great, but today, no. How do you define what is and what is not an agent anyway?
Of course, it’s true that nobody could have predicted the applications enabled by the internet infrastructure. Perhaps the same is true for LLMs. Companies like OpenAI and Anthropic had better hope the AGI hype holds up just long enough for significant use cases to emerge, or we may never know.
If there is an AI (LLM) bubble, what happens to the infrastructure? That’s what I will return to next.



Is there any use for millions of high-powered GPUs?
There will be many useless data centres in the wrong places.
LLMs do not need to be near their users; almost all other technologies do. LLM response latency is on the order of tens of seconds, while most other applications must respond within one to three seconds. Communication latency therefore doesn't matter much for LLMs, but it is critical for almost everything else.