Summary: Wall Street is openly questioning whether artificial intelligence will ever make money, and academics are scrutinizing the progress AI companies claim to make. By now, calling the AI boom a bubble is hardly contrarian. But are bubbles bad? And if so, why?
Everything is cyclical. History repeats itself, with variations. The details change, but the theme often stays the same.
During the Dot-com era, investment in internet-based companies skyrocketed. At the core was a disruptive new technology: the World Wide Web. Companies with a .com suffix could count on high valuations. They prioritized growth over profitability in the hope of capturing market share, leading to high burn rates and unsustainable business models. In 2000, the bubble burst. Thousands of dot-com companies failed, trillions of dollars of market value were wiped out in just a few months, and only a handful of companies like Google and Amazon survived.
Many folks have argued in recent months (more eloquently than I ever could) that generative AI is another tech bubble in the making. I’d take it one step further and say: if AI turns out to be only half as impactful as the Internet, the chances we are in the midst of a bubble are about 99.99%.
Hear me out.
Why bubbles are bad
Contrary to popular belief, calling something a bubble is not the same as calling it useless. A bubble is simply a period of inflated expectations.
At the heart of every tech bubble is a new, emerging technology, often poorly understood by investors. Real or manufactured FOMO leads to overinvestment in companies whose business models aren’t sustainable in the long run. The bubble bursts when reality catches up.
One might ask: why are bubbles bad? This is a terrific question. In his paper Why the AI Hype is another Tech Bubble, Luciano Floridi explains:
“A tech bubble typically culminates in a market correction or crash, when reality fails to meet inflated expectations, leading to a rapid decline in asset values and often resulting in significant financial losses for investors, the failure of many companies within the sector, and ultimately an overreaction in terms of financial disinvestment and social disappointment. Tech bubbles are destructive in the short term. They are also wasteful in the medium-term, since the outcome is often an overreaction, rather than a reasonable adjustment.”
When it comes to AI, overspending now could trigger exactly this overreaction: a crash, a pullback in funding, and an extended period of little to no progress, better known as an ‘AI Winter’.
Expectations
It doesn’t help that today’s AI is really expensive to train. Companies need to collect exponentially more data and burn exponentially more compute to achieve linear returns.
To be clear, we’re not talking about financial returns here, but about a handful of AI benchmarks. Benchmarks that have become the sole measure of progress, even though academics have pointed out they can easily be gamed and don’t necessarily translate to real-world use cases.
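To make that concrete, here is a minimal Python sketch of the dynamic, assuming the kind of power-law relationship between training compute and test loss that scaling-law studies report (the exponent below is an assumed, illustrative value, not a measurement from any particular model):

```python
# A toy illustration of "exponentially more compute for linear returns".
# Scaling-law studies (e.g. Kaplan et al., 2020) report that test loss
# falls roughly as a power law in training compute. The exponent here
# is an assumed, illustrative value, not measured from any real model.

ALPHA = 0.05  # assumed scaling exponent: loss ~ compute ** -ALPHA


def compute_needed(target_loss: float) -> float:
    """Invert the power law: relative compute required to reach a given loss."""
    return target_loss ** (-1 / ALPHA)


# Each *linear* step down in loss demands a *multiplicative* jump in compute:
# under this toy model, every 0.1 drop costs roughly 15-40x more compute.
for target in [0.8, 0.7, 0.6, 0.5]:
    print(f"loss {target:.1f} -> relative compute {compute_needed(target):,.0f}")
```

Under an assumption like this, each successive benchmark milestone multiplies the compute bill rather than adding to it, which is exactly why the spending curve looks the way it does.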
The believers are convinced all of this upfront investment is going to be worth it. But the fact that they have to justify it with grandiose visions instead of a strong underlying business case is a major red flag. In a recent investor pitch disguised as an essay, Sam Altman made one bold claim after another:
“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”
And:
“Deep learning works, and we will solve the remaining problems.”
He is not the only one. Mustafa Suleyman, head of the AI division at Microsoft, said unironically during a recent TED Talk:
“Everything will soon be represented by a conversational interface. Or, to put it another way, a personal AI. And these AIs will be infinitely knowledgeable, and soon they’ll be factually accurate and reliable. They’ll have near-perfect IQ.”
And then there are the employees, who describe interacting with the latest OpenAI model, o1, as a “spiritual experience”. Yet another data point suggesting insiders are increasingly detached from reality.
Reality
To live up to the expectations, generative AI will have to completely transform law, healthcare, education, and pretty much every other area of business — which it currently isn’t doing.
A more critical examination of OpenAI’s new ‘reasoning’ model shows it remains unreliable and deeply opaque. Claims that it is capable enough to be deployed for medical diagnosis were quickly debunked in a viral article showing that 4o confidently misdiagnoses, while o1 not only confidently misdiagnoses but also rationalizes its errors. Are we really as close to solving these problems as we are led to believe?
In reality, nobody knows whether the next round of scaling will get us there. The idea that everything will magically get better with scale is a myth, and the biggest challenges to wide-scale adoption remain unchanged: cost, reliability, bias, safety, and security.
What is it they say again? Stupidity is doing the same thing over and over again and expecting different results.
Post-bubble realism
After the Dot-com bubble deflated, the Internet went on to transform every aspect of our lives. It gave rise to a rich and versatile digital economy, connecting global markets and people, from social networks like Facebook (now Meta), LinkedIn, and Twitter to disruptive new business models like Airbnb and Uber, and the birth of streaming.
If generative AI turns out to be just as transformational, overexcitement (while premature) is warranted. At the same time, history teaches us that a bubble is a distinct possibility whenever a technology is heralded as the silver bullet for all our problems, from cancer to climate change.
Instead of inflating the hype further, what we need right now is to focus on the problems AI can actually solve. The AI revolution may be real, but it will likely only arrive after the bubble bursts.
Speak soon,
— Jurgen
I personally think we are in a bubble simply because current AI systems take no account of the experience of the human operator, always focusing on new features, faster generation, things like that. At some point we as humans grow weary of using them, then fatigued by the technology, and finally we simply start to ignore it. We have seen this time and time again.
People are starting to experience AI fatigue as a legitimate phenomenon. As someone who is starting to experience it myself (I don’t want AI in my washing machine, thank you very much), I have found a few case studies while researching my own condition.
AI Fatigue: A Study into the Impact of Artificial Intelligence on Employee Fatigue
https://www.amazon.com/dp/B0D2BQV1DC
There are a couple of others too, but that’s just my two cents.
Great insights! Especially your conclusion that we should focus more on the “why” of AI use cases. Every company seems to need some AI to be relevant these days, while the quality of the LLMs and user satisfaction appear far less important.
How should we protect this bubble from bursting, in your opinion?