Summary: Wall Street is openly questioning whether artificial intelligence will ever make money, and academics are scrutinizing the progress AI companies claim to be making. By now, calling the AI boom a bubble is hardly contrarian. But are bubbles bad? And if so, why?
Everything is cyclical. History repeats itself, with variations. The details change, but the theme often stays the same.
During the Dot-com era, investments in internet-based companies skyrocketed. At the core was a new disruptive technology: the World Wide Web. Companies with a .com suffix could count on high valuations. They prioritized growth over profitability in the hope of capturing market share, leading to high burn rates and unsustainable business models. In 2000, the bubble burst. Thousands of dot-com companies failed, trillions of dollars of market value were wiped out in just a few months, and only a handful of companies like Google and Amazon survived.
Many folks have argued in recent months (more eloquently than I ever could) that generative AI is another tech bubble in the making. I’d take it one step further and say: if AI only turns out to be half as impactful as the Internet, the chances we are in the midst of a bubble are about 99.99%.
Hear me out.
Why bubbles are bad
Contrary to popular belief, calling something a bubble is not the same as calling it useless. A bubble is simply a period of inflated expectations.
At the heart of a tech bubble is always a new emerging technology, one that is often poorly understood by investors. Real or manufactured FOMO leads to overinvestment in companies whose business models aren’t sustainable in the long run. The bubble bursts when reality catches up.
One might ask: why are bubbles bad? This is a terrific question. In his paper Why the AI Hype is another Tech Bubble, Luciano Floridi explains:
“A tech bubble typically culminates in a market correction or crash, when reality fails to meet inflated expectations, leading to a rapid decline in asset values and often resulting in significant financial losses for investors, the failure of many companies within the sector, and ultimately an overreaction in terms of financial disinvestment and social disappointment. Tech bubbles are destructive in the short term. They are also wasteful in the medium-term, since the outcome is often an overreaction, rather than a reasonable adjustment.”
When it comes to AI, overspending could lead to an extended period of little to no progress, better known as an ‘AI Winter’.
Expectations
It doesn’t help that today’s AI is really expensive to train. Companies need to collect exponentially more data and burn exponentially more compute to achieve linear returns.
To be clear, we’re not talking about financial returns here, but a handful of AI benchmarks. Benchmarks that have become the sole measure of progress, even though academics have pointed out that they can be easily gamed and don’t necessarily translate to real-world use cases.
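To make that arithmetic concrete, here is a toy sketch of power-law scaling, the kind of relationship reported in published scaling-law studies. The constants `a` and `b` below are invented for illustration, not fitted to any real model:

```python
def loss(compute, a=10.0, b=0.05):
    # Hypothetical power law: loss falls as compute^(-b).
    # Constants are made up for this sketch, not from any real fit.
    return a * compute ** -b

def compute_needed(target_loss, a=10.0, b=0.05):
    # Invert the power law: compute required to reach a target loss.
    return (a / target_loss) ** (1 / b)

# Three equal steps of loss improvement (5.0 -> 4.5 -> 4.0)...
c1 = compute_needed(5.0)
c2 = compute_needed(4.5)
c3 = compute_needed(4.0)

# ...each cost a large multiple of the compute of the step before.
print(f"step 1 costs {c2 / c1:.1f}x more compute")
print(f"step 2 costs {c3 / c2:.1f}x more compute")
```

With these made-up constants, the first fixed improvement costs roughly 8x the compute and the next roughly 10x. The exact multipliers depend entirely on the exponent, but the shape is the point: additive gains demand multiplicative spending.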
The believers are convinced all of this upfront investment is going to be worth it. But the fact that they have to justify it with grandiose visions instead of a strong underlying business case is a major red flag. In a recent investor pitch disguised as an essay, Sam Altman made one bold claim after another:
“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”
And:
“Deep learning works, and we will solve the remaining problems.”
He is not the only one. Mustafa Suleyman, head of Microsoft’s AI division, said without a hint of irony during a recent TED talk:
“Everything will soon be represented by a conversational interface. Or, to put it another way, a personal AI. And these AIs will be infinitely knowledgeable, and soon they’ll be factually accurate and reliable. They’ll have near-perfect IQ.”
And then there are the employees, who describe interacting with OpenAI’s latest model, o1, as a “spiritual experience”. Another data point suggesting insiders are increasingly detached from reality.
Reality
To live up to the expectations, generative AI will have to completely transform law, healthcare, education, and pretty much every other area of business. Right now, it isn’t.
A more critical examination of OpenAI’s new ‘reasoning’ model shows it remains unreliable and deeply opaque. Claims that it is capable enough to be deployed for medical diagnosis were quickly debunked in a viral article showing that 4o confidently misdiagnoses, while o1 not only confidently misdiagnoses but also rationalizes its errors. Are we really as close to solving these problems as we are led to believe?
In reality, nobody knows whether the next round of scaling will get us there. The idea that everything will magically get better with scale is a myth, and the biggest challenges to wide-scale adoption remain unchanged: cost, reliability, bias, safety, and security.
What is it they say again? Stupidity is doing the same thing over and over again and expecting different results.
Post-bubble realism
After the Dot-com bubble deflated, the Internet went on to transform every aspect of our lives. It gave rise to a rich and versatile digital economy, connecting global markets and people, from social networks like Facebook (now Meta), LinkedIn, and Twitter to disruptive new business models like Airbnb and Uber, and the birth of streaming.
If generative AI turns out to be just as transformational, overexcitement (while premature) is warranted. At the same time, history teaches us that a bubble is a distinct possibility when a technology is heralded as the silver bullet to all our problems, from cancer to climate change.
Instead of inflating more hype, what we need right now is to focus on the problems AI actually can solve. The AI revolution may be a real thing, but it will likely only arrive after the bubble bursts.
Speak soon,
— Jurgen
I personally think we are in a bubble simply because current AI systems take no account of the experience of the human operator, always focusing on new features, faster generation, things like that. At some point, we as humans grow weary of its use, then we experience fatigue with the technology, and finally we simply start to ignore it. We have seen this time and time again.
People are starting to experience AI Fatigue as a legitimate phenomenon. As someone who is starting to experience it myself (I don’t want AI in my washing machine, thank you very much), I have found a few case studies while researching my own condition.
AI Fatigue: A Study into the Impact of Artificial Intelligence on Employee Fatigue
https://www.amazon.com//dp/B0D2BQV1DC
There are a couple of others too, but that’s just my two cents.
I will start with the quote, "Reality always wins; your job is to get in touch with it!" Reality will win again in this case once the day of reckoning comes. Generative AI is too costly to build and maintain, so sooner rather than later these companies, including the Magnificent 7, will have to justify all their capital expenditures. Without products with real-life applications, it will be hard to maintain this level of spending for long.
The AI industry is heading toward a bubble akin to the dot-com era, where inflated expectations and overinvestment are rampant. Reality will eventually prevail, and we'll see the same boom and bust cycle unless another "AI winter" sets in, halting progress. Most AI companies won't survive this cycle, but a few winners will emerge. I also don't see much differentiation among the various LLM model companies; the results from different models are starting to look very similar, since the training data is largely the same and, most likely, they are using each other's output to train their models. I also have a hard time understanding the moat of these companies if they are delivering similar outcomes. How many model companies would be needed?
We need to consider the AI bubble from two perspectives:
1. Bubbles are bad because they burst, leading to financial losses and wasted resources. Many people and companies will lose money.
2. However, bubbles also provide funding to innovative endeavors that might not attract investment otherwise. Even failed projects can teach valuable lessons and eventually lead to tangible advancements, much like the dotcom bubble did.
The current state of the AI bubble is uncertain. It will remain so until we see either the failure of prominent AI startups, leading to a domino effect across the industry, or the emergence of a few products that can be practically applied in real-life situations.
After using LLMs for over 15 months, I see some value in current offerings, but I can't justify the cost of a monthly subscription for my organization. In an experiment with 17 people in my organization, I provided access to multiple LLMs through a Poe license. Despite offering training and use cases, only 2 people use the tools regularly. This low adoption suggests that people either don't have tasks where AI can assist on a regular basis, or the tools are not intuitive enough. I also often feel the technology is half-baked: hyped as revolutionary but underdelivering on its promises.
Interestingly, we've had better luck with GitHub Copilot, which sees higher usage.
In conclusion, I'll quote Peter Thiel:
“. . . the bond bubble, the tech bubble, the stock bubble, the emerging markets bubble, the housing bubble. . . One by one they had all burst, and their bursting showed that they had been temporary solutions to long-term problems, maybe evasions of those problems, distractions. With so many bubbles—so many people chasing ephemera, all at the same time—it was clear that things were fundamentally not working.”