I personally think we are in a bubble simply because current AI systems take no account of the experience of the human operator, always focusing on new features, faster generation, and the like. At some point, we humans grow weary of using the technology, experience fatigue with it, and then simply start to ignore it. We have seen this time and time again.
People are starting to experience AI fatigue as a legitimate phenomenon. As someone who is starting to experience it myself (I don't want AI in my washing machine, thank you very much), I have found a few case studies while researching my own condition.
AI Fatigue: A Study into the Impact of Artificial Intelligence on Employee Fatigue
https://www.amazon.com//dp/B0D2BQV1DC
There are a couple of others too, but that's just my two cents.
I will start with the quote, "Reality always wins; your job is to get in touch with it!" Reality will win again in this case once the day of reckoning comes. Generative AI is too costly to build and maintain, so sooner rather than later, these companies, including the Magnificent 7, will have to justify all their capital expenditures. Without products that have real-life applications, it will be hard to maintain this level of expenditure for long.
The AI industry is heading towards a bubble akin to the dotcom era, with inflated expectations and overinvestment running rampant. Reality will eventually prevail, and we'll see the same boom-and-bust cycle unless another "AI winter" sets in, halting progress. Most AI companies won't survive this cycle, but a few winners will emerge. I also don't see much differentiation among the various LLM model companies; the results from different models are starting to look very similar, since the training data is the same and, most likely, they are using each other's output to train their models. I also have a hard time understanding the moat of these companies if they are delivering similar outcomes. How many model companies will be needed?
We need to consider the AI bubble from two perspectives:
1. Bubbles are bad because they burst, leading to financial losses and wasted resources. Many people and companies will lose money.
2. However, bubbles also provide funding to innovative endeavors that might not attract investment otherwise. Even failed projects can teach valuable lessons and eventually lead to tangible advancements, much like the dotcom bubble did.
The current state of the AI bubble is uncertain. It will remain so until we see either the failure of prominent AI startups, leading to a domino effect across the industry, or the emergence of a few products that can be practically applied in real-life situations.
After using LLMs for over 15 months, I see some value in current offerings, but I can't justify the cost of a monthly subscription for my organization. In an experiment with 17 people in my organization, I provided access to multiple LLMs through a Poe license. Despite offering training and use cases, only 2 people use the tools regularly. This low adoption suggests that people either don't have tasks where AI can assist on a regular basis, or that the tools are not intuitive enough. I also often feel the technology is half-baked: hyped as revolutionary but underdelivering on its promises.
Interestingly, we've had better luck with GitHub Copilot, which sees higher usage.
In conclusion, I'll quote Peter Thiel:
“. . . the bond bubble, the tech bubble, the stock bubble, the emerging markets bubble, the housing bubble. . . One by one they had all burst, and their bursting showed that they had been temporary solutions to long-term problems, maybe evasions of those problems, distractions. With so many bubbles—so many people chasing ephemera, all at the same time—it was clear that things were fundamentally not working.”
Oh yeah? If AI is so bad, then how come I just asked ChatGPT how many "r"'s are in your article, and it said I should take my laptop across the river in the boat first, and then return for the cabbage?!
Oh....wait....
Great insights! Especially your conclusion that we should focus more on the "why" of AI use cases. Every company needs to have some AI to be relevant these days. The quality of the LLMs and user satisfaction seem to be far less important.
How should we prevent this bubble from bursting, in your opinion?
Good positive overview as ever, Jurgen. I wonder if the cost-benefit of whatever we call AI is comparable to that of the www outside of niche, necessity-based cases. Small websites can survive on tiny ad revenue.
A) Are we about to get ads in chatbots? Are all chatbots about to become paid or bundled with other stuff?
B) Will ads even cover the cost of running a capable LLM (thinking of the reported $750k per day for ChatGPT)? Will less capable but cheaper LLMs be useful enough for people to flock to?
Dunno. Will companies realise that website chatbot integrations are money down the drain and cull spending, or will they keep it going long enough out of administrative laziness...
Canva recently jacked up its prices by 300%, presumably to cover the costs of all its AI features. OpenAI is rumored to lose as much as $5 billion this year and Anthropic about $2.7 billion, even though their products are hugely popular. Either they are burning all their capital on R&D, or they are offering their products at a steep discount in order to grow or maintain market share.
Didn't know that about Canva! Heard about other losses from Gary's mailing list.
I doubt it's R&D, although training new models definitely ain't cheap. I think they are underselling massively to attract user numbers. Ed Zitron has a justifiable view on this: it's not about products and profits anymore, it's about growth and investment. The idea that many users combined with heavy losses somehow still counts as a net positive that justifies continued investment is insane. That playbook is likely to be rewritten soon, of course, when OpenAI tries to raise more money than anybody has ever tried to raise before.
In the Valley, profit is for pussies.
https://tempo.substack.com/p/the-chief-ai-officer-accepts-his