The conversation around artificial intelligence has entered a new phase. After a period of explosive growth and unbridled optimism, the AI sector is facing a more sober reality. There is no single trigger, such as a specific MIT report causing a market crash; rather, the tech sector is undergoing a broader AI reality check, driven by a confluence of factors, from the technical limitations of current models to the immense costs of scaling them.
One of the central issues is the gap between AI's public perception and its actual capabilities. For many, large language models (LLMs) like ChatGPT represent a giant leap toward human-like intelligence.
However, as critics point out, these models are essentially sophisticated pattern-recognition engines. They are brilliant at processing vast amounts of data and generating plausible text, but they often lack true reasoning, logic, or a fundamental understanding of the world. In short, they are no match for human intelligence on those fronts.
For example, a human can recognize a mango at a glance, relying on a lifetime of learned context and intuitive processing. An AI model, by contrast, must go through a far more resource-intensive process to make the same decision: it needs a high-resolution image, vast amounts of training data, and immense computational power to analyse the visual information. That process consumes significant amounts of electricity and water to cool the servers, highlighting the high cost of what seems like a simple task.
A classic example of this is the "hallucination" problem, where an LLM confidently fabricates information. Even the latest model that grabbed headlines recently makes mistakes: an economist told me that when he asked about the economic situation in Sri Lanka on 21 August 2025, the response was based on the country's condition in 2022, when it was declared bankrupt.
When it was made aware of the flaw in the response, it quickly searched the internet for the latest data and rectified the mistake. Of course, the model in question would not make the same mistake again, as it has 'learned' from it.
While it's true that a model can be corrected on a specific point, this doesn't mean it has "learned" in a human sense. It has simply been fine-tuned to provide a different response for that specific prompt. This distinction is crucial: it shows that models are not "on a perpetual learning curve" from every user interaction, but rather are trained on massive, static datasets and then updated with targeted corrections.
The issue of data currency also remains. While some models now have real-time data access, many still rely on training data that is months or even years old, leading to factual errors.
Beyond technical limitations, the financial and environmental costs of AI are becoming a significant point of discussion. The sheer scale required to train and run these models is staggering. They require:
Massive computational power: Training a single state-of-the-art model can cost tens of millions of dollars and require thousands of specialized processors.
Enormous energy consumption: The data centres that house these models use huge amounts of electricity. As models get bigger, their energy footprint grows, raising concerns about sustainability.
Data centres also require vast amounts of water to cool their servers, adding another layer to the environmental discussion.
These immense costs are forcing many companies to re-evaluate their AI strategies. It's not a matter of a single company "downsizing its AI sector" but rather a broader strategic pivot. Tech giants like Meta and Google, while still heavily invested in AI, are focusing on more practical, cost-effective applications rather than simply chasing bigger and bigger models.
The AI "boom" isn't over, but it is maturing. The initial excitement is giving way to a more pragmatic approach. The focus is shifting from "What amazing thing can this model do?" to "How can we make this model useful, safe, and cost-effective?"
The current challenges are forcing the industry to address fundamental issues: making AI more reliable, transparent, and sustainable. As the hype subsides, the real, long-term work of building useful and responsible AI is just beginning.