Michael Burry, the iconoclast investor who famously predicted the 2008 housing collapse, has spent much of the last year betting against the Artificial Intelligence industry. Through his firm, Scion Asset Management, Burry placed massive bearish bets against market darlings like Nvidia and Palantir.
News broke recently that Burry is liquidating his fund to return capital to investors, a sign he may be tired of fighting the market's irrationality. But his fundamental warning stands. And if you look past the stock tickers to the forensic accounting, you realize something unsettling: Burry's math might be right.
I have spent much of the past year interviewing AI startup founders and enterprise leaders. A year ago, the mood was electric. Today, it is frantic. The signal I am hearing is consistent: investors and boards are demanding that AI products not only be revolutionary but also cut costs immediately. Yet as we approach the end of 2025, with billions spent across industries, the CTOs and the DevOps and FinOps leaders running these projects are not seeing the promised savings. They are seeing a proliferation of errors.
We are witnessing a classic technology bubble: lots of enthusiasm, precious little immediate payback.
The $600 Billion Math Problem
To understand why the bubble is forming, you have to look at the “CapEx Gap.” David Cahn of Sequoia Capital recently highlighted the industry’s “$600 Billion Question.” The AI sector is spending roughly $600 billion annually on chips and data centers. To break even, they need to generate $600 billion in new annual revenue. The actual figure? Less than $100 billion.
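The gap arithmetic is simple enough to sketch in a few lines of Python. The figures are the article's round numbers, not audited data:

```python
# Back-of-the-envelope sketch of the "CapEx Gap" described above.
# Both figures are the article's round numbers, not audited data.

required_annual_revenue_bn = 600  # revenue needed to justify ~$600B/yr on chips and data centers
actual_annual_revenue_bn = 100    # a generous estimate ("less than $100 billion")

gap_bn = required_annual_revenue_bn - actual_annual_revenue_bn
coverage = actual_annual_revenue_bn / required_annual_revenue_bn

print(f"Revenue gap: ${gap_bn}B")                      # Revenue gap: $500B
print(f"Spending covered by revenue: {coverage:.0%}")  # Spending covered by revenue: 17%
```

Even with a generous revenue estimate, current demand covers roughly a sixth of the spending it is supposed to justify.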
So where is the “growth” coming from? Much of it appears to be a circular illusion.
We are seeing rampant “round-tripping.” A Big Tech giant invests $1 billion into an AI startup, but with a catch: the startup must spend that money on the investor’s cloud servers. The cash leaves the giant’s balance sheet as an “investment” and returns immediately as “revenue.” This isn’t organic market demand; it’s an accounting loop.
Burry’s thesis goes deeper into the plumbing. He argues that hyperscalers are artificially inflating profits by extending the “useful life” of their servers on their books from three years to six. In reality, AI chips run hot and burn out in two or three years. Burry estimates this single accounting trick is hiding a $176 billion earnings illusion that will vanish the moment that hardware needs replacing.
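Here is a minimal sketch of the depreciation mechanic Burry describes, assuming straight-line depreciation and a hypothetical $100 billion GPU fleet (my round number for illustration, not Burry's):

```python
# How stretching "useful life" on the books flatters earnings.
# The $100B fleet cost is a hypothetical round number for illustration.

fleet_cost_bn = 100    # hypothetical GPU fleet cost, in $ billions
economic_life_yrs = 3  # how long hot-running AI chips actually last
book_life_yrs = 6      # useful life now assumed on the books

real_annual_expense = fleet_cost_bn / economic_life_yrs    # ~$33.3B/yr
booked_annual_expense = fleet_cost_bn / book_life_yrs      # ~$16.7B/yr

# Every year the book life exceeds the economic life, reported
# earnings are overstated by the difference in expense.
overstatement_per_year = real_annual_expense - booked_annual_expense
print(f"Earnings overstated by ~${overstatement_per_year:.1f}B/yr per $100B of hardware")
```

Doubling the book life halves the annual expense, so reported profit rises with no change in cash flows until the hardware actually dies and must be replaced at full cost.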
The Productivity Paradox
I’ve emerged from a year of corporate “AI transformation” projects with a contrarian take that supports this bearish view: Artificial intelligence isn’t actually cutting costs.
What it is doing, in its best instances, is helping talented people work faster. But the promised cashable savings often fail to materialize once you account for the new costs and risks AI brings. In fact, in a recent survey, 95% of organizations reported no measurable ROI from their AI efforts.
Consider the difference between a factory speeding up its assembly line and a factory closing down a production floor and saving on wages. Most companies are seeing the former, not the latter.
Inside a large contact center I studied, management gave a generative AI assistant to about half the customer support agents. The results were impressive: the AI-assisted group handled 14% more customer issues per hour. Newer employees suddenly performed almost on par with veterans. Yet, when I spoke with the directors, there was no plan to reduce headcount.
Why? The extra capacity meant shorter wait times and fewer backlogs, not layoffs. The economic logic is straightforward: unless you plan to eliminate the work or the workers, time saved is just that—time saved, not dollars saved.
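The contact-center arithmetic above can be made concrete. All figures below are illustrative assumptions of mine, not the studied firm's data; only the 14% throughput gain comes from the study:

```python
# Throughput gains only become cashable savings if paid hours fall.
# Agent count, issues/hour, and wage are illustrative assumptions;
# the 14% speedup is the figure observed in the study.

agents = 1000
issues_per_hour = 5.0
hourly_wage = 25.0
ai_speedup = 1.14  # 14% more issues handled per hour

capacity_before = agents * issues_per_hour
capacity_after = agents * issues_per_hour * ai_speedup

# Headcount unchanged, so payroll is unchanged: the gain shows up
# as shorter queues and happier customers, not as dollars saved.
payroll_before = agents * hourly_wage
payroll_after = agents * hourly_wage

extra_capacity = capacity_after - capacity_before  # ~700 extra issues/hour
dollars_saved = payroll_before - payroll_after     # 0
```

The model makes the point bluntly: with headcount held constant, the productivity term and the payroll term never touch.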
The Review Tax: Why AI Isn’t Free
So where did all those rosy “cost savings” projections go? They were swallowed by what I call the AI Review Tax.
AI has an uncanny ability to be confidently wrong. It will fabricate numbers, misinterpret nuances, or write code that almost works but contains a nasty bug. This means human experts must double-check AI-generated work in any high-stakes application. Those extra cycles of review and rework are the “tax” on the initial speed-up.
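A toy model shows how the Review Tax eats the speed-up. Every number here is an illustrative assumption, not measured data:

```python
# Toy model of the "Review Tax": AI drafts faster, but each output
# needs expert review, and errors trigger rework. All minute counts
# and the error rate are illustrative assumptions.

def effective_minutes(draft, review=0.0, error_rate=0.0, rework=0.0):
    """Expected human minutes per task, including review and expected rework."""
    return draft + review + error_rate * rework

human_only = effective_minutes(draft=30)
ai_assisted = effective_minutes(draft=10,       # AI cuts drafting time by 2/3...
                                review=12,      # ...but an expert must check every output,
                                error_rate=0.25,  # a quarter of outputs are subtly wrong,
                                rework=30)        # and fixing one costs a full redo

print(human_only, ai_assisted)  # 30.0 29.5
```

Under these assumptions a 3x drafting speed-up nets out to a half-minute per task. Tweak the error rate upward and the AI-assisted path becomes slower than doing the work by hand.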
Ask the folks at Klarna. In 2024, its CEO proudly proclaimed that an AI chatbot was doing the work of 700 customer service reps. Fast forward a year: the company quietly brought back human agents because the AI, while fast and fine for simple questions, delivered “lower quality” on the tougher issues. Klarna admitted it had “underestimated the tradeoff.”
In private, off-the-record conversations, some CTOs have confessed a dirty secret: they could have achieved the same end results using stable, 20-year-old database structures and decision-tree logic—technology that costs pennies compared to the millions they are currently torching on GPU compute.
We have effectively swapped reliable, cheap software for expensive, probabilistic slot machines.
A dozen CTOs have told me some variation of: “We tried having AI build an application. It generated lots of code, but we ended up having senior engineers redo most of it.” In one study, teams using an AI coding assistant actually had higher bug rates with no improvement in coding speed. The net effect? Sometimes a wash, sometimes even negative productivity.
Bubble Signs: Hype Over Substance
The disconnect between promise and reality is leading to some soul-searching. Even worse, the technology itself may be hitting a quality ceiling due to “Model Collapse.”
Researchers at Stanford and UC Berkeley found that GPT-4’s accuracy on specific tasks dropped significantly over a three-month period. As the internet floods with AI-generated “slop” (to use a term popularized by a Guardian columnist), new models train on synthetic garbage rather than human data. It is a snake eating its own tail.
For businesses, the worry is trust erosion: employees and customers will start to assume anything could be AI-generated and view it with extra skepticism. If the average business user consistently finds that using AI means more work verifying outputs than just doing the task themselves, they will simply stop using it, pricking the AI bubble from within.
Making AI Earn Its Keep
Despite all this, I remain bullish on AI in the right contexts. The key is using AI as a force multiplier for capable teams, not as a plug-and-play replacement for them.
We see this with software teams that treat Copilot as a junior developer—helpful for boilerplate and suggestions, but always code-reviewed by a senior engineer. We see it in triage systems that categorize tickets so humans can solve them faster. This works.
But to any business leader feeling AI-FOMO, my advice is to embrace the skepticism of Michael Burry. Treat any claimed “hours saved” as a hypothesis until you see that time reallocated to profitable work. If a vendor pitches “end-to-end automation—no humans needed,” be very, very wary.
In the end, a sober mindset will serve us well. AI won’t magically slash costs overnight. But deployed wisely, it can boost productivity. The companies that get the most out of AI will be those that combine human judgment with machine speed, keep an eagle eye on quality, and stay honest about the numbers. The others will be stuck in the hype, wondering why the “AI revolution” boosted everyone’s workloads but not their profits.