VentureBeat Mar 10, 01:00 PM
The limits of bubble thinking: How AI breaks every historical analogy

It’s always the same story: A new technology appears and everyone starts talking about how it’ll change everything. Capital rushes in, companies form overnight, and valuations climb faster than anyone can justify. Then, many months later, the warnings arrive, and people suddenly remember the dot-com crash or crypto.
You’ve probably seen it before. And if you have, you probably think AI is the next bubble. Humans are great at pattern-matching. We’ve evolved to see patterns, so when something familiar emerges, we instinctively map it onto the closest story we already know. We think we’ve seen it before, and we’re confident we know how it ends.
But that instinct can mislead us. AI feels like a bubble because we’re forcing something genuinely discontinuous into a familiar story. The idea that everything that rises quickly must ultimately collapse sounds prudent. But sounding prudent doesn’t make it true.
Why markets keep overshooting
Every major technological shift produces the same outward symptoms: Inflated expectations, followed by high-visibility failure. Dot-com, mobile, and crypto all went through a phase where the world lost its sense of proportion.
Why does this keep happening? Because markets don’t have a framework for discontinuous change. Discounted cash flow models assume steady, stable growth, and comparable-company analysis assumes the category already exists. So people assume the near future looks like the recent past, but that doesn’t work when the underlying category itself is changing.
Most valuation tools are designed for incremental progress, so analysts look at quarterly forecasts and incremental improvements. They don’t know what to do with step changes, and they can’t model nonlinear adoption.
So when you see capital overshooting or extreme dispersion of outcomes, that’s the market trying to value decade-long bets using quarterly logic. (Which doesn’t work.) And that’s what a bubble actually is: An indication that no one yet knows how to price what’s coming. That uncertainty looks like invalidation, but it just exposes the limits of existing frameworks.
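To make that concrete, here is a minimal sketch with entirely hypothetical numbers (not drawn from this article or any real company) of how a standard discounted cash flow calculation behaves when the same starting cash flow follows a steady-growth path versus a nonlinear, step-change adoption path:

```python
# Illustrative sketch only: hypothetical numbers, not a real valuation model.
# It contrasts a steady-growth DCF with the same starting cash flow under a
# crude step-change (nonlinear) adoption path.

def dcf(cash_flows, discount_rate):
    """Present value of a series of annual cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

years = 10
discount_rate = 0.10
base_cash_flow = 100.0  # hypothetical year-1 cash flow

# Steady-growth assumption: 5% per year, the kind of path DCF handles well.
steady = [base_cash_flow * 1.05 ** t for t in range(years)]

# Nonlinear adoption: flat for several years, then an eightfold step change
# as the category takes off (a crude stand-in for an S-curve).
nonlinear = [base_cash_flow if t < 5 else base_cash_flow * 8
             for t in range(years)]

print(f"Steady-growth value:      {dcf(steady, discount_rate):,.0f}")
print(f"Nonlinear-adoption value: {dcf(nonlinear, discount_rate):,.0f}")
# Same year-1 number, widely different valuations -- the kind of dispersion
# that shows up when quarterly logic meets a step change.
```

The point of the sketch is not the specific numbers; it is that small differences in assumptions about when the step change arrives swing the answer enormously, which is exactly the dispersion the market is showing.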
The category error we keep making
When something new arrives, we reach for comparisons.
AI is like electricity.
AI is like computers.
AI is like the internet.
AI is like mobile.
These comparisons are comforting because those technologies all produced massive, economy-wide change and attracted enormous capital. They changed how work got done.
They also share something deeper. Every one of those technologies extended human capability without replacing human cognition. Electricity powered machines, but humans still decided what to build. Computers processed data, but humans interpreted it. The internet moved information, but humans decided what mattered. Mobile put computing in your pocket, but human attention remained the scarce resource. In every case, human intelligence anchored everything. It was also the bottleneck.
AI is different because it performs cognitive work. And if that makes you