The doomsayers of AI are having their moment. They correctly point out that the rapid progress of large language models has slowed. Context windows remain limited, hallucinations persist, and bigger models no longer guarantee smarter ones. From this, they conclude that the age of AI breakthroughs is ending.
They are mistaking the engine for the journey.
History offers many parallels. When the internal combustion engine stopped getting dramatically better, innovation didn’t stop. That was when it really started. The real transformation came from everything built around it: road networks, trucking logistics, suburbs, the global supply chain. Likewise, the shipping container changed the world not through further improvements to the box itself, but because it became the standard that reshaped ports, labor systems, and trade. When the core technology stabilizes, people finally start reimagining what to do with it.
This is the point we’ve reached with AI. The models are powerful, but most of their potential remains untouched. Businesses are still treating AI as a novelty, something to sprinkle on top of existing processes. Education systems, government workflows, healthcare administration: all are still built as if nothing new has happened. We haven’t even begun to redesign for a world where everyone has a competent digital assistant.
The real question is not whether an AI can pass a medical exam. It’s how we organize diagnosis and care when every doctor has instant access to thousands of case studies. It’s not about whether an AI can draft an email. It’s about how office communication changes when routine writing takes seconds. The innovation now lies in application, not invention.
Limits are not the enemy. In fact, recognizing limits often helps creativity flourish. When designers accept that a phone’s screen size is fixed, they find smarter interfaces. We become inventive when the boundaries are clear. The same will happen with AI once we stop waiting for miracle upgrades and start asking better questions.
The real bottleneck is attention. Investment still flows heavily into training larger and larger models, chasing diminishing returns. Meanwhile, the tools that would actually change how people work or learn get far less support. It’s as if we are building faster trains while neglecting the tracks, stations, and maps.
There is a similar problem in education, where energy goes into protecting the structure of institutions while ignoring how learning could be improved. Just because we can do something well does not mean it is worth doing. And just because AI researchers can build a bigger model does not mean they should.
The most meaningful innovation is waiting to happen. It is no longer about raw power, but about redesign. Once we shift our focus from models to uses, the next revolution begins.
