Image: J Studios/Getty Images
Large language models feel intelligent because they speak fluently, confidently, and at scale. But fluency is not understanding, and confidence is not perception. To grasp the real limitation of today’s AI systems, it helps to revisit an idea that is more than two thousand years old.
In The Republic, Plato describes the allegory of the cave: prisoners chained inside a cave can see only shadows projected on a wall. Having never seen the real objects casting those shadows, they mistake appearances for reality, never experiencing the world as it actually is.
Large language models live in a very similar cave.
LLMs do not see, hear, touch, or interact with reality. They are trained almost entirely on text: books, articles, posts, comments, transcripts, and fragments of human expression collected from across history and the internet. That text is their only input. Their only “experience.”
LLMs only “see” shadows: texts produced by humans describing the world. Those texts are their entire universe. Everything an LLM knows about reality comes filtered through language, written by people with varying degrees of intelligence, honesty, bias, knowledge, and intent.
Text is not reality: it is a human representation of reality. It is mediated, incomplete, biased, wildly heterogeneous, and often distorted. Human language reflects opinions, misunderstandings, cultural blind spots, and outright falsehoods. Books and the internet contain extraordinary insights, but also conspiracy theories, propaganda, pornography, abuse, and sheer nonsense. When we train LLMs on “all the text,” we are not giving them access to the world. We are giving them access to humanity’s shadows on the wall.
This is not a minor limitation. It is the core architectural flaw of current AI.
The prevailing assumption in AI strategy has been that scale fixes everything: more data, bigger models, more parameters, more compute. But more shadows on the wall do not equal reality.
Because LLMs are trained to predict the most statistically likely next word, they excel at producing plausible language, but not at understanding causality, physical constraints, or real-world consequences. This is why hallucinations are not a bug to be patched away, but a structural limitation.
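To make that concrete, here is a minimal, purely illustrative Python sketch of greedy next-token prediction. The vocabulary, prompt, and scores are invented for the example and do not correspond to any real model or API; the point is only that the mechanism rewards statistical plausibility, not truth or physics.

```python
# Toy sketch of next-token prediction (not any specific model's API).
# The model scores every token in its vocabulary and emits the most
# statistically plausible continuation -- nothing here checks whether
# that continuation is true, causal, or physically possible.
import math

vocabulary = ["the", "bridge", "collapsed", "held", "sang"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores after a prompt like
# "Under twice its rated load, the bridge ..."
logits = [0.1, 0.2, 2.3, 2.1, -3.0]
probs = softmax(logits)

# Greedy decoding: pick whatever was most likely in the training text.
next_token = vocabulary[probs.index(max(probs))]
print(next_token)  # "collapsed" -- a plausible phrase, not a physics result
```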
As Yann LeCun has repeatedly argued, language alone is not a sufficient foundation for intelligence.
This is why attention is increasingly turning toward world models: systems that build internal representations of how environments work, learn from interaction, and simulate outcomes before acting.
Unlike LLMs, world models are not limited to text. They can incorporate time-series data, sensor inputs, feedback loops, ERP data, spreadsheets, simulations, and the consequences of actions. Instead of asking “What is the most likely next word?”, they ask a far more powerful question:
“What will happen if we do this?”
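The difference can be sketched in a few lines of Python. The transition function below is hand-written for illustration; in a real system it would be learned from sensor data, simulations, or operational records, and the scenario, names, and numbers here are assumptions, not a reference implementation.

```python
# Toy sketch of the world-model question: "what will happen if we do this?"
# A hand-written transition function stands in for a learned model of
# how inventory evolves; the figures are invented for illustration.

def simulate_inventory(stock, action, daily_demand=40, days=7):
    """Roll the state forward one week under a candidate action."""
    stock += {"do_nothing": 0, "reorder_small": 100, "reorder_large": 300}[action]
    missed = 0
    for _ in range(days):
        stock -= daily_demand
        if stock < 0:
            missed += -stock  # demand we could not serve
            stock = 0
    return {"ending_stock": stock, "missed_units": missed}

# Instead of asking which sentence sounds most plausible, we simulate
# each candidate action and compare its predicted consequences.
for action in ["do_nothing", "reorder_small", "reorder_large"]:
    print(action, simulate_inventory(stock=150, action=action))
```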
For executives, this is not an abstract research debate. World models are already emerging, often without being labelled as such, in domains where language alone is insufficient.
In these domains, language is useful but not enough. Understanding requires a model of how the world behaves, not just how people talk about it.
This does not mean abandoning language models. It means putting them in their proper place in the next phase of AI.
In Plato’s allegory, the prisoners are not freed by studying the shadows more carefully: they are freed by turning around and confronting the source of those shadows, and eventually the world outside the cave.
AI is approaching a similar moment.
The organisations that recognise this early will stop mistaking fluent language for understanding and start investing in architectures that model their own reality. Those companies won’t just build AI that talks convincingly about the world: they’ll build AI that actually understands how it works.
Will your company understand this? Will it be able to build its own world model?
ABOUT THE AUTHOR
Enrique Dans has been teaching Innovation at IE Business School since 1990, hacking education as Senior Advisor for Digital Transformation at IE University, and now hacking it even more at Turing Dream. He writes every day on Medium.