The future of AI has nothing to do with chatbots
AI researchers believe the industry's fixation on large language models has created a kind of tunnel vision obscuring the path to truly intelligent machines

A version of this article originally appeared in Quartz's AI & Tech newsletter.
Yann LeCun helped create the technology that powers ChatGPT. Now he's betting his next act on the idea that large language models are a dead end.
In November, LeCun stepped away from Meta $META after more than a decade as the company's chief AI scientist. The timing was notable. Just six months earlier, Mark Zuckerberg had poured billions into a new research lab dedicated to building “superintelligence” using the same large language model approach that LeCun had grown increasingly skeptical of.
To lead it, Zuckerberg tapped Scale AI's Alexandr Wang, a 28-year-old who had built a data-labeling company, not an AI research program. Rather than stick around as the elder statesman in someone else's lab, the 65-year-old Turing Award winner decided to start fresh.
His new venture, Advanced Machine Intelligence (AMI) Labs, launched its website in January with a pointed declaration: "Real intelligence does not start in language. It starts in the world."
LeCun isn't alone in this conviction. A growing chorus of prominent AI researchers believes the industry's fixation on language models has created a kind of tunnel vision and that the path to truly intelligent machines runs through something called "world models."
The case against chatbots
The problem with today's AI systems, LeCun has argued for years, is that they don't actually understand anything. Large language models are trained to predict what word comes next in a sequence.
They've gotten remarkably good at this, good enough to write poetry, debug code, and pass medical licensing exams. But they're fundamentally pattern-matching machines that have no internal representation of how reality works.
This shows up in obvious ways. Ask a video generation AI to show someone setting down a cup of coffee and picking it up a minute later, and the cup might change color, move across the table, or vanish entirely. The AI has no object permanence — a cognitive skill most children master by age one.
LeCun believes this limitation isn't just an engineering problem to be solved with more data and bigger models. In his view, current systems can't plan ahead because they don't understand cause and effect in the physical world.
They've never touched anything, never navigated a room, never dealt with the consequences of their actions. They've read everything but experienced nothing.
A different vision is taking shape
What LeCun and others propose instead are AI systems built around internal models of how the world actually works. Think of how you can imagine reaching for a coffee mug before you do it, predicting that it will be warm and heavy, anticipating how your arm needs to move.
Today's AI can write a poem about coffee. It can't pour you a cup. That's the kind of understanding these researchers want to build into machines.
The concept has attracted serious talent and money. Fei-Fei Li, sometimes called the "godmother" of AI, founded World Labs, which recently launched a product called Marble that generates explorable 3D environments from text prompts.
Google $GOOGL DeepMind has developed Genie 3, a system that creates photorealistic virtual worlds where AI agents can learn through trial and error. Nvidia $NVDA's Jensen Huang has championed world models as the key to "physical AI" that can operate robots and autonomous vehicles. Even Elon Musk's xAI has joined the race, hiring staff from Nvidia to build world models for video games.
Still, for all the enthusiasm, world models remain a side bet. The biggest checks in AI are still going to language model companies. OpenAI, Anthropic, and Google are pouring tens of billions into scaling up the very approach LeCun says is doomed.
LeCun isn't the first to raise these concerns. Researchers have questioned whether language models can achieve true intelligence since GPT-3's debut, and the world models concept itself dates back decades.
World models face their own obstacles. Building accurate simulations is expensive, and it's unclear whether virtual training environments can ever capture the full messiness of reality. There's also no guarantee that skills learned in simulation will transfer cleanly to the physical world.
But LeCun does have a history of backing unfashionable ideas. In the 1980s, he championed neural networks when much of the field had moved on. And if he's right about world models, Meta may end up buying what it refused to build. LeCun has already floated his old employer as AMI's first potential customer.