Yann LeCun on Lex Fridman’s Podcast: The Road to AGI Runs Through Open Source AI

NYU Center for Data Science
3 min read · Apr 5, 2024

In a provocative podcast episode, Yann LeCun, CDS founding director and a trailblazing AI researcher, made a bold claim: today’s large language models (LLMs) won’t take us all the way to artificial general intelligence (AGI). Speaking with Lex Fridman, LeCun argued that while LLMs like GPT-4 and Meta’s own LLaMA models are useful and will enable many applications, they are fundamentally limited.

LeCun believes that LLMs lack the key capabilities of truly intelligent systems, such as reasoning, planning, and persistent memory. “LLMs can do none of those, or they can only do them in a very primitive way, and they don’t really understand the physical world,” he said. “As a path towards human-level intelligence, they’re missing essential components.”

LeCun highlighted the sheer volume of information humans absorb through sensory experience compared to language alone. “What that tells you is that through sensory input, we see a lot more information than we do through language, and that despite our intuition, most of what we learn and most of our knowledge is through our observation and interaction with the real world, not through language,” he explained. “Everything that we learn in the first few years of life, and certainly everything that animals learn has nothing to do with language.”
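The numbers behind that claim are worth spelling out. Here is a back-of-envelope version of the comparison, using rough figures of the kind LeCun has cited elsewhere; the bandwidth, hour, and token constants below are assumptions for illustration, not measurements:

```python
# Back-of-envelope comparison of sensory vs. textual input.
# All constants are rough, assumed figures for illustration only.
OPTIC_NERVE_BYTES_PER_SEC = 20e6   # assume ~20 MB/s of visual input
WAKING_HOURS_BY_AGE_FOUR = 16_000  # assume ~16,000 waking hours in 4 years

visual_bytes = OPTIC_NERVE_BYTES_PER_SEC * WAKING_HOURS_BY_AGE_FOUR * 3600

# Assume an LLM corpus of ~1e13 tokens at ~2 bytes per token.
llm_bytes = 1e13 * 2

print(f"visual input by age 4: ~{visual_bytes:.1e} bytes")  # ~1.2e15
print(f"LLM training text:     ~{llm_bytes:.1e} bytes")     # ~2.0e13
print(f"ratio: ~{visual_bytes / llm_bytes:.0f}x")           # ~58x
```

On these assumptions, a four-year-old has taken in on the order of fifty times more raw data through vision alone than a frontier LLM sees in its entire training text, which is the intuition behind LeCun’s point.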

While some argue that language alone contains enough compressed knowledge to enable reasoning and world understanding, LeCun is skeptical. He believes AI systems need to be grounded in reality, even if simulated, to acquire the rich mental models humans use for physical tasks. “There’s a lot of tasks that we accomplish where we manipulate a mental model of the situation at hand, and that has nothing to do with language,” LeCun said. “Everything that’s physical, mechanical, whatever, when we build something, when we accomplish a task, a menial task of grabbing something, et cetera, we plan our action sequences, and we do this by essentially imagining the outcome of a sequence of actions that we might take, and that requires mental models that don’t have much to do with language.”

LeCun’s vision for the future of AI revolves around joint embedding predictive architectures (JEPA) that learn abstract representations of the world, enabling planning and reasoning. Rather than autoregressively generating tokens or pixels, a JEPA makes its predictions in representation space, and LeCun recommends training such energy-based models with regularized, non-contrastive objectives instead of autoregressive generation. LeCun also emphasizes the importance of open source AI. “I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else,” he warned, arguing that open platforms are essential for fostering a diverse ecosystem of AI systems representing different languages, cultures, and values.
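To make that architecture concrete, here is a minimal, hypothetical sketch of a JEPA-style training step in PyTorch. It is not Meta’s implementation: the toy MLP encoders, the frozen target branch (standing in for the slow-moving target encoder used in practice), and the VICReg-style variance penalty serving as the regularizer are all illustrative assumptions.

```python
# Minimal JEPA-style sketch (illustrative, not Meta's implementation).
# Assumptions: toy MLP encoders, random tensors standing in for two views
# of an observation, and a variance penalty as the regularizer that keeps
# the representations from collapsing to a constant.
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    def __init__(self, input_dim=64, embed_dim=32):
        super().__init__()
        # Encoders map raw observations to abstract representations.
        self.context_encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
        self.target_encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
        # The predictor operates in representation space, not token/pixel space.
        self.predictor = nn.Linear(embed_dim, embed_dim)

    def forward(self, context, target):
        s_x = self.context_encoder(context)
        with torch.no_grad():  # target branch is held fixed during this step
            s_y = self.target_encoder(target)
        return self.predictor(s_x), s_y

def jepa_loss(pred, s_y, reg_weight=1.0):
    # "Energy": squared distance between predicted and actual target embeddings.
    energy = (pred - s_y).pow(2).mean()
    # Regularizer: push each embedding dimension's std above 1 so the
    # encoder cannot minimize the energy by mapping everything to one point.
    variance_penalty = torch.relu(1.0 - pred.std(dim=0)).mean()
    return energy + reg_weight * variance_penalty

model = ToyJEPA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
context, target = torch.randn(256, 64), torch.randn(256, 64)  # stand-in data
opt.zero_grad()
pred, s_y = model(context, target)
loss = jepa_loss(pred, s_y)
loss.backward()
opt.step()
```

The design choice this sketch tries to capture is the one LeCun stresses on the podcast: because prediction error is computed between embeddings rather than raw outputs, the model can discard unpredictable details of the world instead of spending capacity reconstructing them.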

While LeCun believes AGI is still far off and will emerge gradually rather than suddenly, he is optimistic about AI’s potential to amplify human intelligence. “We can make humanity smarter with AI,” he said. “AI basically will amplify human intelligence. It’s as if every one of us will have a staff of smart AI assistants.” Just as the printing press revolutionized access to knowledge, LeCun believes AI assistants could be a transformative force for humanity — but only if we embrace open platforms and resist the temptation to restrict access in the name of safety or protecting entrenched interests.

As one of the key figures shaping the future of AI, LeCun’s perspective is sure to spark further debate and shape the trajectory of the field. But one thing is clear: in LeCun’s vision, the path to AGI runs through open source development and AI systems deeply grounded in the physical world.

By Stephen Thomas
