MIT Technology Review features Yann LeCun’s vision for the future of machine learning
The AI researcher shares his innovative vision for the future of machine learning
Yann LeCun, a CDS faculty member and founding director, and Chief AI Scientist at Meta’s AI lab, wants machines to operate with common sense. In a recent MIT Technology Review article titled “Yann LeCun has a bold new vision for the future of AI,” the influential AI researcher describes his approach to getting there. The piece, written by Melissa Heikkilä and Will Douglas Heaven, explains how LeCun’s vision could open the door to advances in artificial general intelligence (AGI): machines that can reason, plan, and learn like humans. Since the article’s release, LeCun has posted a position paper online, “A Path Towards Autonomous Machine Intelligence,” that outlines the architecture and training paradigms he believes are necessary to build independently intelligent technology.
In addition to his work for the technology conglomerate, LeCun is the Silver Professor of Data Science, Computer Science, Neural Science, and Electrical and Computer Engineering at New York University. He directed NYU’s initiative in data science from 2012 to 2014 and was a founding director of the NYU Center for Data Science. In 2018, he was a recipient of the ACM Turing Award, which recognizes lasting contributions to the field of computer science, for his groundbreaking work on deep learning.
In LeCun’s research, “common sense” refers to the intuitive reasoning used by humans and animals. He describes a “world model”: an internal simulation that animal brains run of the world around them. Using a world model, animals make guesses based on their observations, which enables them to draw conclusions from incomplete information. For example, if we hear glass shattering in the kitchen, we can make an educated guess that someone dropped a cup.
While this cognitive process comes naturally to humans, teaching this type of reasoning to machines is far more challenging. Current neural networks cannot pick out such patterns without being shown thousands of images. LeCun’s new approach is built around a neural network that would be able to process the world at varying levels of detail.
“Ditching the need for pixel-perfect predictions, this network would focus only on those features in a scene that are relevant for the task at hand,” said the article in MIT Technology Review. “LeCun proposes pairing this core network with another, called the configurator, which determines what level of detail is required and tweaks the overall system accordingly.”
LeCun has built an early version of his world model that can perform basic object recognition, and he is now working on teaching it to make predictions. His vision places the world model and the configurator as two key elements in a larger system, or cognitive architecture, that would include other neural networks. The approach raises significant questions about real-world application because, as LeCun points out in the article, he does not yet know how to build what he is describing. “We need to figure out a good recipe to make this work, and we don’t have that recipe yet,” LeCun told MIT Technology Review.
Questions remain about how this kind of technology would function in society, and the development of human-level intelligence is still far off. LeCun hopes his work will enable others to keep moving the research forward. “This is something that is going to take a lot of effort from a lot of people,” said LeCun in the article. “I’m putting this out there because I think ultimately this is the way to go.”
By Meryl Phair