What Can AI Systems Learn When Given the Data from Just One Child?

NYU Center for Data Science
Feb 1, 2024

The efficiency of human language acquisition, achieved with relatively minimal data, stands in stark contrast to AI systems, which often require vast corpora of text. This disparity raises a crucial question: what can AI systems learn when given the data from just one child? This query forms the core of a new study, “Grounded language acquisition through the eyes and ears of a single child,” published in Science by Wai Keen Vong, a Research Scientist at CDS, with his Principal Investigator, CDS Assistant Professor of Psychology and Data Science Brenden Lake, as well as current CDS PhD student Wentao Wang and former CDS Research Scientist Emin Orhan.


The study examines machine learning alongside human language acquisition, challenging existing assumptions about what language learning requires and opening new avenues of inquiry.

The team’s research, which centers on the gap between the data requirements of human learners and AI systems, takes an unusual approach. “We’ve seen huge advances in AI systems that learn and use language. But they need billions, sometimes trillions of words. In contrast, humans learn from tens to hundreds of millions of words over their lifetime,” Vong explained. The study draws its training data from a single child in the SAYCam dataset, offering a new perspective on whether language is learnable by AI systems from such limited input. This approach aims to unravel the mechanisms that enable humans to learn language so efficiently.

One of the study’s most important revelations is the potential overestimation of the role of certain cognitive biases or constraints — called, in humans, inductive biases — long thought to be indispensable for language acquisition. Vong and his co-authors’ study employs multimodal neural networks, which are essentially generic learning mechanisms, to demonstrate that such networks can discern mappings between words and their visual referents without any such built-in biases. This finding challenges the traditional view that specialized cognitive machinery is necessary for language learning. “We’re using these networks to feed in this data from a single child, demonstrating that learning these mappings is possible with real, genuine raw data,” Vong said.

Brenden Lake highlighted the broader implications of their work. “There have been remarkable recent advances in AI. The question we wanted to ask was, do these advances tell us anything about human learning and development?” Lake’s interest in the study stems from the desire to bridge the gap between AI advancements and our comprehension of human cognitive development. He noted that while AI models and humans may both learn to speak fluently, the pathways to this fluency could be fundamentally different, given the vast disparity in data requirements.

Lake also touched on the study’s contribution to ongoing debates in cognitive science, where children’s swift language learning from limited data stands against AI’s extensive data needs. “Researchers have proposed many different ingredients and inductive biases to guide children in narrowing down possibilities. Our CVCL model puts all that aside—for now—and trains an AI system with none of those built-in biases, to see what is possible to learn,” Lake said, referring to the model this paper introduces, “Child’s View for Contrastive Learning.” The approach demonstrates that today’s AI tools can make a start on language learning from data similar to what a single child experiences.
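The article doesn’t detail CVCL’s architecture, but the contrastive idea it names — pulling the embeddings of co-occurring video frames and utterances together while pushing mismatched pairs apart — can be sketched with a standard symmetric InfoNCE-style loss. The function below is an illustrative assumption in plain NumPy, not the paper’s actual implementation; the embedding sizes and the `temperature` value are placeholders.

```python
import numpy as np

def contrastive_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    Row i of image_embs is assumed to co-occur with row i of text_embs
    (a video frame and the utterance heard at the same moment); every
    other pairing in the batch acts as a negative example.
    """
    # L2-normalize so the dot product is cosine similarity
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))      # true pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing this loss requires no word-specific built-in biases: the only supervision is co-occurrence, which is what makes it a natural fit for raw first-person video paired with transcribed speech.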

The team’s results call into question the position that inductive biases are necessary for language learning, and they are already seeing the paper’s effect on other researchers. “We had one reviewer who a priori didn’t think that this kind of learning was possible,” said Vong. “Seeing our paper flipped their understanding.”

The study by Vong, Lake, and their colleagues is a significant step in understanding the interplay between language, learning, and cognition, and it offers insight into how AI might mimic human learning processes more closely. Through this collaborative effort, the CDS-based team has contributed to our understanding of both the human mind and AI systems, paving the way for future work in the field.

By Stephen Thomas


