Meta-Learning for Compositionality: A Path to More Human-like Learning in Machines

NYU Center for Data Science
Dec 6, 2023

Artificial Intelligence (AI) has long aspired to mimic the unique human capacity for understanding and creating novel combinations from known elements. This quest takes a significant step forward with pioneering research by CDS Assistant Professor of Psychology and Data Science Brenden Lake and ICREA research professor Marco Baroni, which introduces Meta-Learning for Compositionality (MLC). The paper, "Human-like systematic generalization through a meta-learning neural network," has been published in Nature. MLC equips neural networks with systematic compositionality: the ability to learn new words and use them in rule-like combinations with other words. This capacity is a cornerstone of human thought and language. Creating an AI model that achieves parity with humans addresses a critical challenge in AI, is a step toward resolving a decades-old debate within the field, and may help us understand the mechanisms by which humans perform the same feat.

Compositionality is the idea that the meaning of a complex expression is determined by its parts and the rules for combining them. This principle allows humans to understand and create a wide array of complex expressions from a limited set of elements. For example, understanding the meanings of words like “red,” “ball,” and “threw,” along with grammar rules, enables us to comprehend the sentence, “The boy threw the red ball.” Compositionality enables the creation and understanding of countless new sentences using a finite vocabulary, a key feature of human language.
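The principle can be made concrete with a toy sketch: give each word a small meaning, and let a combination rule build the meaning of a phrase from its parts. The lexicon entries and the merge rule below are purely illustrative assumptions, not anything from the paper.

```python
# A minimal sketch of compositionality: the meaning of a phrase is
# built from the meanings of its parts plus a rule for combining them.
# The lexicon and the merge rule are illustrative inventions.

lexicon = {
    "red": {"color": "red"},
    "ball": {"object": "ball"},
    "threw": {"action": "throw"},
}

def combine(*parts):
    """Combination rule: merge the feature dictionaries of the parts."""
    meaning = {}
    for part in parts:
        meaning.update(part)
    return meaning

# "red ball" composes the meanings of "red" and "ball"
print(combine(lexicon["red"], lexicon["ball"]))
# {'color': 'red', 'object': 'ball'}
```

Because the rule applies to any parts, a finite lexicon plus a handful of rules yields an open-ended set of interpretable phrases, which is exactly the productivity the principle describes.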

AI models have struggled with this. Even as overall language generation capabilities have surged in recent years, compositionality has remained elusive. Even GPT-4 and similar state-of-the-art models tend to approach language as a pattern recognition problem rather than genuinely understanding compositionality. This can lead to issues where the model generates plausible-sounding language that is nonsensical or contextually inappropriate.

Lake and Baroni’s Nature paper represents a significant advancement in addressing the challenge of compositionality in AI. Lake and Baroni demonstrate that neural networks can achieve human-like systematic generalization when optimized for their compositional skills. By guiding the training of neural networks through a dynamic stream of compositional tasks, Lake and Baroni’s MLC approach allows AI systems to learn how to effectively combine known components in novel ways.

MLC’s innovation lies in its unique training methodology. It is a way of optimizing a neural network through a series of dynamic, compositional tasks, akin to the human learning process. This novel approach employs high-level guidance and real-world examples, enabling the network to develop essential learning skills through meta-learning.
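The core idea of such episode-based training can be sketched with a toy generator: each episode samples a fresh mapping from made-up words to output symbols, so no single mapping can be memorized and the learner is pushed to acquire the *skill* of inferring and composing rules. The word lists, symbols, and the "twice" query below are illustrative assumptions, not the paper's actual grammar.

```python
import random

# Toy sketch of MLC-style episode generation. Each episode draws a new
# word -> symbol assignment; the study examples reveal the mapping, and
# the query asks for a novel combination under that same mapping.

PRIMITIVES = ["dax", "wif", "lug", "zup"]   # made-up words
SYMBOLS = ["RED", "BLUE", "GREEN", "YELLOW"]  # output symbols

def make_episode(rng):
    # Fresh random word -> symbol assignment for this episode only
    mapping = dict(zip(PRIMITIVES, rng.sample(SYMBOLS, len(PRIMITIVES))))
    # Study examples: each primitive word paired with its output
    study = [(word, [mapping[word]]) for word in PRIMITIVES]
    # Query: an unseen combination, e.g. "dax twice" -> the symbol repeated
    word = rng.choice(PRIMITIVES)
    query = (f"{word} twice", [mapping[word]] * 2)
    return study, query

rng = random.Random(0)
study, query = make_episode(rng)
```

A network trained across a long stream of such episodes cannot succeed by memorizing any one vocabulary; it must learn the general procedure of reading off a mapping from study examples and applying it compositionally to the query.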

Using a standard transformer architecture, MLC has achieved significant success on machine learning benchmarks like SCAN and COGS, which test the systematic lexical generalization of new words and combinations. MLC significantly outperforms more standard neural networks, achieving near-perfect accuracy and reflecting nuanced human learning behavior.
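To give a flavor of what these benchmarks demand, the interpreter below handles a small, hand-written fragment of SCAN-style commands (it is a simplification for illustration, not the full SCAN grammar). The benchmark's systematic splits ask whether a model that has seen, say, "walk twice" and the bare word "jump" can correctly execute the never-seen combination "jump twice".

```python
# A hand-written interpreter for a small fragment of SCAN-style
# commands, illustrating the compositional structure the benchmark
# tests (simplified; not the full SCAN grammar).

ACTIONS = {"jump": "JUMP", "walk": "WALK", "run": "RUN", "look": "LOOK"}

def interpret(command):
    words = command.split()
    if len(words) == 1:
        return [ACTIONS[words[0]]]
    head, modifier = words
    if modifier == "twice":
        return interpret(head) * 2
    if modifier == "thrice":
        return interpret(head) * 3
    if modifier == "left":
        return ["LTURN"] + interpret(head)
    if modifier == "right":
        return ["RTURN"] + interpret(head)
    raise ValueError(f"unsupported command: {command}")

print(interpret("jump twice"))  # ['JUMP', 'JUMP']
print(interpret("walk left"))   # ['LTURN', 'WALK']
```

A learner generalizes systematically if, after seeing each rule applied to other words, it applies those same rules to a new word it has only seen in isolation.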

Lake and Baroni’s breakthrough also has ramifications for a decades-old debate in AI, between symbolic AI and neural nets. Symbolic AI is a kind of AI in which each rule of cognition is, theoretically, explicitly represented — requiring each new rule to be discovered and tested individually, and to be hand-coded in some cases. It’s not talked about as much anymore, especially since neural nets — rebranded in the 2010s as ‘deep learning’ and the basis of all modern LLMs like ChatGPT — have produced such dramatic results. But there was still a strong case that compositionality was better handled by symbolic AI.

“This is something that symbolic AI has prided itself as a tremendous strength, because you don’t need to think about where your systematic compositionality is going to come from — in symbolic AI systems, it’s built in,” said Lake. “But [compositionality] is not something that falls out naturally from the architecture of neural network models.”

Lake and Baroni’s paper demonstrates, for the first time, that neural networks, when optimized through MLC, can begin closing the gap with symbolic AI with respect to systematic generalization. This breakthrough answers one lingering criticism of neural networks and strengthens the case for the neural-network paradigm.

However, Lake acknowledges the complexities in this comparison, especially in relation to humans, noting that the current model’s skill acquisition, through extensive task-specific practice, differs from humans’ more varied learning experiences. He points out that human cognition is not purely systematic, often influenced by biases reflecting natural language structures, a nuance that neither traditional symbolic models nor traditional neural networks fully capture.

Lake is optimistic that this research will help us understand human cognition better. The underlying cognitive mechanisms that enable compositionality in humans are not fully understood, and Lake’s research provides a model through which the cognitive processes of humans can be examined. By studying how these AI systems develop and apply compositional rules, researchers can gain insights into the potential processes that human brains might use for similar tasks. For instance, observing how an AI system generalizes rules from limited examples can shed light on how humans can quickly learn and apply new linguistic structures or vocabularies. Furthermore, the MLC model’s ability to navigate between the rigid application of rules and flexible language use might provide a framework for studying how human cognition achieves this balance. Human language is not just about applying rules systematically; it also involves understanding and adapting to context, using idiomatic expressions, and sometimes even breaking rules for effect or clarity.

Lake and Baroni’s paper has been widely covered by science and technology news outlets. Nature itself published a piece about the paper, “AI ‘breakthrough’: neural net has human-like ability to generalize language,” and outlets including Scientific American, Live Science, and El País have all published pieces on it.

Lake and Baroni’s research marks a significant step in AI’s journey towards human-like learning and adaptability. By addressing the challenge of systematic compositionality, MLC opens new avenues in AI research and application, bridging the gap between artificial and natural intelligence. “These goals are really intertwined,” says Lake. “We want to understand how this essential aspect of human intelligence works.” This work not only propels AI forward but also offers deeper insights into the complex workings of the human mind, perhaps helping to bring us towards a future where AI understands and collaborates with us more effectively.

By Stephen Thomas


