Memory Mosaics, a New Transparent AI Model, Illustrates How Transformers Work

NYU Center for Data Science
Aug 30, 2024


Large language models have revolutionized artificial intelligence, but their inner workings remain a black box. Now, a team of researchers has developed a new architecture that sheds light on how these models learn and process information.

CDS PhD student Jianyu Zhang, along with Léon Bottou and other colleagues from Meta, introduced “Memory Mosaics” in a recent paper. This novel approach deconstructs the complex mechanisms of transformer models into simpler, more transparent components.

“The purpose of this work is not to create a new architecture, but to understand how current learning systems work,” Zhang said. By simplifying the structure and making each component explicit, the researchers aimed to demystify the learning process of transformers.

Memory Mosaics replace both the traditional key-value-query attention mechanism and the feed-forward network blocks in transformers with a simpler key-value associative memory structure. This change allows for a more intuitive understanding of how the model processes and retrieves information. Zhang explained, “This learning system is simply a stack of associative memories, and thus it is much easier to understand.”
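In rough terms, an associative memory of this kind stores (key, value) pairs computed from the context and, given a new key, returns a weighted average of the stored values, with nearby keys weighted more heavily. The snippet below is a minimal sketch of that retrieval step using a Gaussian kernel; the tensor shapes, the bandwidth parameter `beta`, and the toy data are illustrative assumptions rather than the exact formulation in the paper.

```python
import torch

def associative_memory_retrieve(keys, values, query, beta=1.0):
    """Retrieve a value for `query` from stored (key, value) pairs by
    Gaussian kernel smoothing: nearby keys contribute more to the output.

    keys:   (T, d_k) keys extracted from the context so far
    values: (T, d_v) values associated with those keys
    query:  (d_k,)   key for the current position
    """
    # Squared distance between the query and every stored key.
    dist2 = ((keys - query) ** 2).sum(dim=-1)          # (T,)
    # Kernel weights; beta controls how sharply the memory focuses.
    weights = torch.softmax(-beta * dist2, dim=-1)     # (T,)
    # The retrieved estimate is a weighted average of stored values.
    return weights @ values                            # (d_v,)

# Toy usage: 5 stored pairs with 4-dim keys and 3-dim values.
keys = torch.randn(5, 4)
values = torch.randn(5, 3)
query = keys[2] + 0.01 * torch.randn(4)  # a query close to the third key
print(associative_memory_retrieve(keys, values, query))
```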

The team’s work also revealed a phenomenon they call “predictive disentanglement.” This principle explains how the learning process breaks down complex tasks into simpler, independent components. Zhang likened this to tracking the positions of three moons orbiting a single earth-like planet: “When we disentangle different moons during tracking, we learn fast.”
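One back-of-the-envelope way to see why disentangling helps (an illustration, not a calculation from the paper): if each moon repeats its motion with its own period, a predictor that memorizes the joint configuration of all three moons must cover a number of states that grows with the least common multiple of the periods, while three separate per-moon predictors need only one entry per position of each moon.

```python
from math import lcm

# Hypothetical orbital periods (in time steps) for the three moons.
periods = [8, 9, 25]

# A predictor that memorizes the *joint* configuration must see every
# combination of moon positions before the pattern repeats: the lcm.
joint_table_size = lcm(*periods)

# Disentangled predictors track each moon on its own, so the total
# memory needed is just the sum of the individual periods.
disentangled_table_size = sum(periods)

print(f"joint predictor:         {joint_table_size} states to memorize")
print(f"disentangled predictors: {disentangled_table_size} states to memorize")
# 1800 vs. 42 -- disentangling the moons makes the prediction task far simpler.
```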

Another promising aspect of Memory Mosaics is their potential for improved in-context learning. Traditional transformers, RNNs, and state-space models struggle to gain in-context learning abilities even after training on several thousand to tens of thousands of tasks. Memory Mosaics, however, can assimilate new information after training on only one hundred tasks. “This efficient in-context learning ability might open a new learning paradigm,” Zhang explained. “For example, to teach a model a new skill, such as Python, one could simply input a whole Python textbook at inference time, instead of fine-tuning model weights on the book.”
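Schematically, the contrast between the two routes might look like the sketch below; the `DummyModel` class and its methods are hypothetical placeholders, not an API from the paper or any particular library.

```python
class DummyModel:
    """Stand-in for a trained sequence model; the methods are placeholders."""

    def generate(self, prompt: str) -> str:
        # A real model would continue the prompt; here we just echo a stub.
        return f"<answer conditioned on {len(prompt)} characters of input>"

    def finetune(self, corpus: list[str]) -> None:
        # A real model would update its weights on the corpus.
        pass


def answer_in_context(model, textbook: str, question: str) -> str:
    # In-context route: place the whole textbook in the input at inference
    # time and let the model condition on it; no weights are updated.
    prompt = f"{textbook}\n\nQuestion: {question}\nAnswer:"
    return model.generate(prompt)


def answer_after_finetuning(model, textbook: str, question: str) -> str:
    # Conventional route: adapt the weights on the textbook first,
    # then query the fine-tuned model.
    model.finetune([textbook])
    return model.generate(f"Question: {question}\nAnswer:")


model = DummyModel()
print(answer_in_context(model, "…contents of a Python textbook…",
                        "What does a list comprehension do?"))
```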

While the immediate applications of Memory Mosaics remain in the realm of research, the insights gained could have far-reaching implications for the development of more efficient and interpretable AI systems. As Zhang and his colleagues continue to refine Memory Mosaics and scale the architecture up to billions of parameters, they hope to bridge the gap between the impressive capabilities of modern learning systems and our understanding of how they work.

By Stephen Thomas
