New Neuro-Symbolic Modeling Approach Gets Closer to Human-level Concept Learning
Brenden Lake, CDS Assistant Professor of Data Science and Professor of Psychology in the NYU Department of Psychology, recently collaborated with Reuben Feinman, PhD student in the Human & Machine Learning Lab at NYU and a Google PhD Fellow in Computational Neuroscience, on “Learning Task-General Representations with Generative Neuro-Symbolic Modeling”.

Reuben, the paper’s lead author, presented the work on May 5th at ICLR 2021, the Ninth International Conference on Learning Representations. ICLR is a conference focused on the advancement of “representation learning”, primarily through an approach commonly referred to as “deep learning.”
The paper addresses how people learn rich, general-purpose representations from raw perceptual inputs, a standard that current machine learning approaches do not meet. Symbolic models can capture the compositional and causal knowledge that allows for flexible generalization, but they struggle to learn from raw inputs and depend on strong abstractions and simplifying assumptions. Neural network models can learn directly from raw data, but they struggle to capture compositional and causal structure and generally must be retrained to tackle new tasks.

To address this gap, the team developed a model that joins these two traditions (symbolic models and neural network models) to capture rich compositional and causal structure while learning from raw inputs. Their generative neuro-symbolic (GNS) model of handwritten character concepts uses “the control flow of a probabilistic program, coupled with symbolic stroke primitives and a symbolic image renderer, to represent the causal and compositional processes by which characters are formed.”1 They apply the model to the Omniglot challenge of human-level concept learning — a challenge that requires performing many tasks with a single model — “using a background set of alphabets to learn an expressive prior distribution over character drawings”.1 In subsequent evaluations, the GNS model uses probabilistic inference to learn rich conceptual representations from a single training image that generalize to four distinct tasks, succeeding where previous work fell short.
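To give a flavor of what “generative neuro-symbolic” means in practice, the sketch below is a minimal, hypothetical Python illustration — not the authors’ implementation — of the general idea: symbolic control flow decides discrete structure (how many strokes, where each one attaches), stand-in “neural” proposal functions fill in continuous stroke details, and a symbolic renderer composes the strokes into an image. All function names and the random stand-ins for trained networks are assumptions made purely for illustration.

```python
# Hypothetical sketch of a generative neuro-symbolic character model.
# Symbolic parts: number of strokes, attachment rule, renderer.
# "Neural" part: neural_stroke_proposal, here a random stand-in for a trained network.
import numpy as np

rng = np.random.default_rng(0)

def sample_num_strokes():
    # Symbolic choice: a discrete number of parts (in a real model, a learned prior).
    return rng.integers(1, 4)

def neural_stroke_proposal(start):
    # Stand-in for a trained neural network that proposes a stroke trajectory.
    steps = rng.integers(5, 15)
    deltas = rng.normal(0, 2, size=(steps, 2))
    return start + np.cumsum(deltas, axis=0)

def render(strokes, size=28):
    # Symbolic renderer: paint each trajectory point onto a binary image grid.
    img = np.zeros((size, size))
    for traj in strokes:
        for x, y in np.clip(traj, 0, size - 1).astype(int):
            img[y, x] = 1.0
    return img

def sample_character():
    strokes = []
    for _ in range(sample_num_strokes()):
        # Symbolic attachment rule: start a new stroke near the end of the previous one.
        start = strokes[-1][-1] if strokes else rng.uniform(5.0, 23.0, size=2)
        strokes.append(neural_stroke_proposal(start))
    return render(strokes)

image = sample_character()
print(image.shape)  # (28, 28) binary character image
```

In the actual GNS model, the neural components are trained on a background set of alphabets, and probabilistic inference over this kind of generative process is what lets a single training image support several downstream tasks.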
More generally, Brenden and Reuben’s research focuses on building models capable of solving problems that are easy for people but difficult for machines. To learn more about Brenden’s research, please visit his website. To learn more about Reuben’s work, please visit his website.
To read the paper in its entirety, please visit the paper’s arXiv page. For more information on ICLR, please visit the ICLR website.
References:
1. Feinman, R., & Lake, B. M. (2021). Learning Task-General Representations with Generative Neuro-Symbolic Modeling. ICLR 2021.
By Ashley C. McDonald