New Neuro-Symbolic Modeling Approach Gets Closer to Human-level Concept Learning

NYU Center for Data Science
May 28, 2021


Brenden Lake, CDS Assistant Professor of Data Science and Professor of Psychology in the NYU Department of Psychology, recently collaborated with Reuben Feinman, a PhD student in the Human & Machine Learning Lab at NYU and a Google PhD Fellow in Computational Neuroscience, on “Learning Task-General Representations with Generative Neuro-Symbolic Modeling”.

The project was presented at ICLR 2021, the ninth International Conference on Learning Representations, on May 5th by Reuben, who is the paper’s first author. ICLR is a conference focused on the advancement of “representation learning”, primarily through an approach commonly referred to as “deep learning.”

Ultimately, the paper examines how people learn rich, general-purpose representations from raw perceptual inputs, a standard that current machine learning approaches do not meet. Symbolic models can capture the compositional and causal knowledge that allows for flexible generalization, but they struggle to learn from raw inputs and depend on strong abstractions and simplifying assumptions. Neural network models can learn directly from raw data; however, they struggle to capture compositional and causal structure and generally must be retrained to tackle new tasks. To address this gap, the team developed a model that joins the two traditions, capturing rich compositional and causal structure while learning from raw inputs.

Their generative neuro-symbolic (GNS) model of handwritten character concepts uses “the control flow of a probabilistic program, coupled with symbolic stroke primitives and a symbolic image renderer, to represent the causal and compositional processes by which characters are formed.”1 They apply the model to the Omniglot challenge of human-level concept learning, a benchmark that requires performing many tasks with a single model, “using a background set of alphabets to learn an expressive prior distribution over character drawings”.1 In subsequent evaluations, the GNS model uses probabilistic inference to learn rich conceptual representations from a single training image that generalize to four unique tasks, succeeding where previous work fell short.
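To make the idea of a generative neuro-symbolic program concrete, here is a heavily simplified Python sketch. It is not the authors’ released code: all function names and distributions below are invented for illustration, and fixed toy distributions stand in for the neural networks that the paper trains on the Omniglot background set. What the sketch preserves is the control flow: a probabilistic program decides how many strokes to draw and where each begins conditioned on the canvas so far, and a symbolic renderer composes the strokes into an image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned components of a GNS-style generative model.
# In the actual paper these conditional distributions are neural networks;
# here they are simple fixed distributions so the program structure is clear.

def sample_num_strokes():
    # p(kappa): how many strokes make up the character
    return rng.integers(1, 4)

def sample_stroke_start(canvas):
    # p(y_i | canvas): where the next stroke begins, conditioned on the
    # partial drawing so far (a learned network in the real model)
    return rng.uniform(0, 1, size=2)

def sample_stroke_trajectory(start):
    # p(x_i | y_i): a short sequence of sub-stroke displacements
    n_steps = rng.integers(3, 8)
    steps = rng.normal(0.0, 0.05, size=(n_steps, 2))
    return start + np.cumsum(steps, axis=0)

def render(strokes, size=28):
    # Symbolic image renderer: rasterize stroke points onto a grid.
    img = np.zeros((size, size))
    for stroke in strokes:
        pts = np.clip((stroke * (size - 1)).astype(int), 0, size - 1)
        img[pts[:, 1], pts[:, 0]] = 1.0
    return img

def sample_character():
    """Control flow of the generative program: sample strokes one at a
    time, each conditioned on the canvas produced so far."""
    strokes = []
    canvas = np.zeros((28, 28))
    for _ in range(sample_num_strokes()):
        start = sample_stroke_start(canvas)
        strokes.append(sample_stroke_trajectory(start))
        canvas = render(strokes)
    return strokes, canvas

if __name__ == "__main__":
    strokes, image = sample_character()
    print(f"Sampled {len(strokes)} strokes; image shape {image.shape}")
```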

More generally, Brenden and Reuben’s other research focuses on building models capable of solving problems that are easier for people than they are for machines. To learn more about Brenden’s research, please visit his website. To learn more about Reuben’s work, please visit his website.

To read the paper in its entirety, please visit the paper’s arXiv page. For more information on ICLR, please visit the ICLR website.

References:

  1. Feinman, R. and Lake, B. M. “Learning Task-General Representations with Generative Neuro-Symbolic Modeling.” ICLR 2021.

By Ashley C. McDonald
