What is a Goal? Advancing Machine Agency While Understanding Human Goal Creation
Even toddlers can effortlessly invent complex goals and games, yet until now we have lacked computational models that capture this core human ability. New research from CDS PhD student Guy Davidson, CDS Associate Professor Brenden Lake, and CDS-affiliated Professor Todd M. Gureckis, together with NYU Computer Science PhD student Graham Todd and NYU Computer Science Professor Julian Togelius, bridges this gap by introducing a framework that explains how humans generate goals as if they were writing tiny, executable computer programs.
In their paper, “Goals as reward-producing programs,” recently published in Nature Machine Intelligence, the authors described how they gathered data from participants tasked with inventing single-player games in a virtual room filled with everyday objects. Participants spontaneously created diverse goals — such as stacking blocks in elaborate configurations or bouncing balls off walls and into bins — which were then translated into a structured programming language.
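To give a rough sense of what "goals as reward-producing programs" means, here is a minimal sketch in Python. The names and structure (GameState, the reward functions) are illustrative assumptions for this article, not the structured language the authors actually used; the point is only that a goal can be run against a game state to produce a score.

```python
# Hypothetical sketch: a goal expressed as a tiny reward-producing program.
# GameState and the reward functions below are illustrative inventions,
# not the domain-specific language used in the paper.
from dataclasses import dataclass


@dataclass
class GameState:
    """Minimal snapshot of the virtual room at one moment."""
    balls_in_bin: int
    blocks_stacked: int


def throw_balls_into_bin_reward(state: GameState) -> int:
    """Goal as a program: one point per ball that landed in the bin."""
    return state.balls_in_bin


def stack_blocks_reward(state: GameState) -> int:
    """A second simple goal: reward grows with the height of a block stack."""
    return state.blocks_stacked


# Executing the "program" on a game state yields a score, which is how a
# stated goal doubles as a reward function an agent could optimize.
print(throw_balls_into_bin_reward(GameState(balls_in_bin=3, blocks_stacked=0)))  # 3
```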
“Humans have an innate capacity to craft intricate and novel objectives,” said Davidson, who co-led the paper with Graham Todd. “What we’ve done is provide a computational model that attempts to mimic this process, allowing machines to generate goals that humans find understandable and fun.”
Central to the research was the insight that human goals are fundamentally compositional, meaning people naturally combine simple actions into more elaborate tasks. This allowed the researchers to build a model capable of learning from human-generated examples and creating new, equally intricate goals by combining and recombining these basic components.
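As a toy illustration of that compositionality, simple reward programs like the ones sketched above can be combined into new, more elaborate goals. The combinators below are invented for this sketch and are not the authors' model or grammar; they only show how recombining basic components yields a novel game.

```python
# Hypothetical illustration of compositional goal construction: primitive
# reward programs are combined and recombined into a more elaborate goal.
from typing import Callable, Dict

State = Dict[str, int]           # e.g. {"balls_in_bin": 2, "blocks_stacked": 3}
Reward = Callable[[State], int]  # a goal as a reward-producing program


def count(feature: str) -> Reward:
    """Primitive goal: one point per unit of a named feature."""
    return lambda state: state.get(feature, 0)


def at_least(goal: Reward, threshold: int) -> Reward:
    """Derived goal: succeed (score 1) only once a sub-goal reaches a threshold."""
    return lambda state: int(goal(state) >= threshold)


def both(a: Reward, b: Reward) -> Reward:
    """Derived goal: earn reward from two sub-goals at once."""
    return lambda state: a(state) + b(state)


# Recombining primitives into a novel game:
# "stack at least three blocks, and score a point for each ball in the bin."
novel_game = both(at_least(count("blocks_stacked"), 3), count("balls_in_bin"))
print(novel_game({"balls_in_bin": 2, "blocks_stacked": 3}))  # 1 + 2 = 3
```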
Evaluations by human raters found that the goals generated by the researchers’ model were often indistinguishable from those created by actual participants. These model-generated games scored highly on attributes like creativity and human-likeness, though not every program succeeded equally.
Davidson explained one of the study’s intriguing findings: “We noticed that the most human-like and enjoyable goals are coherent — meaning their different parts make intuitive sense together. Games generated by the model sometimes struggled with this coherence if the individual components didn’t naturally fit.”
Beyond games, this research may inform the development of AI that can independently set and pursue objectives in real-world scenarios. Davidson and colleagues plan to explore how their framework could empower artificial agents to adaptively create goals for complex environments, potentially moving AI closer to the flexible, creative thinking characteristic of human intelligence.
By Stephen Thomas