From Academia to Industry: How a 2018 Paper Foreshadowed OpenAI’s Latest Innovation

NYU Center for Data Science
Oct 16, 2024


In the wake of OpenAI’s announcement of their new o1 model, CDS Professor of Computer Science and Data Science Kyunghyun Cho tweeted, “Phew, did you say multiple models, inference time adaptive compute, multimodality and language?” The tweet linked to a paper from 2018, hinting at the foundational research behind these now-commercialized ideas.

In the fast-paced world of artificial intelligence, groundbreaking ideas can sometimes take years to materialize in commercial products. A prime example of this is the connection between OpenAI’s recent o1 model and research published six years ago by academics at CDS.

Back in 2018, recent CDS PhD grad Katrina Drozdov (née Evtimova), Cho, and their colleagues published a paper at ICLR called “Emergent Communication in a Multi-Modal, Multi-Step Referential Game.” This influential work, now cited over 100 times, explored how AI agents could engage in dialogues with “variable conversation length” to solve collaborative tasks.

“We wanted to build a deep learning system that requires variable length of compute at test time in order to produce an outcome,” Drozdov said, reflecting on the work. “The setting of referential games was an ideal test bed for this complex system.”

To do this, the team, which also included NYU Computer Science MS grad Andrew Drozdov (now a Research Scientist at Databricks) and former FAIR Research Scientist Douwe Kiela (now CEO and founder of Contextual AI), created a game in which two AI agents must work together to identify an object. One agent sees an image of the object, while the other has access to text descriptions. The agents exchange messages back and forth, with the conversation length varying based on how challenging the particular object is to identify.

Interestingly, much like humans, the AI agents engaged in longer conversations when dealing with more difficult objects. For example, identifying a tiger — with its easy-to-grasp, trademark stripes — required only a brief exchange, while a sloth, with less obviously distinct attributes, necessitated a lengthier dialogue.
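
For readers curious about what such a game looks like in code, here is a minimal conceptual sketch in PyTorch. It is not the paper's actual architecture or training setup: the class names, layer sizes, and the confidence-threshold stopping rule (standing in for a learned decision to end the conversation) are illustrative assumptions, and the candidate text descriptions are folded into a simple guessing head rather than encoded explicitly.

```python
import torch
import torch.nn as nn

# Conceptual sketch of a multi-step referential game.
# All names and hyperparameters here are illustrative placeholders.

MAX_STEPS = 10        # hard cap on conversation length
STOP_THRESHOLD = 0.9  # receiver stops once it is confident enough

class ImageAgent(nn.Module):
    """Sender: sees image features of the target object and answers messages."""
    def __init__(self, hidden=128, vocab=32):
        super().__init__()
        self.encoder = nn.Linear(2048, hidden)          # image features -> state
        self.speak = nn.Linear(hidden + vocab, vocab)   # state + question -> answer

    def forward(self, image_feats, incoming_msg):
        state = torch.tanh(self.encoder(image_feats))
        return torch.softmax(self.speak(torch.cat([state, incoming_msg], -1)), -1)

class TextAgent(nn.Module):
    """Receiver: tracks the dialogue, asks questions, and guesses among candidates.
    (For brevity, the candidates' text descriptions are absorbed into the guess head.)"""
    def __init__(self, hidden=128, vocab=32, n_candidates=10):
        super().__init__()
        self.rnn = nn.GRUCell(vocab, hidden)
        self.ask = nn.Linear(hidden, vocab)
        self.guess = nn.Linear(hidden, n_candidates)

    def forward(self, msg, h):
        h = self.rnn(msg, h)
        question = torch.softmax(self.ask(h), -1)
        belief = torch.softmax(self.guess(h), -1)
        return question, belief, h

def play_game(sender, receiver, image_feats):
    """Exchange messages until the receiver is confident or the cap is hit.
    Harder objects naturally need more rounds before the belief sharpens."""
    h = torch.zeros(1, 128)
    question = torch.zeros(1, 32)
    for step in range(1, MAX_STEPS + 1):
        answer = sender(image_feats, question)
        question, belief, h = receiver(answer, h)
        if belief.max().item() > STOP_THRESHOLD:  # adaptive stopping
            break
    return belief.argmax().item(), step           # guess and dialogue length

# Untrained demo with random weights and random "image" features.
sender, receiver = ImageAgent(), TextAgent()
guess, n_rounds = play_game(sender, receiver, torch.randn(1, 2048))
print(f"guessed candidate {guess} after {n_rounds} message round(s)")
```

The point the sketch tries to convey is the loop itself: the dialogue runs for as many rounds as the receiver needs, so ambiguous objects consume more compute at test time than obvious ones.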

This work, while significant, was not the first to address multi-step reasoning during inference. Various inference algorithms had explored similar ideas, and the team’s research built upon this prior foundation. The similarities between this academic research and OpenAI’s new o1 model are nevertheless noteworthy. OpenAI describes o1 as being “trained […] to spend more time thinking through problems before they respond, much like a person would,” and if you’ve used o1 yourself, you know it spends more time on difficult problems than easy ones. This mirrors the adaptive conversation length demonstrated in Drozdov and Cho’s work, where AI agents engaged in longer dialogues when dealing with more difficult objects.

The o1 model’s impressive performance — solving 83% of problems in an International Mathematics Olympiad qualifying exam compared to its predecessor’s 13% — showcases how these academic concepts have been expanded and refined in industry settings.

This development illustrates the complex relationship between academic research and industrial innovation in AI. While universities continue to publish open research, major tech companies often keep their methods proprietary as the field becomes increasingly commercialized.

“You need to have open research in order for the field to continue to grow as a whole,” Drozdov observed, highlighting the importance of knowledge sharing in scientific progress.

The interplay between academia and industry creates a symbiotic ecosystem in AI development. Companies can leverage academic insights to build powerful systems, while academic researchers continue to explore fundamental concepts that may shape future innovations.

However, while the latest models tend to get a lot of attention — and rightly so — it’s valuable to recognize the academic foundations of these advancements. The connection between Drozdov and Cho’s 2018 paper and OpenAI’s o1 model in 2024 demonstrates how ideas can mature and transform as they move from theoretical exploration to practical application.

By tracing these connections, we gain insight into the collaborative nature of AI progress. It underscores the importance of fostering an environment where academic research and industry innovation can continue to inform and inspire each other, driving the field forward in tandem.

By Stephen Thomas

