Extending Diffusion Models to Nonlinear Processes: A Leap Forward for Science and AI

NYU Center for Data Science
Oct 11, 2024


Diffusion models have transformed generative AI by enabling the creation of realistic images, videos, and molecules, yet they have been limited by an inability to handle the nonlinear diffusion processes common in the physical sciences. Addressing this gap, CDS Associate Professor of Computer Science and Data Science Rajesh Ranganath and Courant PhD students Raghav Singhal and Mark Goldstein developed a method that extends diffusion models to nonlinear processes.

Their paper, “What’s the score? Automated Denoising Score Matching for Nonlinear Diffusions,” introduces a technique called “local-DSM” (local denoising score matching) that automates the training of nonlinear diffusion processes. This breakthrough extends the capabilities of diffusion models — a type of generative AI that creates data by reversing a gradual corruption process — to a wider range of scientific and practical applications.

“Most of the work that had been done was for linear diffusion processes,” Singhal said. “With the introduction of nonlinearity, the training methods people were using couldn’t be applied. Our work made the training of processes with nonlinear drifts tractable.”
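To make the distinction concrete, the forward "noising" process of a diffusion model can be written as a stochastic differential equation dx = f(x) dt + dW. In standard diffusion models the drift f is linear (an Ornstein–Uhlenbeck process); in many physical systems it is not. The minimal sketch below (our own illustrative example, not code from the paper) simulates one drift of each kind with a basic Euler–Maruyama scheme:

```python
import numpy as np

def euler_maruyama(drift, x0, dt=1e-2, n_steps=500, seed=0):
    """Simulate dx = drift(x) dt + dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + drift(x[i]) * dt + rng.normal(scale=np.sqrt(dt))
    return x

# Linear drift: the Ornstein-Uhlenbeck process behind standard diffusion models.
linear_path = euler_maruyama(lambda x: -x, x0=2.0)

# Nonlinear drift: a double-well potential, a toy stand-in for the dynamics of
# physical systems, where the standard training recipes no longer apply.
nonlinear_path = euler_maruyama(lambda x: x - x**3, x0=2.0)
```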

The team’s approach hinges on a clever use of local linearization. By approximating nonlinear functions as linear over small time increments, they created an algorithm that handles complex systems without requiring manual derivations or hand-picked design choices from researchers.
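As a rough sketch of that idea (in our own simplified scalar notation with a unit diffusion coefficient, in the spirit of classic Ozaki-style local linearization rather than the paper's exact construction): replacing the drift with its first-order Taylor expansion around the current point makes the transition over a short step exactly Gaussian, so its mean and variance have closed forms.

```python
import numpy as np

def local_linear_gaussian_step(f, df, x, dt):
    """One Ozaki-style local-linearization step for dx = f(x) dt + dW.

    Linearizing f around x, f(y) ~= f(x) + J * (y - x) with J = f'(x), gives
    a linear SDE whose transition over dt is exactly Gaussian. Returns that
    Gaussian's mean and variance (assumes J != 0).
    """
    J = df(x)
    mean = x + (f(x) / J) * (np.exp(J * dt) - 1.0)
    var = (np.exp(2.0 * J * dt) - 1.0) / (2.0 * J)
    return mean, var

# Nonlinear double-well drift and its derivative.
f = lambda x: x - x**3
df = lambda x: 1.0 - 3.0 * x**2

mean, var = local_linear_gaussian_step(f, df, x=0.5, dt=1e-2)
# The Gaussian N(mean, var) approximates the true short-step transition, which
# is what makes denoising-score-matching-style training targets tractable.
```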

Goldstein emphasized the potential impact: “In active matter research, where you study systems like schools of fish or bacterial colonies, our method could significantly accelerate the analysis of how these entities move and interact.”

“Instead of specifying what noising process to use, you could actually learn it,” Ranganath explained. “This automation aligns with a broader trend in machine learning of reducing the need for handcrafted solutions.”

The implications for practical AI applications are significant. By learning more efficient noising and denoising processes, the method could dramatically reduce the time needed to generate results — potentially cutting wait times from 30 seconds to just five, “which could be the make or break for real, large-scale use cases in applications ranging from material design to video generation,” said Goldstein.

There is evidence for these benefits in the team’s previous work, which automated learning with linear processes. That work showed that learned processes can yield significantly smaller models, with fewer parameters, while matching the performance of models built with standard processes. The new research extends those ideas to the nonlinear realm, inspired in part by challenges faced by colleagues studying physical systems, as well as by the prospect of building new kinds of generative models.

As AI continues to reshape scientific research and practical applications, innovations like local-DSM demonstrate the power of interdisciplinary collaboration. By bridging the gap between theoretical advancements and real-world problems, Ranganath, Singhal, and Goldstein are paving the way for more efficient, flexible, and powerful AI tools across a broad array of fields.

By Stephen Thomas
