Research Spotlight: CDS PhD Student Sreyas Mohan on Unsupervised Deep Video Denoising

NYU Center for Data Science
Sep 16, 2021
The results of Unsupervised Deep Video Denoising on a snowboarding video.

When you take videos with a camera, like the one in your smartphone, random voltage fluctuations in the light sensors introduce noise, making the videos look grainy. It can be hard to extract information from a noisy video: whether you are shooting raw footage in the dark or using fluorescence microscopy, noise degrades the video quality and makes the recorded data much harder to interpret. State-of-the-art video denoising is currently achieved by deep neural networks, which are typically trained to map noisy videos to clean videos. In many applications (e.g., microscopy), however, clean videos cannot be captured at all, which makes it difficult to train such video denoisers.
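To make the idea of "training to map noisy videos to clean videos" concrete, here is a minimal, hypothetical sketch of supervised denoising in PyTorch. The tiny network and the random tensors standing in for video frames are placeholders for illustration, not the models or data from the paper; the key point is that the loss compares the network's output against a clean target, which is exactly what is unavailable in settings like microscopy.

```python
import torch
import torch.nn as nn

# Minimal supervised denoising setup (illustrative only, not the paper's model):
# the network is trained to map a noisy frame to its clean counterpart.
denoiser = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(100):
    clean = torch.rand(8, 3, 64, 64)                    # stand-in for clean frames
    noisy = clean + 0.1 * torch.randn_like(clean)       # simulated sensor noise
    loss = nn.functional.mse_loss(denoiser(noisy), clean)  # requires clean targets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```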

To tackle this problem, CDS Ph.D. student Sreyas Mohan co-authored "Unsupervised Deep Video Denoising" with colleagues Dev Yashpal Sheth (a visiting student at CDS), Joshua L. Vincent, Ramon Manzorro, Peter A. Crozier, Mitesh M. Khapra, CDS Associate Professor Eero P. Simoncelli, and CDS Professor Carlos Fernandez-Granda. The paper was recently accepted to the International Conference on Computer Vision (ICCV 2021), a highly regarded research conference sponsored by the Institute of Electrical and Electronics Engineers.

Their work proposes an unsupervised model: the goal was to design a system that learns to denoise exclusively from noisy videos, without ever needing the clean versions. The datasets used to test the model spanned four types of videos: natural videos, raw videos, electron microscopy, and fluorescence microscopy. The results were striking. Their Unsupervised Deep Video Denoising method, built on convolutional neural networks, is on par with, and sometimes outperforms, several state-of-the-art methods trained with supervision.
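For intuition, here is a simplified sketch of how a denoiser can be trained from noisy frames alone. It follows the general "blind-spot" masking idea used in this line of work rather than the authors' exact architecture or loss: some pixels are hidden from the network, which must predict them from their noisy surroundings, so the noisy video serves as its own training signal.

```python
import torch
import torch.nn as nn

# Illustrative self-supervised training loop: the network predicts masked pixels
# from their noisy surroundings, so no clean target is ever needed.
# This is a generic sketch, not the architecture or loss from the UDVD paper.
denoiser = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(100):
    noisy = torch.rand(8, 3, 64, 64)                  # stand-in for noisy frames
    mask = (torch.rand(8, 1, 64, 64) < 0.05).float()  # hide ~5% of pixels
    masked_input = noisy * (1 - mask)                 # network never sees hidden pixels
    pred = denoiser(masked_input)
    loss = ((pred - noisy) ** 2 * mask).sum() / mask.sum()  # loss only on hidden pixels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```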

The team was curious how the model achieves state-of-the-art denoising while learning exclusively from noisy videos: what strategies does it implement for denoising? Taking a deeper look, they found that the model denoises by averaging over pixels in similar regions across different noisy frames of the video. In other words, the model automatically learned to track the motion of objects in the video!
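A toy calculation shows why averaging similar pixels across frames works: independent, zero-mean noise shrinks roughly as 1/√N when N measurements of the same underlying pixel are averaged. The snippet below uses a flat synthetic "scene" purely for illustration; in a real video the corresponding pixels move between frames, which is why the network has to learn motion implicitly before it can average.

```python
import numpy as np

# Averaging N independent noisy observations of the same pixel
# reduces the noise standard deviation by roughly sqrt(N).
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)                             # a flat gray "scene"
frames = clean + 0.1 * rng.standard_normal((16, 64, 64))   # 16 aligned noisy frames

print("noise std, single frame: ", frames[0].std())            # ~0.10
print("noise std, 16-frame mean:", frames.mean(axis=0).std())  # ~0.025
```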

With performance comparable to state-of-the-art supervised approaches, the implications of this method are significant. Unsupervised Deep Video Denoising offers a broadly applicable way to denoise all sorts of videos, whether it's a cool shot of your latest skateboarding trick or footage of nanoparticles under an electron microscope.

If you’re interested in learning more, please visit their GitHub repository or read the paper on arxiv.org.

By Keerthana Manivasakan
