Eric Oermann: Data Science of the Brain

This entry is part of the NYU Center for Data Science blog’s recurring guest editorial series. Eric Oermann is a CDS affiliated professor.

I spend a lot of my time thinking about the human brain in my daily work as a neurosurgeon at NYU Langone Health, and about computer hardware in our artificial intelligence lab (the OLAB) as a scientific investigator in the NYU Departments of Neurosurgery and Radiology and the NYU Center for Data Science. The human brain is particularly fascinating to me when compared to the deep learning tools my lab uses on a daily basis, because the two are just so different! Although much of my lab’s work focuses on ways of protecting the human brain from illness [1–3], I increasingly find my interests turning to better ways of understanding the human brain, or of recapitulating it in silico.

At the risk of oversimplifying, consider for a moment the physical differences between Greene (NYU’s new supercomputer) and an average person. The difference in energy consumption for a full day of work is staggering (credit owed to Nicholas Bostrom for first doing this popular math). For ease of comparison, I have reduced all computations to my standard and preferred unit of human energy, Chipotle burritos, and made some simplifications. Leaving out the human body, or the facilities to support Greene, we are left with comparing the average brain to a computing platform consisting of around 32,000 Xeon 8268 cores, 295 RTX 8000s, and 250 V100s. Per my back-of-the-envelope calculations, at max load for one-half day of work Greene consumes roughly 11.66 MWh, which translates into roughly 12,000 burritos. The average brain, on the other hand, runs on about 350–400 calories per day, or one burrito (technically a half-burrito, but I find it hard to leave behind half of a burrito). This wouldn’t be particularly impressive if Greene outperformed the average brain by a factor of 12,000, but as it happens there are many tasks (particularly ones involving multi-sensory integration and advanced planning) that Greene struggles with. To be fair, there are many tasks Greene excels at that the average brain would also struggle with, but the point is that the differences in energy consumption are astounding and point towards some fundamental differences in how the compute is being performed.
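The burrito math above can be sketched in a few lines of Python. The figures are assumptions taken from this post rather than measured values: roughly 11.66 MWh for Greene’s half day at max load, a ~1,000-Calorie Chipotle burrito, and a brain budget at the upper end of 400 Calories per day.

```python
# Back-of-the-envelope burrito accounting for Greene vs. the average brain.
# All inputs are rough, illustrative assumptions from the post.

KCAL_PER_KWH = 860.0          # 1 kWh is about 860 dietary Calories (kcal)
BURRITO_KCAL = 1_000          # assumed: one loaded Chipotle burrito

greene_half_day_kwh = 11_660  # ~11.66 MWh at max load for half a day
brain_day_kcal = 400          # upper end of the brain's daily energy budget

# Convert Greene's electricity draw into burritos.
greene_burritos = greene_half_day_kwh * KCAL_PER_KWH / BURRITO_KCAL

# The brain's daily budget, in burritos and in watts.
brain_burritos = brain_day_kcal / BURRITO_KCAL
brain_watts = brain_day_kcal / KCAL_PER_KWH * 1_000 / 24  # ~20 W continuous

print(f"Greene: ~{greene_burritos:,.0f} burritos per half day")
print(f"Brain:  ~{brain_burritos:.1f} burrito per day (~{brain_watts:.0f} W)")
```

Depending on how generously you size the burrito, the ratio lands on the order of 10,000 to one, which is the point of the comparison: the brain runs on roughly the power of a dim lightbulb.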

The learning mechanisms themselves, and how long they take, also make for a compelling comparison (studied in depth by my colleagues Brendan Lake and Emin Orhan, as discussed in an earlier post). Humans learn in a largely unsupervised manner, as compared to the multitude of ways of training a deep neural network. On the one hand, newborn infants can take years to perform basic tasks (object permanence doesn’t begin to set in until about 6 months of age and is not fully realized until about 24 months). Adult humans, on the other hand, show a remarkable capacity for solving problems they have never encountered before (zero-shot learning). Facilitating these learning mechanisms in people is a dizzying array of neurons, support cells, neurotransmitters, and other mechanisms for computation well beyond the simple mechanisms our computing platforms have access to in silico. One of the projects in the OLAB, in collaboration with Dr. Biyu He’s Perception and Brain Dynamics Laboratory, is to study these computational mechanisms and try to recreate them in silico to help diagnose mental illness.

For me, all of these observations are incredibly fascinating, and they come full circle to human health and the patients I encounter every day in the Department of Neurosurgery. Many of the clinical problems we face are challenging to tackle with AI because they are non-randomly noisy, data-scarce, and high-dimensional. However, there is a certain tidiness in looking for answers to caring for the brain by studying the brain itself. Although these medical problems in data science are hard, the ability of our brains to tackle them so well, and their potential to impact human health, are precisely what make them such great problems too!

By Eric Oermann

REFERENCES

1. Titano, J. J. et al. Automated deep-neural-network surveillance of cranial images for acute neurologic events. Nat. Med. 24, 1337–1341 (2018).

2. Kwon, Y. J. et al. Combining initial radiographs and clinical variables improves deep learning prognostication of patients with COVID-19 from the emergency department. Radiology: Artificial Intelligence e200098 (2020).

3. Oermann, E. K. et al. Using a Machine Learning Approach to Predict Outcomes after Radiosurgery for Cerebral Arteriovenous Malformations. Sci. Rep. 6, 21161 (2016).

