Pioneering the Beat: CDS’ Data Science Innovators Decode Music Perception at Neuroscience Conference
The essence of music’s allure may finally be deciphered, not by musicians, but by data scientists. Tayyibah Khanam and Harsh Asrani, two data science MS students at CDS, showcased their groundbreaking study, “What moves us: The impact of auditory features on moment-to-moment music appreciation,” at this year’s Society for Neuroscience meeting in Washington, DC, unraveling how we perceive music in real time.
Imagine a world where music streaming services know exactly what you want to hear before you do. That’s the future Khanam and Asrani are helping to create. Their research, co-authored with Clinical Associate Professor of Data Science and Psychology Pascal Wallisch and Stephen Spivack (MS DS ‘23), offers fresh insights into the moment-to-moment impact of auditory features on music appreciation.
The study analyzed data from 643 participants who listened to 12 full songs and rated their appreciation continuously in real time. The researchers found that a range of auditory features, including loudness, tempo, and frequency range, tracked with listeners’ moment-to-moment appreciation. For example, higher loudness, faster tempo, and lower frequency ranges were generally associated with higher appraisals. However, these effects were complex and varied from one listener to another.
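For readers curious about what such an analysis might look like in practice, here is a minimal sketch, not the authors’ code, of a per-listener regression of moment-to-moment appraisal on auditory features. The array shapes, feature columns, and variable names are all illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the study's pipeline): fit one
# regression per listener so feature effects can differ across people,
# rather than averaging everyone into a single grand rating curve.
# Assumes `appraisal` has shape (n_listeners, n_timepoints) and
# `features` has shape (n_timepoints, 3) with hypothetical columns
# [loudness, tempo, frequency_range], aligned on the same time grid.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_listeners, n_timepoints = 643, 300            # illustrative sizes only
features = rng.normal(size=(n_timepoints, 3))   # stand-in auditory features
appraisal = rng.normal(size=(n_listeners, n_timepoints))

coefs = np.empty((n_listeners, 3))
for i in range(n_listeners):
    model = LinearRegression().fit(features, appraisal[i])
    coefs[i] = model.coef_

# Average effect of each feature across listeners, plus its spread,
# which is where listener-to-listener differences would show up.
print(coefs.mean(axis=0), coefs.std(axis=0))
```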
Khanam describes their study as an attempt to “understand how different people perceive different music.” Their approach departed from previous studies, which averaged song ratings across listeners. “We were the first ones to actually make this number of people sit inside a lab […] and respond at every given moment of time while they’re listening to that song,” Khanam explains, referring to their 643 participants — “a huge sample size as compared to other music studies.” The meticulous in-lab design offered an unusually reliable and nuanced view of musical perception.
Their analysis identified four distinct listener types — “Drifters,” “Jumpers,” “Gymnasts,” and “Sitters” — based on real-time response styles. Khanam noted, “Instead of averaging people together, we focused on seeing what drives different humans.” The significance of these findings lies in their potential applications, particularly in refining music recommendation systems and in the burgeoning field of AI-generated music.
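The article does not spell out how the four types were derived, but a grouping like this could, in principle, come from clustering per-listener summaries of their rating trajectories. The sketch below is purely illustrative: the descriptors, the threshold, and the choice of four clusters are assumptions, not the study’s method.

```python
# Minimal sketch (assumptions only): group listeners into a small number
# of response-style clusters based on simple descriptors of how their
# moment-to-moment ratings move over time.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
ratings = rng.normal(size=(643, 300))  # stand-in per-listener rating traces

# Two simple per-listener descriptors of response style:
# overall variability, and how often the rating changes sharply.
variability = ratings.std(axis=1)
jumpiness = (np.abs(np.diff(ratings, axis=1)) > 1.0).mean(axis=1)
descriptors = np.column_stack([variability, jumpiness])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(descriptors)
print(np.bincount(labels))  # listeners per cluster (e.g., Drifters, Jumpers, ...)
```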
Asrani emphasizes the thoroughness of their data-gathering process, hailing it as “unprecedented for an in-house study of this kind.” The duo didn’t stop at data analysis; they also explored the correlation between personality traits and music preferences, although a full analysis of that aspect is still pending. Their poster, which vividly illustrated their findings, proved popular at the conference, drawing a substantial audience and sparking interactive discussions — a testament to the compelling nature of their work. Khanam reflects on the advantage of presenting research as a conference poster before writing it up as a paper: “We had over 75 attendees providing immediate feedback.” That collaborative environment gave them invaluable insights for refining their research further.
Their work grew out of a dataset made available by Wallisch’s Fox lab at NYU. “It’s a great opportunity to now use it,” said Khanam. The project was not a course assignment but a self-directed exploration, showcasing their dedication and curiosity.
Khanam and Asrani’s work stands out not just for its innovative approach to studying music perception but also for its potential real-world applications. Their study promises to influence how we interact with music, potentially transforming our listening experiences by tailoring them to our unique perceptual patterns.
By Stephen Thomas