How long does it take to decide if you like a song?
A study led by Clinical Associate Professor of Data Science and Psychology Pascal Wallisch finds it only takes a few seconds, offering new insights into cognitive processing research
Major music platforms like iTunes play snippets of songs to entice customer downloads and purchases, but can listeners successfully determine whether they like a song within such a brief window of time? A new study, “The Whole is Not Different From its Parts: Music Excerpts are Representative of Songs,” found it takes only seconds to decide whether we like or dislike a song. Led by Clinical Associate Professor of Data Science and Psychology Pascal Wallisch, the research shows that the experience of a segment of an artist’s musical work can be representative of the whole.
The study’s research team included Sara Philibotte, Stephen Spivack, Nathaniel Spilka, and Ian Passman, all NYU undergraduates at the time of the study. The researchers enrolled 643 NYU students and New York City residents to listen to 3,120 clips of varying lengths, drawn from a selection of 260 songs, over the course of a two-hour session. Both the part of the song sampled and the musical genre were varied. With country, jazz, hip-hop, and more playing from their headphones, the participants rated their impression of a song or clip on a scale from “Hate it” to “Love it”. They also reported how often they had heard the song before, from “Never” to “Too many to count”. The results showed that ratings of clips and songs were strongly aligned. In other words, how much someone likes an entire song can be predicted from how much they like a clip of it, even a very brief one.
With the recent publication of the study in the February issue of the journal Music Perception, CDS spoke with Pascal about the research process, how data science was used, and the last song he heard that he loved within seconds.
Where did your interest in musical taste prediction stem from, and where did the research start?
Our interest was based on the concept of representativeness. Because songs are long (about 3 minutes and 30 seconds on average), only a few dozen can be played to a given participant during an experimental session studying music. This raises serious concerns about whether those few songs are representative of “music” as a whole, given that over a hundred million tracks exist. Practically, music psychologists typically use short excerpts when doing music research; that way, many different clips can be used in any given study. To our surprise, no one had ever checked whether the response to the clips is representative of the response to the whole song! This is a concern, because it might not be: people usually listen to whole songs, not brief excerpts. As the validity of much of music research rests on the assumption that the response to brief clips is similar to the response to whole songs, we just had to check that assumption. There are plenty of reasons to doubt it a priori. For instance, film trailers can be notoriously misleading, with people liking the trailer but disliking the movie, and vice versa.
How did you leverage data science in this research?
I’m so glad you asked. We used fundamental principles of data science throughout the study. Importantly, we used a well-powered sample of participants and songs. Practically speaking, the paper is a tour de force of essential data science techniques: tests of normality (as we can’t just assume normality), Spearman correlations (as we use ratings data), logistic regression, mean absolute deviations, Kolmogorov-Smirnov tests, Mann-Whitney U tests, permutation tests, root-mean-squared errors, and bootstrapped confidence intervals are all properly used.
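To make two of the techniques mentioned above concrete, here is a minimal sketch (not the study's actual code or data) of computing a Spearman correlation between clip ratings and full-song ratings, with a bootstrapped 95% confidence interval. The ratings below are simulated purely for illustration.

```python
# Illustrative only: simulated clip vs. full-song ratings, not study data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated ratings on a 1-10 scale: a "true" full-song rating per song,
# and a clip rating that tracks it with some noise.
song_ratings = rng.integers(1, 11, size=260)
clip_ratings = np.clip(song_ratings + rng.integers(-2, 3, size=260), 1, 10)

# Spearman correlation is appropriate for ordinal ratings data.
rho, p_value = spearmanr(clip_ratings, song_ratings)

# Bootstrap: resample (clip, song) pairs with replacement and recompute rho.
boot_rhos = []
for _ in range(2000):
    idx = rng.integers(0, len(song_ratings), size=len(song_ratings))
    boot_rhos.append(spearmanr(clip_ratings[idx], song_ratings[idx])[0])
ci_low, ci_high = np.percentile(boot_rhos, [2.5, 97.5])

print(f"Spearman rho = {rho:.2f}, 95% bootstrap CI [{ci_low:.2f}, {ci_high:.2f}]")
```

With simulated noise this small, the clip-song correlation comes out strongly positive, mirroring the qualitative finding that clip ratings are representative of whole-song ratings.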
What variables could potentially play into the participants’ rating of various songs?
Honestly, we are increasingly convinced that how someone responds to a given song has relatively little to do with its acoustical properties per se, and more with the emotions evoked by them, and which emotions are evoked might come down to the associations one has with a song. So there is a lot going on!
What insights does this research offer into the study of cognitive processing?
We believe it offers a fascinating window into what music *is*, from a psychological perspective. It seems that music is much like an auditory texture — like the sound made by a crackling fireplace or howling wind, just more complex. This could explain why songs are so short — compared to the average length of a movie, for instance (about 90 minutes). In this sense, a song is basically cognitive bubblegum. The idea is that it takes just a few seconds to recognize its flavor, but that it gets stale relatively fast.
What questions or areas of exploration has the work left you with?
So many! One question that arises immediately: if 5 seconds, the shortest clip duration we tested (and the shortest commonly used in music research), is already enough, then how low can you go? Are 2 seconds enough? 1? And if the threshold is that low, what drives this snap judgment? At some point, a clip gets shorter than a chord progression. We are currently exploring this question in a “rise of the kernel” follow-up study. Another question is what aspects of the song stabilize the judgment; it probably comes down to long-range correlations in the temporal substructure of the song, but we don’t know for sure. Finally, it could also be interesting to find individual songs, and individual listeners, that defy this typical pattern, and to ask what makes them special.
What was the last song you heard that you decided you liked in under five seconds? And what was the last song you disliked in under five seconds?
Liked: S.P.Q.R — Epic Roman Music
Disliked: Let’s keep it positive, but one thing we found in another study — which we haven’t published yet — is that disliking something is actually even faster than liking it. And this emotional snap-judgment precedes cognition, which I found very interesting.
Any additional thoughts you’d like to share?
Data science is awesome!
By Meryl Phair