Learning to understand and play music shapes the brain. A new study from New York University (NYU) finds that analyzing how brain rhythms are used in processing music could help researchers better understand auditory processing overall. The study builds on previous research demonstrating that brain rhythms synchronize with speech. This allows the brain to parse what it hears into smaller chunks, like individual syllables and words. The NYU researchers investigated how these cortical brain rhythms, also called oscillations, are involved in processing other types of sounds.
The researchers conducted three experiments using magnetoencephalography (MEG), which tracks brain activity by measuring the magnetic fields the brain generates. The participants were divided into two groups based on musical ability. The musician group consisted of people who were currently practicing music and had six or more years of musical training. The non-musicians had fewer than two years of musical training and were not currently practicing. Both groups listened to 13-second clips of classical music by Bach, Beethoven, and Brahms, with tempos ranging from half a note to eight notes per second, and listened for short pitch distortions in the clips.
Musicians and non-musicians demonstrated similar cortical oscillations when listening to music clips with a tempo of one note per second or faster. This indicates that all the participants’ brains processed the sounds effectively, even though the musicians were more synchronized to the music.
For music with a slow tempo, only the musicians’ brains synchronized with the clips. This suggests that non-musicians’ brains did not track the slow music as a continuous melody. The study also revealed that the musicians were more accurate at detecting pitch distortions.
The results demonstrate that cortical oscillations enhance perception of music and pitch changes. The findings suggest that brain rhythms are involved in parsing and grouping sounds into chunks that are analyzed as speech or music.
“What this shows is we can be trained, in effect, to make more efficient use of our auditory-detection systems. Musicians, through their experience, are simply better at this type of processing,” states study co-author David Poeppel, NYU professor and director of the Max Planck Institute for Empirical Aesthetics in Frankfurt.
This research is published in the Proceedings of the National Academy of Sciences.