For the first time, MIT neuroscientists have identified a population of neurons in the human brain that light up when we hear singing, but not other types of music. Credit: iStockphoto, modified by MIT News
MIT neuroscientists have identified a population of neurons in the human brain that responds to singing, but not to other types of music.
For the first time, MIT neuroscientists have identified a population of neurons in the human brain that light up when we hear singing, but not other types of music.
These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.
“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.
The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain's auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.
“There's one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to many kinds of music. At the scale of fMRI, they're so close that you can't disentangle them, but with intracranial recordings, we get additional resolution, and that's what we think allowed us to pick them apart,” says Norman-Haignere.
Norman-Haignere is the lead author of the study, which was published on February 22, 2022, in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT's McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.
In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.
In the new study, the researchers hoped to obtain higher-resolution data using a technique called electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain than fMRI, which measures blood flow in the brain as a proxy for neuronal activity.
“With most of the methods in human cognitive neuroscience, you can't see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here's a piece of brain that does something, but that's pretty limited. We want to know what's represented in there.”
Because ECoG is invasive, it is typically performed only in epilepsy patients who are already undergoing intracranial monitoring in preparation for surgery. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.
For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient's electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data recorded by each electrode.
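The core idea behind this kind of analysis is that each electrode records a mixture of several underlying neural populations, and a shared set of response components can be factored out of the full electrode-by-sound response matrix. The sketch below is purely illustrative, using synthetic data and a generic non-negative matrix factorization (Lee–Seung multiplicative updates); the study's actual inference method is more sophisticated, and all names and dimensions here are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 30 electrodes, each responding to the same 165 sounds.
# We build the data as a known mixture of 3 underlying "populations" so we
# can check that factorization recovers a good low-rank description.
n_sounds, n_components, n_electrodes = 165, 3, 30
true_profiles = rng.random((n_sounds, n_components))     # each population's response per sound
true_weights = rng.random((n_components, n_electrodes))  # each electrode's mix of populations
data = true_profiles @ true_weights                      # simulated recordings (sounds x electrodes)

# Generic NMF via multiplicative updates: data ~= profiles @ weights,
# with all entries non-negative (neural responses are rates, hence >= 0).
profiles = rng.random((n_sounds, n_components)) + 0.1
weights = rng.random((n_components, n_electrodes)) + 0.1
for _ in range(500):
    weights *= (profiles.T @ data) / (profiles.T @ profiles @ weights + 1e-9)
    profiles *= (data @ weights.T) / (profiles @ weights @ weights.T + 1e-9)

# A component whose recovered profile is high for sung clips and low for
# speech and instrumental clips would be the "song-selective" candidate.
err = np.linalg.norm(data - profiles @ weights) / np.linalg.norm(data)
print(f"relative reconstruction error: {err:.4f}")
```

On noiseless synthetic data with the correct number of components, the reconstruction error becomes small; with real recordings, choosing the number of components and validating them is the hard part.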
“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn't expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”
That song-specific population of neurons had very weak responses to either speech or instrumental music, and is therefore distinct from the music- and speech-selective populations identified in their 2015 study.
Music in the brain
In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.
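One simple way to picture this combination step: once component response profiles have been inferred from the sparse electrode recordings, every fMRI voxel's response to the same 165 sounds can be modeled as a weighted sum of those profiles, and the estimated weights map each component across the full brain volume that fMRI covers. The sketch below uses ordinary least squares on synthetic data; the study's actual fusion method differs, and the variable names and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: 3 component response profiles (e.g., speech-,
# music-, and song-selective) inferred from ECoG, plus noisy fMRI
# responses of 1000 voxels to the same 165 sounds.
n_sounds, n_components, n_voxels = 165, 3, 1000
profiles = rng.random((n_sounds, n_components))   # from the intracranial analysis
true_w = rng.random((n_components, n_voxels))     # unknown per-voxel component weights
voxels = profiles @ true_w + 0.01 * rng.standard_normal((n_sounds, n_voxels))

# Least squares: find W minimizing ||profiles @ W - voxels||.
w_hat, *_ = np.linalg.lstsq(profiles, voxels, rcond=None)

# The row of w_hat for the song-selective component (index assumed here)
# is that component's spatial map over all voxels.
song_map = w_hat[2]
print("max abs weight error:", np.abs(w_hat - true_w).max())
```

With many sounds and few components the regression is well conditioned, so even weakly song-responsive voxels far from any electrode get a weight estimate, which is what sharpens the localization.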
“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it's always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”
The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.
The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe's lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.
Reference: “A neural population selective for song in human auditory cortex” by Sam V. Norman-Haignere, Jenelle Feather, Dana Boebinger, Peter Brunner, Anthony Ritaccio, Josh H. McDermott, Gerwin Schalk and Nancy Kanwisher, 22 February 2022, Current Biology. DOI: 10.1016/j.cub.2022.01.069
The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds and Machines, the Fondazione Neurone, and the Howard Hughes Medical Institute.