Researchers led by Dr. Nima Mesgarani at Columbia University, US, report that the brain processes speech in a crowded room differently depending on how easy it is to hear and whether we are focusing on it. Published recently in the open-access journal PLOS Biology, the study uses a combination of neural recordings and computer modeling to show that when we follow speech that is being drowned out by louder voices, phonetic information is encoded differently than in the opposite situation. The models revealed that phonetic information of “glimpsed” speech was encoded in both primary and secondary auditory cortex, and that encoding of the attended speech was enhanced in the secondary cortex. Speech encoding occurred later for “masked” speech than for “glimpsed” speech.
Example of listening to someone talking in a noisy environment. Credit: Zuckerman Institute, Columbia University (2023) (CC-BY 4.0).
To better understand how speech is processed in these situations, the scientists at Columbia University recorded neural activity from electrodes implanted in the brains of patients with epilepsy as they underwent brain surgery. The patients were asked to attend to a single voice, which was sometimes louder than another voice (“glimpsed”) and sometimes quieter (“masked”).
The researchers used the neural recordings to build predictive models of brain activity. The models showed that phonetic information of “glimpsed” speech was encoded in both primary and secondary auditory cortex, and that encoding of the attended speech was enhanced in the secondary cortex. In contrast, phonetic information of “masked” speech was only encoded if it belonged to the attended voice. Speech encoding occurred later for “masked” speech than for “glimpsed” speech. Because “glimpsed” and “masked” phonetic information appear to be encoded separately, focusing on decoding only the “masked” portion of attended speech could lead to improved auditory attention-decoding systems for brain-controlled hearing aids.
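The article does not specify how the predictive models were built, but a common approach in this field is a linear encoding model that predicts a neural channel's activity from time-lagged stimulus features (a "temporal response function"). The sketch below is a minimal, hypothetical illustration of that idea using simulated data and ridge regression; the feature counts, lag window, and regularization strength are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative sketch only: a TRF-style linear encoding model fit on
# simulated data. None of these numbers come from the study itself.
rng = np.random.default_rng(0)
T, F, L = 2000, 8, 5  # time points, phonetic features, time lags

X = rng.standard_normal((T, F))       # simulated phonetic features over time
true_w = rng.standard_normal((L, F))  # hidden lagged weights to recover


def lagged(X, L):
    """Stack features at lags t, t-1, ..., t-L+1 into one design matrix."""
    T, F = X.shape
    Xl = np.zeros((T, L * F))
    for lag in range(L):
        Xl[lag:, lag * F:(lag + 1) * F] = X[:T - lag]
    return Xl


Xl = lagged(X, L)
# Simulated neural signal: lagged linear mix of features plus noise.
y = Xl @ true_w.reshape(-1) + 0.1 * rng.standard_normal(T)

# Ridge regression fit: w = (Xl'Xl + alpha*I)^-1 Xl'y
alpha = 1.0
w = np.linalg.solve(Xl.T @ Xl + alpha * np.eye(L * F), Xl.T @ y)

# How well the fitted model predicts the held-in neural signal.
pred = Xl @ w
r = np.corrcoef(pred, y)[0, 1]
print(f"prediction correlation: {r:.3f}")
```

In this framing, comparing how well such a model predicts responses from "glimpsed" versus "masked" phonetic features (and at which lags) is one way an encoding difference between the two could be quantified.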
Vinay Raghavan, the lead author of the study, says, “When listening to someone in a noisy place, your brain recovers what you missed when the background noise is too loud. Your brain can also catch bits of speech you aren’t focused on, but only when the person you’re listening to is quiet in comparison.”
Reference: “Distinct neural encoding of glimpsed and masked speech in multitalker situations” by Vinay S. Raghavan, James O’Sullivan, Stephan Bickel, Ashesh D. Mehta and Nima Mesgarani, 6 June 2023, PLOS Biology. DOI: 10.1371/journal.pbio.3002128
This work was supported by the National Institutes of Health (NIH), National Institute on Deafness and Other Communication Disorders (NIDCD) (DC014279 to NM). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Columbia University scientists have discovered that the brain encodes speech differently based on its clarity and our focus on it. This finding, involving distinct processing of “glimpsed” and “masked” speech, could improve the accuracy of brain-controlled hearing aids.
Focusing on speech in a crowded room can be challenging, especially when other voices are louder. Amplifying all sounds equally does little to improve the ability to isolate these hard-to-hear voices, and hearing aids that attempt to amplify only attended speech are still too inaccurate for practical use.