Timing is essential to human language, and scientists have now discovered which part of the brain helps us make sense of speech by integrating its rhythms.
According to researchers from Duke University, in order to understand what other people are saying, the brain needs to take cues from, and integrate, the different timescales at which speech unfolds.
Speech unfolds on three broad timescales. Phonemes are the shortest, taking roughly 30 to 60 milliseconds to pronounce; syllables come next, at about 200 to 300 milliseconds; and whole words can stretch to a second or more.
To process this information, the brain's auditory system is thought to parse incoming sound in "chunks" roughly the length of a syllable.
The Duke researchers conducted a new study in which recordings of foreign speech were cut into fragments lasting between 30 and 960 milliseconds and then reassembled, using a special algorithm, into what they called "speech quilts."
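To make the quilting idea concrete, here is a minimal Python sketch of cutting a recording into fixed-length segments and reassembling them in random order. The function name, sample rate, and white-noise stand-in recording are illustrative assumptions rather than the study's actual procedure, and a real quilting procedure would also need to smooth segment boundaries to avoid audible clicks, a step this sketch omits.

```python
import numpy as np

def make_speech_quilt(signal, sample_rate, segment_ms, seed=0):
    """Cut a mono waveform into fixed-length segments and shuffle them.

    Toy illustration of the "speech quilt" idea: the shorter the
    segments, the more the original temporal structure of the speech
    is disrupted.
    """
    rng = np.random.default_rng(seed)
    seg_len = int(sample_rate * segment_ms / 1000)   # samples per segment
    n_segs = len(signal) // seg_len                  # drop any remainder
    segments = signal[: n_segs * seg_len].reshape(n_segs, seg_len)
    order = rng.permutation(n_segs)                  # random reordering
    return segments[order].ravel()

# Example: quilts at the segment lengths mentioned in the article.
if __name__ == "__main__":
    sr = 16000                              # assumed sample rate in Hz
    speech = np.random.randn(sr * 10)       # stand-in for a 10-second recording
    for ms in (30, 480, 960):
        quilt = make_speech_quilt(speech, sr, ms)
        print(f"{ms} ms quilt: {quilt.shape[0]} samples")
```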
Participants listened to these quilts while their brains were monitored; the shorter the segments in a quilt, the more the original structure of the speech was disrupted.
One brain region, the superior temporal sulcus (STS), stood out on the scans: it lit up brightly in response to the 480- and 960-millisecond speech quilts, while the 30-millisecond quilts produced the least activity there.
Tobias Overath, assistant research professor of neuroscience and psychology at Duke and one of the scientists involved in the study, said the team immediately knew they were onto something. "That was pretty exciting."
The STS is known to integrate auditory information with other sensory input, but scientists had never before been able to show that it responds to the timescales of speech.
To make sure their findings were accurate, the researchers ran control tests with sounds that merely mimicked spoken language. These were rearranged into quilts by the same algorithm, but the control quilts did not elicit the same response from the subjects' brains.
The researchers published their discovery in the journal Nature Neuroscience after confirming that the STS effect reflected speech-specific processing, and not differences in pitch or the brain's response to natural versus computer-generated sounds.