14 Feb 2014

Scientists Reveal How the Brain Processes Speech

Speech isn’t just hot air… or is it?

There was a time when philosophers believed everything was made from combinations of fire, earth, water and air.

Later, some clever Greek proposed the idea of the atom, and afterwards the theory was refined to include the proton, neutron and electron. Now the men in lab coats are even splitting these into their constituent parts.

And so the story goes with the study of language recognition. Researchers looking into how the brain processes speech presumed it would respond to the individual sound segments that make up language, known as phonemes. An example would be the ‘b’ sound in the word ‘ball’.

But when the researchers hooked patients up to special neural recording devices, they found that their brains analysed even more basic elements of speech than phonemes.

The scientists at the University of California, San Francisco (UCSF) discovered that patients listening to speech recordings actually focused on the distinctive acoustic signatures produced when a person moves their lips, tongue or vocal cords.

These rudimentary elements of speech are what linguists call “features”.

Scientific advancements

The experts at UCSF had a unique opportunity to study the reaction of the brain to speech by placing recording devices on the brains of six patients undergoing epilepsy surgery.

It led to one of the most advanced studies of its kind, as previous research had only been able to record responses to a handful of speech sounds. Advances in technology meant the team could play 500 unique English sentences spoken by 400 different people, allowing them to record responses to every kind of speech sound in the English language many times over.

Suffering fricatives

One of the linguistic features mentioned is the sound made by a partially obstructed airway, a result of friction in the vocal tract. This is known as a “fricative” and can be heard in the sounds made by the letters ‘s’, ‘z’ and ‘v’.

Another feature is the burst of air released when pronouncing the consonants ‘p’, ‘t’, ‘k’, ‘b’ and ‘d’. It is produced when the tongue or lips briefly obstruct air flowing from the lungs, and is called a “plosive”.

The study, published in the journal Science, concentrated on the area of the brain known as the superior temporal gyrus (STG) – where speech sounds are interpreted.

The researchers found that the patients’ STGs “lit up” as they heard the different speech features. The brain recognised the “turbulence” created by a fricative, or the “acoustic pattern” of a plosive, rather than individual phonemes such as ‘b’ or ‘z’.
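
For readers who think in code, here is a minimal illustrative sketch of that distinction. The feature labels and the mapping below are simplified assumptions of our own, not the encoding measured in the study; they simply show how the same phonemes can be grouped by the features they share rather than treated as separate units.

# Illustrative sketch only – not code from the UCSF study.
# Hypothetical, simplified mapping from a few English phonemes to
# articulatory/acoustic features such as "fricative" and "plosive".
PHONEME_FEATURES = {
    "s": ("fricative",),
    "z": ("fricative", "voiced"),
    "v": ("fricative", "voiced"),
    "p": ("plosive",),
    "t": ("plosive",),
    "k": ("plosive",),
    "b": ("plosive", "voiced"),
    "d": ("plosive", "voiced"),
}

def group_by_feature(phonemes):
    # Collect phonemes under each feature they share, loosely mirroring
    # the idea that STG responses cluster around features, not phonemes.
    groups = {}
    for phoneme in phonemes:
        for feature in PHONEME_FEATURES.get(phoneme, ()):
            groups.setdefault(feature, []).append(phoneme)
    return groups

print(group_by_feature(["s", "z", "p", "b", "d"]))
# {'fricative': ['s', 'z'], 'voiced': ['z', 'b', 'd'], 'plosive': ['p', 'b', 'd']}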

Translating into practical use

One way of understanding how the brain interprets the “shapes” of sounds is to compare it with the way humans judge objects visually, using shapes and edges.

People can usually recognise an object regardless of the perspective from which it is viewed, by tracking its edges and shapes and correlating them in the brain.

Senior author Dr Edward F Chang believes the brain applies a similar algorithmic process when making sense of sound.

“It’s the conjunctions of responses in combination that give you the higher idea of a phoneme as a complete object,” he said.

“By studying all of the speech sounds in English, we found that the brain has a systematic organisation for basic sound feature units, kind of like elements in the periodic table.”

The research could have many applications, including helping children with reading disorders, which occur when the brain inaccurately translates printed words and maps them incorrectly onto speech sounds.

Dr Chang added: “This is a very intriguing glimpse into speech processing.

“The brain regions where speech is processed had been identified, but no one has really known how that processing happens.”

