Learning dialects at a young age can shape the way a person’s brain processes languages, a study of the use of pitch accents among Japanese speakers shows.
When we hear language, our brain breaks down the sounds to extract meaning.
But as most people will have discovered, two people who speak the same language may have trouble understanding each other due to the differences in tone and intonation that pepper regional accents.
And in some languages, such as Japanese, such regional differences go beyond accent and are considered distinct dialects.
Researchers at the RIKEN Brain Science Institute in Tokyo wanted to find out whether speakers of a non-standard dialect use the same areas of the brain while listening to spoken words as native speakers of the standard dialect, or the areas used by someone who acquired a second language later in life.
Doctors Yutaka Sato, Reiko Mazuka and team were interested in finding out whether learning dialects can shape the brain hemispheres that process spoken language.
To do this, they measured participants’ brain responses as they distinguished three types of word pairs: pairs differing in a single sound, in pitch accent, or in intonation.
In Japanese, pitch accents are a particularly important part of the language. While different languages usually show significant differences in vocabulary and grammar, the dialects of a single language typically differ at the level of sounds and pronunciation.
Japan’s standard dialect uses a pitch accent to distinguish identical words that have different meanings. But there are other regional dialects that do not.
A pitch accent sees speakers use pitch to give emphasis to a syllable or mora within a word so that they can communicate the different meanings of identical words.
It’s similar to the way that stress in an English word can change its meaning – think of the difference between “PROduce” (agricultural products) and “proDUCE” (to make or present something).
In Japanese, each syllable of a word can carry either a high or a low pitch, and the pattern of pitches across a particular word distinguishes its different meanings.
The Riken team used advanced imaging to visualise brain areas used for understanding language in native Japanese speakers.
It is known that pitch changes activate both brain hemispheres, whereas word meaning is preferentially associated with the left hemisphere.
In the study, published in the journal Brain and Language, participants’ responses to the following Japanese word pairs were measured:
Words such as /ame’/ (candy) versus /kame/ (jar) that differ in one sound
Words such as /ame’/ (candy) versus /a’me/ (rain) that differ in their pitch accent
Words such as /ame’/ (candy in a declarative intonation) versus /ame’?/ (candy in a question intonation)
Using a technique called near-infrared spectroscopy (NIRS), the team were able to examine whether the two brain hemispheres are activated differently in response to pitch changes embedded in a pair of words in speakers of the standard and the accent-less dialects.
When the participants heard the word pair that differed in pitch accent – /ame’/ (candy) versus /a’me/ (rain) – the left hemisphere of the brain was predominantly activated in standard dialect speakers.
But among accent-less dialect speakers, the left side of the brain was not activated as strongly. The researchers said this shows that standard Japanese speakers use the pitch accent to understand word meaning.
However, accent-less dialect speakers process pitch changes in a similar way to individuals who learn a second language later in life.
“The results are surprising because both groups are native Japanese speakers who are familiar with the standard dialect,” the Riken team said.
Dr Yutaka Sato said the study shows that an individual’s language experience at a young age can shape the way languages are processed in the brain.
“Sufficient exposure to a language at a young age may change the processing of a second language so that it is the same as that of the native language,” he added.