Language – the big unknown
Language is our everyday tool. We chat, listen, discuss, write, think and formulate all day long. Nevertheless, little is known so far about this natural yet highly complex ability. What is the ideal speaking rate for a listener to absorb the most content? Why is language structured the way it is? And what happens when one of the crucial brain areas fails? Two new research groups at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) are tackling these questions – to shed more light on this uniquely human capability.
Everybody knows the situation: a person speaks too fast, without pause or punctuation, and delivers too much information in too little time. At some point your receptivity is exceeded, even when you are focused. At the same time, we can hardly bear it when someone talks too slowly and conveys too little content. There seems to be a given beat at which the brain extracts information. Lars Meyer and his research group are eager to find this language cycle.
Their ideas are based on classical findings from behavioural research: words heard within a certain time window are linked much more closely to each other than those appearing before or after it. The researchers assume that these relations are due to an intrinsic working rhythm of the brain.
This becomes clear with ambiguous sentences whose meaning depends on how the words are connected. The example "The man sees the woman with binoculars" can be interpreted in two ways. Who has the binoculars – the man or the woman? Most people assign them to the man. Meyer's suspicion: "The man sees the woman" is spoken within one processing window, while "binoculars" falls into the next one. "Woman" is therefore not associated with "binoculars". And this is not due to a speech pause or any other prosodic emphasis: listeners still divide the sentence into these segments even when it is presented as a monotone word sequence. Hence, the brain structures language not in response to an acoustic cue but independently of it.
The brain sets the beat
Why these time windows? "The electrophysiology of our brain is cyclical. Our neurons don't pick up information as a continuous data stream but in successive portions," Lars Meyer says. The duration of the cycles accordingly determines how long these portions are. "Brrr-brrr for phrases," the linguist explains while drawing a wave of almost three seconds, "bip-bip-bip for single syllables," adding the short frequencies within the long waves.
What we do not know yet: how much is the brain able to process within each unit, and which content is picked up particularly efficiently? Depending on the length and number of words, each processing cycle packs in different amounts of information. Furthermore, do these time windows apply to every language on earth? Are they even the reason why, from Russian to Japanese, information is packed into larger units such as words, phrases and sentences?
The scientists want to figure out these relations using electro- and magnetoencephalography – and with the help of a real classic. "The Little Prince", translated into 25 languages, is played to the study participants in their own mother tongue, and they indicate at which point an information unit ends.
"These findings could help to optimize audio books, language-learning programmes or even lessons at schools and universities," Meyer states. If we know what language should look like for listeners to grasp the most of it, we could design learning more efficiently.
When language suddenly fails
Learning also plays a crucial role in the work of Gesa Hartwigsen. But her focus is less on grasping the most content than on relearning language itself when it suddenly fails after a stroke or an accident. Her research group "Cognition and Plasticity" aims to reveal what happens in the brain when language areas are injured and how it tries to repair the damage.
Previous studies have shown that recovery varies widely. In some cases the affected person recovers their language almost completely. One of the most important conditions is which area is affected: the more basic the function it carries out, the less likely it is to be regained. Which mechanisms operate in the successful cases, in turn, is still unclear.
Hartwigsen and her team want to change that. For that, they mainly use two methods: functional magnetic resonance imaging (fMRI) to observe activity in the brain, and so-called transcranial magnetic stimulation (TMS). With TMS, single brain areas can be disturbed for a short time. Unlike in a stroke, no neurons are damaged; the interruptions are brief and temporary and only affect the speed of information processing. In this way the scientists can watch how the brain reacts to such a failure – and whether language suffers from it.
When they disturb the area processing word meaning, for instance, the study participants do not show any language problems. They can still judge whether a "cat" is a man-made object or something naturally occurring. In parallel, neighbouring regions become more active – regions that originally deal with sound properties and that are part of the working memory. Obviously these structures are able to compensate for disturbances.
Language as a model for flexible adaptation in the brain
But Hartwigsen wants more. Language serves her as a model for understanding how the brain and its networks cope with changing conditions. How does it react to further challenges, for instance? And what happens in general when single parts fail, independently of the language system?
It is known that the brain remains flexible over the whole lifespan. It always tries to adapt to new conditions and, depending on the situation, falls back on different mechanisms. To understand them, it is worth taking a look behind the curtain of higher cognitive functions such as language, social interaction and problem solving. The brain processes these functions in networks, in which single hubs take on specific roles that together yield the actual capability. If one of these hubs is impaired or overloaded, it can no longer fulfil its task – and the brain looks for alternatives.
Three emergency plans are known so far, which Hartwigsen has brought together in her compensation model. One way: the brain ramps up another hub in the same network and shifts some of the work to it. In a second option, the brain recruits the counterpart in the other hemisphere. Language networks, for instance, lie mostly in the left hemisphere; in that case, the hitherto barely used homologues on the right side provide support. If those mechanisms are not sufficient, the brain can still help itself with areas that originally have nothing to do with the actual ability but rather with general functions such as attention or working memory. General functions can thereby support specialized ones. This phenomenon can be observed especially in stroke patients: when their language recovers, the brain has mostly activated precisely these attention and memory networks.
Yet researching language does not just mean understanding the brain's beat or its capacity to adapt and recover. It also means comprehending what makes humans human, says Angela D. Friederici, head of the Department of Neuropsychology and director at MPI CBS: "Apes communicate, humans have language."