Remaining Group of von Kriegstein: Neural Mechanisms of Human Communication
Although it seems easy, communicating with another person is an extremely complex task. A conversation between two people involves a continuous stream of dynamic information from several sensory modalities. Embedded in this stream is information that is essential for successful interaction with others: not only what is said, but also the identity, character, social status, and emotion of the speaker. Communication is made even more complex by the need to produce and recognize signals and their underlying meaning online, i.e., without much delay. It is fascinating that our brain can do all this given the sheer speed of communication, e.g., the rapidly changing face movements and the associated speech sounds. It is currently impossible to build devices that communicate as we do: the best computer programs for recognizing speech or identifying people still fall far short of the capabilities of our brains.
The question is: how does the brain accomplish fast and robust communication? One way to find out is to observe the brain and infer which neural mechanisms are used. To do this, we perform experiments using a broad methodological approach (e.g., functional and structural MRI, MEG, tDCS, eye-tracking) and advanced analysis techniques. Our research involves different participant groups: healthy controls as well as people with selective developmental or acquired deficits (developmental dyslexia, autism spectrum disorders, phonagnosia, and developmental prosopagnosia). In addition, we have recently started to use these experimental findings on neural mechanisms to motivate computational models of human communication.
Currently, our work focuses on three aspects of auditory and face-to-face communication:
i. Speech recognition: How do we understand what somebody is saying?
ii. Person recognition: How do we recognize and identify others?
iii. Multisensory integration: How does information from different sensory modalities interact during face-to-face communication?