Max Planck Research Group "Auditory Cognition"
The group’s newsfeed can be found at http://obleserlab.com/.
AC IS MOVING ON IN 2016
Please note that as of January 2016, the research group “Auditory Cognition” of Prof. Obleser will be based exclusively at the University of Lübeck. The newsfeed remains at ObleserLab.com / AuditoryCognition.com.
With our research, we hope to foster a unique cognitive neuroscience perspective on challenging listening situations, age-related hearing loss, and the possibilities of adapting successfully to these challenges.
Audition poses particular challenges to neuroscience: First, the “bottom-up” processes of acoustically decoding and neurally encoding the auditory signal along the central auditory pathways are not well understood. Second, humans cope surprisingly well with various sorts of occlusions, deletions, and degradations in their auditory input—in phone lines and at noisy parties, in chronic hearing damage, or, most drastically, when living with a cochlear implant.
Our group is interested in the following main questions:
How does the human brain analyse, categorise, and interpret meaningful sounds such as speech, particularly under substantial degradation?
How do contextual cues facilitate this process? Semantic context, as well as simple temporal or spectral regularities in sound, can shape neural processing and facilitate the integration of information.
How can cognitive mechanisms effortfully compensate for degraded sound? Executive functions such as working memory and cognitive control clearly support successful coping with degradation; how they interface neurally with auditory processes, however, remains unclear and is of particular relevance to our work.
These key questions touch on speech and hearing, psychology and neuroscience alike. We pursue them using listening and learning experiments and various methods of brain imaging.
First, we ask which brain areas within the auditory cortex, and beyond, contribute critically to the emergence of meaningful auditory and speech percepts, and how they interact. We investigate this mainly using fMRI.
Second, we use M/EEG to study oscillatory brain dynamics and infer brain states that precede and accompany successful speech comprehension. In short, what are good indicators of facilitation and compensation in the time–frequency domain?
Third, we aim to isolate individual markers of auditory skills, cognitive ability, and brain structure that can help us predict the extent to which listeners will be able to cope with adverse listening situations.
Answers to these questions will further our knowledge of the listening brain and of the human faculty of speech comprehension. Eventually, they should also prove useful in developing new approaches to the treatment of hearing disorders.