Event archive

Host: Max Planck Research Group "Neural Mechanisms of Human Communication"

Bharath Chandrasekaran, PhD | Neural systems in auditory and speech categorization

Guest Lecture
The neuropsychology of familiar person recognition through face and voice will be surveyed from the clinical and the cognitive points of view, taking into account modality-specific (prosopagnosia and phonagnosia) and multimodal person recognition disorders. Our starting assumption was that many patients with right temporal lobe atrophy are incorrectly labelled as prosopagnosic, because faces are often considered the most important channel for recognizing familiar people. In fact, a multimodal familiar person recognition disorder may more accurately characterise the deficit in these patients. The clinical and cognitive implications of this starting point will be developed, and some current research perspectives will be presented.
The efficient processing of social information from the environment is critical for survival. For example, direct gaze captures attention and is rapidly detected in a visual search task, a phenomenon known as the eye contact effect (Senju & Johnson, 2009). Similarly, fearful stimuli have been shown to capture attention and to be detected faster than non-fearful stimuli. We therefore used eye contact as a typical example of social information and fearful faces as an example of threat. Using behavioral measures such as eye movements and skin conductance responses, in combination with neural measures from functional magnetic resonance imaging, we investigated whether and to what extent the processing of such information depends on awareness. In a series of experiments, we first showed that gaze direction can be processed unconsciously and that typically developed (TD) adults have a bias towards faces with direct gaze even when these faces are presented outside of awareness. Interestingly, individuals with autism spectrum disorder (ASD) show an unconscious avoidance of direct gaze. Neurally, faces with direct gaze require less neural activity to reach awareness than faces with averted gaze, providing a neural basis for the access of direct gaze to awareness. Finally, we observed that eye movements were not biased towards an aversively conditioned fearful face when it was presented outside of awareness. These results suggest that eye gaze information is initially processed through a subcortical, ‘quick and dirty’ pathway involving the amygdala and superior colliculus. Further, they indicate that eye avoidance in ASD is an involuntary and automatic effect. Finally, they show that awareness may be necessary for the commonly reported attentional bias towards aversive stimuli to emerge, placing a limit on the unconscious processing of social stimuli.
Abstract reasoning relies on a sequence of cognitive steps involving phases of task encoding, the structuring of solution steps, and their execution. On the neural level, metabolic neuroimaging studies have associated a distributed cognitive control or multiple-demand (MD) network with various aspects of abstract reasoning, and lesions within this network have been highly predictive of loss of fluid intelligence. By means of fMRI, I have specified the link between MD functions and fluid intelligence: low fluid intelligence has been associated with poor foregrounding of task-critical information across the MD system, accompanied by impaired performance. A second line of my research concerns the millisecond-by-millisecond neural dynamics in MD cortices. Evoked EEG-MEG source analyses revealed independent activation dynamics in frontal and parietal cortices within the first second of an abstract reasoning process. Oscillatory source power analyses made it possible to dissociate the memory and executive control functions underlying different reasoning strategies. Together, my multi-method neuroimaging approach has provided insights into the anatomical, spatio-temporal, and oscillatory neural signatures of human abstract reasoning and fluid intelligence.

Alexandra Jesse, Ph.D. | Speech perception in face-to-face communication

Guest Lecture