Prof. Sascha Fruehholz | Neural system for perceiving and producing affective vocalizations

Guest Lecture

  • Date: Aug 10, 2017
  • Time: 04:00 PM - 05:00 PM (Local Time Germany)
  • Speaker: Prof. Sascha Fruehholz
  • Affiliation: University of Zurich, Department of Psychology, Cognitive and Affective Neuroscience
  • Location: MPI for Human Cognitive and Brain Sciences
  • Room: Wilhelm Wundt Room (A400)
  • Host: Independent Research Group "Neural Mechanisms of Human Communication"
Human vocalizations convey socially important information during auditory communication. Affective cues in voices are one type of this social information: speakers use them to encode their emotions when producing vocalizations, and listeners use them to infer the emotional state of the speaker when perceiving these vocalizations. Concerning the latter, the primate and especially the human brain appears to have developed a distributed network of brain regions involved in the perceptual and neural decoding of affective vocalizations. Using new methodological approaches in neuroimaging and data analysis, we have recently begun to understand this network in more detail, both in terms of a functional description of brain regions and in terms of neural network dynamics. Recent data from our group point to a distributed cortico-subcortical functional network that might transpose acoustic cues of voices into a cognitive representation of their affective meaning.

Social communication usually involves a perception-action cycle, such that vocalizations are often produced in response to perceived vocalizations, and this cycle is thought to support the understanding of others' vocalizations in terms of embodied representations of perceived affect. We therefore also recently investigated, using neuroimaging, the neural network dynamics underlying the production of human affective vocalizations. Neural data from two recent studies in humans point to a distributed brain network that largely overlaps with the network engaged during perception. These data also critically extend recent models of vocal production derived from animal studies by highlighting the importance of auditory feedback during the production of vocal output.
