Greta Tuckute | Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence

Guest Lecture

  • Date: Dec 2, 2022
  • Time: 03:00 PM - 04:00 PM (local time, Germany)
  • Speaker: Greta Tuckute
  • Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, USA
  • Location: MPI for Human Cognitive and Brain Sciences
  • Room: Zoom Meeting
  • Host: CBS CoCoNUT
Deep neural networks are commonly used as models of the visual system, but remain less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models. We evaluated brain-model correspondence for publicly available audio neural network models, along with in-house models trained on four different tasks. Most tested models out-predicted previous filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: middle model stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. The training task influenced prediction quality for specific cortical tuning properties, with the best overall predictions coming from models trained on multiple tasks. These results underscore the importance of task optimization in constraining brain representations.
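The model-to-brain predictions described above are typically obtained by regressing fMRI voxel responses on model-stage activations and scoring held-out correlation. The sketch below illustrates that general approach with cross-validated ridge regression on synthetic data; the function names and the closed-form ridge solver are illustrative assumptions, not the speaker's actual pipeline.

```python
# Illustrative sketch (not the authors' code): predicting fMRI voxel
# responses from neural-network activations with ridge regression, then
# scoring held-out Pearson correlation per voxel.
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge weights: w = (X^T X + alpha*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def voxel_prediction_score(activations, voxels, n_train):
    """Fit on a train split; return Pearson r per voxel on the held-out split."""
    Xtr, Xte = activations[:n_train], activations[n_train:]
    Ytr, Yte = voxels[:n_train], voxels[n_train:]
    W = ridge_fit(Xtr, Ytr)
    pred = Xte @ W
    # Pearson correlation per voxel between predicted and measured responses
    pred_c = pred - pred.mean(axis=0)
    meas_c = Yte - Yte.mean(axis=0)
    return (pred_c * meas_c).sum(axis=0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(meas_c, axis=0))

# Synthetic demo: voxel responses are a noisy linear readout of features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 200 sounds x 50 model features
W_true = rng.normal(size=(50, 10))    # 10 synthetic voxels
Y = X @ W_true + 0.1 * rng.normal(size=(200, 10))
scores = voxel_prediction_score(X, Y, n_train=150)
```

In the study, such scores would be computed per model stage and per brain region, so that the stage giving the best prediction of each region reveals the hierarchical correspondence (middle stages for primary auditory cortex, deep stages for non-primary cortex).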