Event archive

Prof. Svenja Caspers | Interindividual variability of brain phenotypes – towards population neuroimaging

Guest Lecture

Prof. Eleanor A. Maguire | Building mental representations: from scenes to events

Mind Meeting

Prof. Jens Meiler | Innovative Computational Methods for Protein Structure Prediction, Drug Discovery, and Therapeutic Design

Guest Lecture

Dr Louise P. Kirsch | What’s so special about touch? A multidimensional approach to study social touch

Guest Lecture

Prof. Russell Poldrack | What's wrong with neuroimaging research, and how can we make it right?

Guest Lecture

Dr Marlene Bönstrup | Low-frequency brain oscillations as a target for on-demand brain stimulation in human motor rehabilitation

Cognitive Neurology Lecture

Software Solutions for Modeling and Analyzing Brain Dynamics at Different Scales

Workshop

Dr Katherine Storrs | Learning About the World By Learning About Images

Guest Lecture

Computational visual neuroscience has come a long way in the past 10 years. For the first time, we have fully explicit, image-computable models that can recognise objects with near-human accuracy and predict brain activity in high-level visual regions. I will present evidence that diverse deep neural network architectures all predict brain representations well, and that task-training and subsequent reweighting of model features are critical to this high performance. However, vision is not yet explained. The most successful models are deep neural networks that have been supervised using ground-truth labels for millions of images. Brains have no such access to the ground truth, and must instead learn directly from sensory data. Unsupervised deep learning, in which networks learn statistical regularities in their data by compressing, extrapolating or predicting images and videos, is an ecologically feasible alternative. I will show that an unsupervised deep network trained on an environment of 3D rendered surfaces with varying shape, material and illumination spontaneously comes to encode those factors in its internal representations. Most strikingly, the network makes patterns of errors in its perception of material which follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how our perceptual dimensions arise.
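To make the "learning by compressing" idea in the abstract concrete, the sketch below shows a minimal convolutional autoencoder: a network that learns internal representations with no labels at all, purely by reconstructing its input. The architecture, layer sizes, and synthetic stand-in images are illustrative assumptions, not the model or training data from the talk.

```python
# Minimal unsupervised-compression sketch (illustrative, not the speaker's model).
# The only learning signal is reconstruction error; the latent code is the
# representation the network "spontaneously" learns from the images.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: compress a 1x28x28 image into a small latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 16x14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32x7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )
        # Decoder: reconstruct the image from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent code: the learned representation
        return self.decoder(z), z

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in data; in practice this would be natural images or
# rendered surfaces varying in shape, material and illumination.
images = torch.rand(64, 1, 28, 28)

for step in range(100):
    recon, latent = model(images)
    loss = loss_fn(recon, images)     # reconstruction error drives learning
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the `latent` vectors can be probed (e.g. with simple regressions) to ask whether factors such as shape, material or illumination are encoded, which is the kind of analysis the abstract describes at a much larger scale.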