Event archive

Host: CBS CoCoNUT
Please join via Zoom: https://zoom.us/j/98507777689

Mariya Toneva | Convergence and divergence between language models and human brains

Guest Lecture

Prof. Felix Biessmann | Code Reviews, Testing and Documentation - Improving Processes in Data-Driven Software Development

Guest Lecture

Dr Kamila Jozwik | Disentangling and modelling face perception and animacy representation

Guest Lecture

Prof. Drew Linsley | Harmonizing the object recognition strategies of deep neural networks with humans

Guest Lecture

Greta Tuckute | Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence

Guest Lecture

Dr Laurent Caplette | Characterizing mental representations using deep image synthesis and behavior

Guest Lecture

Dr Johannes Jäger | How Organisms Come to Know the World: Fundamental Limits on Artificial General Intelligence

Guest Lecture

Dr Joscha Bach | Vectors of Intelligence: making sense of intelligent systems with universal capabilities

Guest Lecture

Andrew Brock | Understanding when Deep Nets are trainable: Busting Batchnorm, Clipping Gradients, and Plotting Everything.

Guest Lecture
Under what conditions will a neural network train, and under what conditions will it train well? Years of experimentation have led the community to develop a fairly robust recipe book for training deep nets on common tasks, and a series of slightly delayed efforts have built up a reasonably deep understanding of the mechanisms underlying the success of these techniques. In this talk, I'll discuss our recent work on understanding and improving signal propagation in deep neural networks, with a focus on the process by which one might discover and visualize quantities of interest, use that knowledge to ground the development of new techniques in empirical understanding, and maybe land an ImageNet SOTA or two in the process.
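
As a rough illustration of the kind of gradient clipping the talk title alludes to (not the speaker's actual method or code), the following minimal PyTorch sketch rescales gradients to a bounded global norm inside an ordinary training step; the model, data, and the max_norm value of 1.0 are illustrative placeholders.

# Minimal sketch: global-norm gradient clipping in a standard PyTorch step.
# Model, batch, and threshold are placeholders chosen for illustration only.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                          # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randn(32, 1)    # dummy batch

opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Rescale gradients so their global L2 norm does not exceed max_norm,
# keeping update magnitudes bounded when signal propagation misbehaves.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()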