Recognizing Recurrent Neural Networks
Recurrent neural networks (RNNs) model the abundant recurrent connectivity among neurons in cortex. They have been used by computational neuroscientists to model brain processes such as perception, memory, and attention, and they are also the basis of many machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. A wide range of technical applications has shown that this simple but powerful scheme enables efficient classification of both static and dynamic real-world stimuli. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute.
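To make the basic computation concrete, the following is a minimal sketch of a standard discrete-time RNN update, in which each unit passes its integrated input through a nonlinearity (here tanh). The function name rnn_step, the weight matrices W and W_in, and the random stimulus u are illustrative placeholders, not the specific model used in the paper.

```python
import numpy as np

def rnn_step(x, u, W, W_in):
    # Each unit's output is a nonlinear function of its integrated input:
    # recurrent drive (W @ x) plus external drive (W_in @ u).
    return np.tanh(W @ x + W_in @ u)

rng = np.random.default_rng(0)
n_units, n_inputs, n_steps = 10, 3, 100
W = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))
W_in = rng.normal(size=(n_units, n_inputs))

x = np.zeros(n_units)
for t in range(n_steps):
    u = rng.normal(size=n_inputs)    # stand-in for a dynamic stimulus
    x = rnn_step(x, u, W, W_in)      # the state carries information over time
```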
We suggest that the RNN approach can be made more powerful by fusing it with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g., speech or kinematics. Given this generative RNN model, we derive a Bayesian agent that can decode its output. Critically, this Bayesian agent is a 'recognizing RNN' (rRNN), in which neurons compute and exchange predictions and prediction errors. The recognizing RNN has several desirable features that a standard RNN lacks, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. Consequently, we suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the scheme with an application to the online decoding (i.e., recognition) of human kinematics.
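The following sketch illustrates the general idea of recognition via prediction errors, under simplifying assumptions: a continuous-time generative RNN with linear readout C, tracked by a gradient-style correction on the precision-unweighted prediction error. This is an illustrative predictive-coding-flavored recognizer, not the paper's exact Bayesian inversion scheme; the names f, recognize, C, and the gain lr are all hypothetical.

```python
import numpy as np

def f(x, W):
    # Hypothetical generative RNN dynamics: leaky integration with
    # recurrent nonlinear coupling, dx/dt = -x + W @ tanh(x).
    return -x + W @ np.tanh(x)

def recognize(y_seq, W, C, dt=0.01, lr=0.1):
    """Track the hidden state of the generative RNN from observations y_seq.

    y_seq: sequence of observations (e.g., kinematic measurements),
    C:     assumed linear observation matrix, y = C @ x + noise,
    lr:    recognition gain weighting the prediction-error correction.
    """
    x = np.zeros(W.shape[0])          # agnostic initial condition
    estimates = []
    for y in y_seq:
        eps = y - C @ x               # prediction error on the observation
        x = x + dt * f(x, W)          # prediction: follow generative dynamics
        x = x + lr * (C.T @ eps)      # correction: reduce the prediction error
        estimates.append(x.copy())
    return np.array(estimates)
```

Because the estimate is continually pulled toward the data by the prediction-error term, decoding proceeds online and the dependence on the initial state washes out, which is the intuition behind the robustness properties described above.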
References
This work has been published as: Bitzer, S. and Kiebel, S. J. (2012). Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks. Biological Cybernetics.