ESI Lecture by Wolfgang Maass
Brain-inspired alternatives to deep learning
How recurrently connected networks of neurons in the brain acquire their astounding computing capabilities through learning has remained a mystery.
The gold standard from machine learning for training recurrent networks is a special case of deep learning: backpropagation through time (BPTT), where the recurrent network is virtually "unrolled" into a very deep feedforward network. But implementations of BPTT require propagating error signals backwards in time through rather odd circuitry. Hence the brain is likely to use other methods. Discovering such alternatives is also essential for neuromorphic engineering, where the lack of efficient yet powerful learning methods for recurrent neural networks in neuromorphic hardware hinders progress in this technology. I will revisit the mathematical foundation of gradient descent in recurrent neural networks, and show that functionally powerful approximations to BPTT exist that neither conflict with biological data nor run into such efficiency issues. These new methods, termed e-prop, point to the importance of slow processes in neurons and synapses that can form eligibility traces for plasticity, in conjunction with specialized brain structures and mechanisms that provide learning signals for local populations of neurons.
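To give a rough flavor of this idea, the following NumPy sketch shows an e-prop-style update for a simplified leaky recurrent unit: each synapse maintains a locally computable eligibility trace (filtered presynaptic activity gated by the postsynaptic derivative), and weight changes are the product of that trace with a learning signal broadcast from the output error. The simplified neuron model, the specific trace filter, and the fixed random feedback matrix B are illustrative assumptions for this sketch, not the exact formulation of the cited preprint.

    # Sketch of an e-prop-style update for a leaky recurrent unit (NumPy).
    # All quantities are computed forward in time; no backpropagation through time.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_rec, n_out, T = 3, 5, 2, 20
    alpha, lr = 0.9, 1e-2                         # leak factor, learning rate

    W_in  = rng.normal(0, 0.3, (n_rec, n_in))     # input weights
    W_rec = rng.normal(0, 0.3, (n_rec, n_rec))    # recurrent weights
    W_out = rng.normal(0, 0.3, (n_out, n_rec))    # readout weights
    B     = rng.normal(0, 0.3, (n_rec, n_out))    # fixed random feedback weights (assumed)

    x      = rng.normal(0, 1, (T, n_in))          # input sequence (dummy data)
    target = rng.normal(0, 1, (T, n_out))         # regression target (dummy data)

    v = np.zeros(n_rec)                           # hidden state (membrane potential)
    z = np.zeros(n_rec)                           # neuron output
    zbar_in, zbar_rec = np.zeros(n_in), np.zeros(n_rec)   # filtered presynaptic activity
    dW_in, dW_rec = np.zeros_like(W_in), np.zeros_like(W_rec)

    for t in range(T):
        z_prev = z
        v = alpha * v + W_in @ x[t] + W_rec @ z_prev      # leaky integration
        z = np.tanh(v)                                    # neuron output
        psi = 1.0 - z**2                                  # (pseudo-)derivative dz/dv

        # Eligibility traces: slow, purely local synaptic variables.
        zbar_in  = alpha * zbar_in  + x[t]
        zbar_rec = alpha * zbar_rec + z_prev
        e_in  = np.outer(psi, zbar_in)
        e_rec = np.outer(psi, zbar_rec)

        # Learning signal: output error broadcast to each neuron through fixed weights B.
        err = W_out @ z - target[t]
        L = B @ err

        # e-prop-style update: learning signal times eligibility trace, accumulated over time.
        dW_in  += L[:, None] * e_in
        dW_rec += L[:, None] * e_rec

    W_in  -= lr * dW_in
    W_rec -= lr * dW_rec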
Details can be found in the preprint: Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets, by Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass, arXiv, 2019, https://arxiv.org/abs/1901.09049