Anyone who has ever had to concentrate on an important task knows how difficult this is when people are talking around you, especially if the talk is unnecessary. The primary visual cortex, also known as V1, faces a similar problem time and again: the flashing of an image on the retina sets millions of neurons in V1 in motion. Their job is to segment and filter visual information to produce an accurate and useful representation of what the eye sees. To do this, neighboring neurons talk to each other, which can be redundant when they see similar things.
V1 cannot ask for silence as we do. However, it has mechanisms to avoid unnecessary talking. One of these mechanisms may be predictive coding: comparing predictions made in the brain with the actual sensory input and passing only the prediction errors on to the next higher area of the visual cortex. Neurons in V1 would thus predict or complement what their neighbors are trying to say, because they see the same or very similar parts of an image. This prevents every neuron from speaking up at the same time, and the brain can concentrate on more important things.
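The core idea of predictive coding, that only prediction errors are passed on, can be illustrated with a toy sketch. This is not the model from the study; it simply assumes each "neuron" predicts its input as the average of its two neighbors and transmits only when the prediction fails:

```python
# Toy 1-D "retinal input": a smooth ramp with one salient jump.
signal = [i / 10 for i in range(11)]
signal[5] += 0.8  # an unpredictable, salient feature

# Each unit predicts its input as the mean of its two neighbors
# (a stand-in for the contextual predictions V1 neurons might make).
def prediction(x, i):
    left = x[max(i - 1, 0)]
    right = x[min(i + 1, len(x) - 1)]
    return (left + right) / 2

errors = [signal[i] - prediction(signal, i) for i in range(len(signal))]

# Only large prediction errors are passed on to the "next higher area".
transmitted = [i for i, e in enumerate(errors) if abs(e) > 0.1]
print(transmitted)  # → [4, 5, 6]: only units near the jump speak up
```

The smooth, predictable parts of the signal produce near-zero errors and stay silent; only the salient jump is communicated onward.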
Tests with natural stimuli
Researchers at the Ernst Strüngmann Institute (ESI) for Neuroscience have now shown that synchronization and firing rate of neurons play important, albeit different, roles in this context. They recently published their findings in the prestigious journal Neuron.
The neuroscientists showed photos of flowers, trees, buildings, and other natural or man-made objects to three macaque monkeys while simultaneously measuring their brain activity in the primary visual cortex (V1). For natural stimuli, the information in the receptive fields of V1 can often be predicted from context. For example, the outline of a tree trunk, even if partially obscured, is likely to continue vertically; that of an apple is likely to curve slightly. To measure the predictability of the image features, the researchers programmed an artificial neural network.
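The principle behind such a predictability measure can be sketched in a few lines. The study used a deep neural network; the hypothetical version below substitutes a simple linear least-squares model that predicts the center sample of a 1-D signal from its surround, and scores predictability as the residual error. Smooth structure (like a continuing outline) should yield low error, noise high error:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_prediction_error(signals, context=2):
    """Fit a linear model predicting the centre sample of each window
    from its surround; return the mean squared prediction error."""
    X, y = [], []
    for s in signals:
        for i in range(context, len(s) - context):
            surround = np.r_[s[i - context:i], s[i + 1:i + context + 1]]
            X.append(surround)
            y.append(s[i])
    X, y = np.array(X), np.array(y)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

# Smooth signals (like a tree trunk's continuing outline) vs. pure noise.
smooth = [np.sin(np.linspace(0, 3, 50) + p) for p in rng.uniform(0, 6, 20)]
noise = [rng.standard_normal(50) for _ in range(20)]

# Smooth structure is far more predictable from context than noise.
print(mean_prediction_error(smooth) < mean_prediction_error(noise))  # → True
```

Low prediction error marks image regions whose content is redundant given the surround; high error marks salient, hard-to-predict content.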
Prediction errors increase firing rates
“We observed that neurons fire particularly strongly when prediction errors occur. By using deep neural networks, we were able to precisely pinpoint what kind of prediction errors V1 neurons care about. In principle, V1 neurons could predict all kinds of features in the image, its precise pixel structure for example. However, it turns out that they mostly care about predicting features that are important for the recognition of objects. This kind of predictability determines the salience of objects in an image, that is, where humans would look in an image,” says Cem Uran. He is a PhD student in the Vinck Lab and one of the first authors of the study, along with ESI scientist Alina Peter.
The researchers also shed light on the long-standing question of the function of brain waves in the gamma frequency range. There have been many theories on this subject. A 2016 paper by Vinck and Bosman proposed that gamma brain waves occur when visual inputs can be well predicted, while other researchers held the opposite view. Cem Uran adds: “Indeed, gamma brain waves are particularly strong when predictions are correct. Surprisingly, they were strongest for regions of the image with low salience, which contain a lot of redundant information. We are proud that predictability explains the amplitude of gamma oscillations better than any competing model. With these results, we get a step closer to answering the question of how the brain processes information.”
Implications for gamma brain waves
Research group leader Martin Vinck notes that these findings have important implications for the function of gamma brain waves. They fit well with new evidence from his laboratory that these waves may suppress feedforward communication to higher areas rather than promote it: “You can think of gamma oscillations as an extreme state that occurs when all the information in an image is redundant and where V1 can already figure out what’s happening in the image without having to pass this information on to higher areas. But when there is something salient and non-predicted in the image, these gamma waves collapse and neurons increase their firing rates, which will lead to communication with higher brain areas.”
Ultimately, these findings may have important implications for how the brain learns in a self-supervised way from sensory data itself. The researchers speculate that the brain can continuously learn from the sensory inputs it receives by playing a “prediction game” in which it tests its predictions against the incoming visual inputs. The kind of artificial network the researchers developed may emulate how the brain implements this, which in turn could lead to a fundamental theory of learning in the visual cortex.
Uran C, Peter A, Lazar A, Barnes W, Klon-Lipok J, Shapcott KA, Roese R, Fries P, Singer W, Vinck M (2022). Predictive coding of natural images by V1 firing rates and rhythmic synchronization. Neuron. https://doi.org/10.1016/j.neuron.2022.01.002