Ruben van Bergen, PhD

I build probabilistic recurrent neural networks to understand active inference in brains and machines


Selected Publications

(For a full list, see my Google Scholar profile)

 

van Bergen & Kriegeskorte (2020). Going in circles is the way forward: the role of recurrence in visual inference. Current Opinion in Neurobiology. (pdf)

Artificial neural networks (ANNs) are everywhere in artificial intelligence these days. With modern learning algorithms and computer hardware, they can be trained to do very complex tasks, such as playing video games or translating text from one language to another. Sometimes these artificial networks even outperform the neural networks contained in human skulls - that is, our brains. Neuroscientists, trying to better understand brains, have also used neural network models to test their hypotheses in computer simulations. One major focus of neuroscience research is to understand the neural computations underlying visual perception. Interestingly, some ANNs are getting pretty good at visual perception as well - almost as good as humans, some would argue.

But there is one curious discrepancy between ANNs used in computer vision and their biological cousins. ANNs used for visual tasks are almost exclusively feedforward. This means that information is processed in a linear sequence of steps. Each processing stage in the network performs its computation and forwards the result to the next processing stage, and so on, until the final stage gives the answer or performs the action the network was made for. Crucially, information is never sent backwards. It never loops around to be processed again by the same neurons. This is not at all like what happens in the brain, which is very loopy indeed. Different areas in the brain, including the visual cortex, pass information back and forth, and neurons within the same area also communicate heavily amongst each other.

So what gives? Why do ANNs attack the same problem so differently, if the brain solved it another way? And if feedforward ANNs are apparently so successful, why does the brain bother with a more complicated, loopy wiring solution? In this review paper, we lay out the possible roles that recurrent (loopy) connections might fulfil in visual perception. We argue that the discrepancy between brains and ANNs is not just a quirk of biology: recurrence has fundamental computational roles to play that might equally benefit computer vision.
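To make the architectural contrast concrete, here is a minimal PyTorch sketch (my illustration, not code from the paper): the feedforward network passes information through each stage exactly once, while the recurrent network feeds its own output back in and re-processes the evidence over several time steps.

```python
import torch
import torch.nn as nn

class FeedforwardNet(nn.Module):
    """Information flows one way: each stage feeds the next, never back."""
    def __init__(self, dim=64, n_classes=10):
        super().__init__()
        self.stages = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, n_classes),
        )

    def forward(self, x):
        return self.stages(x)  # one pass, no loops

class RecurrentNet(nn.Module):
    """One stage applied repeatedly: its output loops back as input, so the
    same neurons re-process the evidence over several time steps."""
    def __init__(self, dim=64, n_classes=10, steps=5):
        super().__init__()
        self.steps = steps
        self.recur = nn.Linear(dim + dim, dim)   # input + feedback -> new state
        self.readout = nn.Linear(dim, n_classes)

    def forward(self, x):
        state = torch.zeros_like(x)
        for _ in range(self.steps):              # the loop that feedforward nets lack
            state = torch.tanh(self.recur(torch.cat([x, state], dim=-1)))
        return self.readout(state)

x = torch.randn(8, 64)                # a batch of 8 toy "images" (feature vectors)
print(FeedforwardNet()(x).shape)      # torch.Size([8, 10])
print(RecurrentNet()(x).shape)        # torch.Size([8, 10])
```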

 

van Bergen & Jehee (2019). Probabilistic representation in human visual cortex reflects uncertainty in serial decisions. Journal of Neuroscience. (pdf)

Human observers show a serial dependence effect when judging orientations: they tend to err towards the orientation of previously seen stimuli. But could this apparently errant behavior actually reflect a helpful strategy? After all, we live in a world that doesn’t change much from moment to moment, so we can generally assume that most things are pretty stable (unless the evidence says otherwise). If we do this right, as a statistically ideal observer would, we should rely more on this stability assumption when we are less sure about what we just saw. That is, when our current sensory information is uncertain, we should rely more on previous sensory observations. This is the hypothesis that we tested in this study. We used our probabilistic decoding method to measure the trial-by-trial uncertainty in stimulus representations in visual cortex. Consistent with our predictions, we found that our participants showed a stronger serial dependence bias on trials where their uncertainty had increased with respect to the previous trial. This suggests that serial dependence may arise from a perceptual inference strategy that is statistically optimized for natural environments.
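Here is a toy sketch of that hypothesis (an illustration with invented numbers, treating orientation as a linear variable for simplicity; this is not the paper's analysis code): an observer that assumes the world is stable combines the current noisy measurement with the previous percept, weighting each by its reliability, so a noisier measurement gets pulled more strongly toward the previous trial.

```python
import numpy as np

def serial_estimate(prev_est, prev_var, meas, meas_var, drift_var=4.0):
    """One trial of reliability-weighted integration. drift_var encodes how
    much the world is assumed to change between trials."""
    prior_var = prev_var + drift_var             # previous percept, degraded by drift
    w = prior_var / (prior_var + meas_var)       # weight on the current measurement
    est = w * meas + (1 - w) * prev_est
    est_var = prior_var * meas_var / (prior_var + meas_var)
    return est, est_var

rng = np.random.default_rng(0)
true_ori, prev_est = 45.0, 60.0                  # degrees; previous trial's percept
for meas_var in (1.0, 25.0):                     # low vs. high sensory uncertainty
    meas = true_ori + rng.normal(0.0, np.sqrt(meas_var))
    est, _ = serial_estimate(prev_est, prev_var=2.0, meas=meas, meas_var=meas_var)
    print(f"meas_var={meas_var:5.1f} -> estimate {est:5.1f} deg "
          f"(bias toward previous trial: {est - true_ori:+5.1f})")
```

With low measurement noise the estimate stays near the true orientation; with high noise it is pulled toward the previous percept, which is the uncertainty-dependent serial bias the study reports.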

 

van Bergen & Jehee (2018). Modeling correlated noise is necessary to decode uncertainty. NeuroImage. (pdf)

Previously, my collaborators and I had developed a machine learning algorithm to decode probability distributions from brain activity in visual cortex (see below). These distributions can tell us how precisely information about the world is encoded in brain activity, or how much uncertainty it contains. In this paper, we explain some important insights that went into the design of this algorithm, concerning the noise in the visual cortex activity that we measure with fMRI. Just as a 2D image consists of pixels, fMRI brain images are 3D and consist of voxels. Within each voxel (a little cube with sides of about 2 mm), we measure brain activity every two seconds. When trying to interpret what the activity in each voxel means, it is important to understand how much of that voxel’s response is actually meaningless noise. That much was clear from the start.

But this meaningless noise isn’t always specific to individual voxels. Often voxels share noise, like a group of friends sharing untrue gossip. When you hear the same rumor repeated by five of your friends, you might start to believe it. But if you then find out that they all heard it from the same person, you might reconsider. Similarly, when our decoding algorithm tries to interpret the activity in fMRI voxels, it needs to know that some of these voxels are in “gossip cliques” where false rumors can spread. We found that, if we didn’t equip the algorithm with this knowledge, it performed very badly. Fortunately, we also figured out a simple way to chart the pattern of noise-sharing across the visual cortex, in broad strokes that were good enough to discount most of the “gossip”.
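A small numerical illustration of the “gossip” problem (my own sketch; the voxel count and correlation are invented, and this is not the paper's noise model): when voxels share noise, pooling them reduces uncertainty far less than an independence assumption would predict.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_trials, rho, sigma = 50, 10_000, 0.4, 1.0

# Covariance with a shared "gossip" component: every voxel pair correlates at rho.
cov = sigma**2 * ((1 - rho) * np.eye(n_vox) + rho * np.ones((n_vox, n_vox)))
noise = rng.multivariate_normal(np.zeros(n_vox), cov, size=n_trials)

pooled = noise.mean(axis=1)                    # average the voxels on each trial
print(f"actual variance of the voxel average:          {pooled.var():.3f}")
print(f"variance predicted if voxels were independent: {sigma**2 / n_vox:.3f}")
# ~0.41 vs. 0.02: a decoder that ignored the shared noise would be roughly
# 20x overconfident about how much the pooled voxels can tell it.
```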

 

van Bergen, Ma, Pratte & Jehee (2015). Sensory uncertainty decoded from visual cortex predicts behavior. Nature Neuroscience. (pdf)

When you look at something, a flurry of electrical activity happens in the back of your brain, in the visual cortex. This electrical activity tells other areas of the brain what it is that you are looking at. But there’s a problem: the information isn’t totally reliable. As with a badly tuned radio, there is noise on the line, and other factors conspire to make the image on your retina ambiguous. Sometimes the information is more reliable than other times, but there is always some uncertainty associated with your visual input (and indeed, any sensory input). In some cases, you may be acutely aware of this uncertainty, such as when you’re in a dark environment where objects and people are hard to make out. But even when you don’t notice it, sensory uncertainty may still play a role “under the hood”, as the neurons in your brain are trying to combine uncertain pieces of information into something that makes sense. Statistical theory tells us that the best way to do this is to weight each piece of information by its uncertainty, and indeed this is what people seem to do if we look at their behavior in simple perceptual tasks. But this raises an important question: how does the brain know how much uncertainty there is in sensory input at any given time, and then how is this uncertainty reflected in neural activity?
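The statistically optimal strategy referred to here is inverse-variance weighting: each cue is weighted by its reliability, i.e. one over its variance. A worked toy example, with numbers invented for illustration:

```python
import numpy as np

def combine(mu1, var1, mu2, var2):
    """Optimally combine two noisy cues about the same quantity."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)   # reliability weights, summing to 1
    mu = w1 * mu1 + (1 - w1) * mu2
    var = 1 / (1 / var1 + 1 / var2)           # combined estimate is less uncertain
    return mu, var

# A reliable cue (variance 1) and an unreliable one (variance 9):
mu, var = combine(10.0, 1.0, 20.0, 9.0)
print(mu, var)   # 11.0 0.9 -- the estimate sits close to the reliable cue
```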

One influential theory proposes that the brain expresses uncertainty by assigning probabilities to all the different ways that a piece of sensory input could be interpreted. Mathematically, such probability distributions are the most natural way to encode uncertain information. To see whether the brain uses a similar scheme, we developed a computer algorithm that decodes probability distributions from brain activity in the visual cortex, measured with fMRI. These distributions are the algorithm’s best guess of the information represented in a participant’s visual cortex, just after viewing an image that we flashed on a screen. But do these decoded distributions really reflect a representation of sensory uncertainty in the brain? Our findings suggest that they do, for two reasons. First, we observed that participants were less accurate in telling us what image they had just seen on trials when our decoder indicated high uncertainty in visual cortex. Second, we found evidence that for those same images, participants were also more willing to rely on information other than their visual input - presumably because the visual input was flagged in their brain as uncertain. Together, these findings suggest that activity in visual cortex carries a representation of the uncertainty in visual input.
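Below is a deliberately simplified sketch of this kind of probabilistic decoding (the tuning curves, voxel count, and independent noise are my assumptions; the actual method also models correlated noise, as described in the NeuroImage paper above): invert a generative model of voxel responses to obtain a posterior distribution over the stimulus, whose width serves as the decoded uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)
prefs = np.linspace(0, 180, 20, endpoint=False)  # 20 voxels' preferred orientations
grid = np.linspace(0, 179, 180)                  # hypothesis grid (degrees)
sigma = 0.5                                      # voxel noise std (independent here)

def expected_response(ori):
    """Mean voxel responses to an orientation: circular Gaussian tuning."""
    return np.exp(np.cos(np.deg2rad(2.0 * (ori - prefs))))

true_ori = 72.0
r = expected_response(true_ori) + rng.normal(0.0, sigma, prefs.size)

# Posterior over orientation: Gaussian likelihood of r at each hypothesis, normalized.
loglik = np.array([-0.5 * np.sum((r - expected_response(o)) ** 2) / sigma**2
                   for o in grid])
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()

decoded = grid[posterior.argmax()]
spread = np.sqrt(np.sum(posterior * (grid - decoded) ** 2))  # crude; ignores wrap-around
print(f"decoded orientation ~{decoded:.0f} deg, posterior spread ~{spread:.1f} deg")
```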

 
 

Resume

 
 

Academic positions

Postdoc in the Thill Lab
Radboud University
Oct 2021 – present

Postdoc in the Kriegeskorte lab
Zuckerman Institute, Columbia University
Feb 2019 – Sep 2021

Lecturer in the AI Bachelor programme
Radboud University
Feb 2021 – Jul 2021

Lecturer in Human Brain Imaging
Department of Psychology, Columbia University
Jan – May 2020

 

Postdoc in the Jehee lab
Donders Institute, Radboud University
Dec 2016 – Nov 2018

PhD Candidate in the Jehee lab
Donders Institute, Radboud University
April 2012 – Nov 2016

Research Assistant in the Jehee lab
Donders Institute, Radboud University
Dec 2011 – March 2012

 
 

Education

PhD
Donders Institute, Radboud University
Advisor: Dr. Janneke Jehee
Thesis: Sensory uncertainty and response variability in human visual cortex (pdf)
Degree awarded Sep 2017

MSc in Neuroscience
University of Oxford
2011

BSc in Liberal Arts & Sciences (magna cum laude)
University College Utrecht
Major: Cognitive Neuroscience & Cell Biology
2010

 

Skills & expertise

Neuroscience

Psychophysics

(Deep) neural networks

Machine learning

MATLAB

Visual perception

Computational models

Python + PyTorch

(Approximate) probabilistic inference

fMRI

Statistics

Fluent Dutch & English

Contact
