Holding information in mind may mean storing it among synapses

Comparing models of working memory with real-world data, MIT researchers find that information resides not in persistent neural activity, but in the pattern of connections between neurons.

Between the time you read the Wi-Fi password off of a café’s menu board and the time you can get back to your laptop to enter it, you have to hold it in mind. If you’ve ever wondered how your brain does that, you are asking a question about working memory that researchers have striven for decades to explain. Now MIT neuroscientists have published a key new insight to explain how it works.

In a study in PLOS Computational Biology [1], scientists at The Picower Institute for Learning and Memory compared measurements of brain cell activity in an animal performing a working memory task with the output of various computer models representing two theories of the underlying mechanism for holding information in mind. The results strongly favoured the newer notion that a network of neurons stores the information by making short-lived changes in the pattern of their connections, or synapses, and contradicted the traditional alternative that memory is maintained by neurons remaining persistently active (like an idling engine).

While both models allowed for information to be held in mind, only the versions that allowed for synapses to transiently change connections (“short-term synaptic plasticity”) produced neural activity patterns that mimicked what was actually observed in real brains at work. The idea that brain cells maintain memories by being always “on” may be simpler, acknowledges senior author Earl K. Miller, but it doesn’t represent what nature is doing and can’t produce the sophisticated flexibility of thought that can arise from intermittent neural activity backed up by short-term synaptic plasticity.

“You need these kinds of mechanisms to give working memory activity the freedom it needs to be flexible,” says Miller, Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS). “If working memory was just sustained activity alone, it would be as simple as a light switch. But working memory is as complex and dynamic as our thoughts.”

Co-lead author Leo Kozachkov, who earned his PhD at MIT in November for theoretical modelling work including this study, says matching computer models to real-world data was crucial.

“Most people think that working memory ‘happens’ in neurons — persistent neural activity gives rise to persistent thoughts. However, this view has come under recent scrutiny because it does not really agree with the data,” says Kozachkov, who was co-supervised by co-senior author Jean-Jacques Slotine, a professor in BCS and mechanical engineering. “Using artificial neural networks with short-term synaptic plasticity, we show that synaptic activity (instead of neural activity) can be a substrate for working memory. The important takeaway from our paper is: These ‘plastic’ neural network models are more brain-like, in a quantitative sense, and also have additional functional benefits in terms of robustness.”

Matching models with nature
Working alongside co-lead author John Tauber, an MIT graduate student, Kozachkov aimed not just to determine how working memory information might be held in mind, but to shed light on which way nature actually does it. That meant starting with “ground truth” measurements of the electrical “spiking” activity of hundreds of neurons in the prefrontal cortex of an animal as it played a working memory game. In each of many rounds, the animal was shown an image that then disappeared. A second later, it saw two images, including the original, and had to look at the original to earn a little reward. The key moment is that intervening second, called the “delay period”, during which the image must be kept in mind ahead of the test.
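In computational terms this is a delayed match-to-sample design: encode a sample, hold it across a blank delay, then pick it out at test. The sketch below is only one illustrative way to write that trial structure down in Python; the image labels, durations, and reward handling are placeholders, not the study’s actual parameters.

```python
import random

# Illustrative delayed match-to-sample trial: sample -> delay -> test.
# Labels and durations are placeholders, not the study's parameters.
IMAGES = ["face", "fruit", "tool", "scene"]

def run_trial():
    sample = random.choice(IMAGES)                       # image shown, then removed
    distractor = random.choice([i for i in IMAGES if i != sample])
    epochs = [
        ("sample", 0.5, sample),                         # encode: strong spiking expected
        ("delay",  1.0, None),                           # hold in mind: only sparse spiking
        ("test",   0.5, (sample, distractor)),           # recall: look at the original for reward
    ]
    return epochs, sample                                # correct answer is the original image

if __name__ == "__main__":
    print(run_trial())
```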

The team consistently observed what Miller’s lab has seen many times before: The neurons spike a lot when seeing the original image, spike only intermittently during the delay, and then spike again when the images must be recalled during the test (these dynamics are governed by an interplay of beta and gamma frequency brain rhythms). In other words, spiking is strong when information must be initially stored and when it must be recalled but is only sporadic when it has to be maintained. The spiking is not persistent during the delay.

Moreover, the team trained software “decoders” to read out the working memory information from the measurements of spiking activity. The decoders were highly accurate when spiking was high, but not when it was low, as in the delay period. This suggested that spiking doesn’t represent information during the delay. But that raised a crucial question: If spiking doesn’t hold information in mind, what does?
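Decoding of this kind is, at heart, a classification problem: given a vector of spike counts from one time window, predict which image is being held in mind. The snippet below is a hypothetical miniature of that workflow on simulated spike counts, using a scikit-learn logistic-regression classifier; the data, tuning model, and numbers are invented for illustration and are not the paper’s decoding pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated data: spike counts (trials x neurons) in one time bin, plus the
# identity of the remembered image on each trial. Entirely made up for illustration.
n_trials, n_neurons, n_images = 200, 100, 4
labels = rng.integers(0, n_images, size=n_trials)
tuning = rng.normal(0, 1, size=(n_images, n_neurons))    # each image drives neurons differently

def decode_accuracy(spike_counts, labels):
    """Cross-validated accuracy of a linear decoder on one time bin."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, spike_counts, labels, cv=5).mean()

# A strongly tuned, high-rate bin (like the sample period) versus a sparse,
# weakly tuned bin (like the delay period).
sample_bin = rng.poisson(np.exp(1.0 + tuning[labels]))
delay_bin = rng.poisson(np.exp(0.2 + 0.05 * tuning[labels]))

print("sample-period accuracy:", decode_accuracy(sample_bin, labels))
print("delay-period accuracy: ", decode_accuracy(delay_bin, labels))
```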

MIT researchers compared the output (neural activity, top; decoder accuracy, bottom) associated with real neural data (left column) and several models of working memory (right columns). The models that best resembled the real data were the “PS” models featuring short-term synaptic plasticity. Image courtesy of the Miller Lab/Picower Institute

Modelling neural networks
Researchers including Mark Stokes at the University of Oxford have proposed [2] that changes in the relative strength, or “weights”, of synapses could store the information instead. The MIT team put that idea to the test by computationally modelling neural networks embodying two versions of each main theory. As with the real animal, the machine learning networks were trained to perform the same working memory task and to output neural activity that could also be interpreted by a decoder.
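To make “short-term synaptic plasticity” concrete, the sketch below adds the classic facilitation and depression variables of the Tsodyks-Markram/Mongillo formulation to a tiny rate network, so that each connection’s effective strength drifts with recent presynaptic activity and then relaxes back over seconds. This is a generic textbook construction with illustrative constants, not the specific trained networks compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal rate network with short-term synaptic plasticity (facilitation u, depression x),
# loosely following the Tsodyks-Markram / Mongillo formulation. All constants are illustrative.
N = 50
W = rng.normal(0, 1 / np.sqrt(N), size=(N, N))   # baseline recurrent weights
tau_r, tau_f, tau_d = 0.02, 1.5, 0.2             # rate, facilitation, depression time constants (s)
U, dt = 0.2, 0.001                               # baseline release probability, Euler step (s)

r = np.zeros(N)          # firing rates
u = np.full(N, U)        # facilitation variable (release probability)
x = np.ones(N)           # depression variable (available resources)

for t in range(2000):                                     # 0.3 s stimulus, then a 1.7 s delay
    stim = rng.normal(2.0, 0.1, N) if t < 300 else np.zeros(N)
    W_eff = W * (u * x)                                   # presynaptic u*x scales each column
    r += dt / tau_r * (-r + np.clip(np.tanh(W_eff @ r + stim), 0, 1))
    u += dt * ((U - u) / tau_f + U * (1 - u) * r)         # facilitation builds with firing
    x += dt * ((1 - x) / tau_d - u * x * r)               # resources deplete with use, then recover

# After the stimulus ends the rates fall back toward zero, but u*x has not yet returned
# to its baseline of U: the synapses, not the spikes, carry the trace of recent activity.
print("mean rate after delay:", r.mean(), " mean u*x:", (u * x).mean(), " baseline u*x:", U)
```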

The upshot is that the computational networks that allowed for short-term synaptic plasticity to encode information spiked when the actual brain spiked and didn’t when it didn’t. The networks featuring constant spiking as the method for maintaining memory spiked all the time, including when the natural brain did not. And the decoder results revealed that accuracy dropped during the delay period in the synaptic plasticity models but remained unnaturally high in the persistent spiking models.

In another layer of analysis, the team created a decoder to read out information from the synaptic weights. They found that during the delay period, the synapses represented the working memory information that the spiking did not. Of the two model versions that featured short-term synaptic plasticity, the most realistic was called “PS-Hebb,” which features a negative feedback loop that keeps the neural network stable and robust, Kozachkov says.
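Mechanically, reading information out of the synapses is the same kind of decoding as before, just with the model’s effective connection strengths flattened into the feature vector instead of spike counts. The toy version below is hypothetical: an invented weight “signature” stands in for a stimulus-dependent synaptic trace.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Toy illustration: decode the remembered image from per-trial effective weights
# recorded during the delay. Data are synthetic; the small label-dependent shift
# stands in for a stimulus-specific synaptic memory trace.
n_trials, n_neurons, n_images = 200, 30, 4
labels = rng.integers(0, n_images, size=n_trials)
signature = rng.normal(0, 1, size=(n_images, n_neurons * n_neurons))
delay_weights = rng.normal(0, 1, size=(n_trials, n_neurons * n_neurons)) + 0.3 * signature[labels]

clf = LogisticRegression(max_iter=1000)
print("delay-period accuracy from synaptic weights:",
      cross_val_score(clf, delay_weights, labels, cv=5).mean())
```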

Workings of working memory
In addition to matching nature better, the synaptic plasticity models also conferred other benefits that likely matter to real brains. One was that the plasticity models retained information in their synaptic weightings even after as many as half of the artificial neurons were “ablated”. The persistent activity models broke down after losing just 10-20 percent of their synapses. And, Miller added, just spiking occasionally requires less energy than spiking persistently.
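The ablation comparison amounts to a stress test of how gracefully a distributed readout degrades when units are removed. The toy version below trains a linear readout on synthetic population activity and then silences random subsets of units at test time; the data, sizes, and readout are stand-ins, not the paper’s trained models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Toy robustness test: train a readout on the full population, then silence a random
# fraction of units at test time and watch how accuracy degrades. Data are synthetic.
n_trials, n_units, n_images = 400, 100, 4
labels = rng.integers(0, n_images, size=n_trials)
tuning = rng.normal(0, 1, size=(n_images, n_units))
activity = tuning[labels] + rng.normal(0, 1.5, size=(n_trials, n_units))

X_train, X_test, y_train, y_test = train_test_split(activity, labels, random_state=0)
readout = LogisticRegression(max_iter=2000).fit(X_train, y_train)

for frac in (0.0, 0.1, 0.25, 0.5):
    silenced = rng.choice(n_units, size=int(frac * n_units), replace=False)
    X_ablated = X_test.copy()
    X_ablated[:, silenced] = 0.0                # "ablate" units by zeroing their output
    print(f"ablated {frac:.0%}: test accuracy {readout.score(X_ablated, y_test):.2f}")
```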

Furthermore, Miller said, quick bursts of spiking, rather than persistent spiking, leave room in time for storing more than one item in memory. Research has shown that people can hold up to four different things in working memory. Miller’s lab plans new experiments to determine whether models with intermittent spiking and synaptic weight-based information storage appropriately match real neural data when animals must hold multiple things in mind rather than just one image.

References
1. Kozachkov L, Tauber J, Lundqvist M, et al. (2022) Robust and brain-like working memory through short-term synaptic plasticity. PLoS Comput Biol 18(12): e1010776. https://doi.org/10.1371/journal.pcbi.1010776
2. Stokes M (2015) ‘Activity-silent’ working memory in prefrontal cortex: a dynamic coding framework. Trends Cogn Sci. https://doi.org/10.1016/j.tics.2015.05.004