According to a new study by UCL researchers, recent developments in generative AI help explain how memories enable us to learn about the world, relive past events, and construct entirely new experiences for imagination and planning.
The study, published in Nature Human Behaviour, uses an AI computational model known as a generative neural network to simulate how neural networks in the brain learn from and remember a series of events, each represented by a simple scene.
The model included networks representing the hippocampus and the neocortex so the researchers could examine how the two interact; the regions are known to work together during memory, imagination, and planning.
“Recent advances in the generative networks used in AI show how information can be extracted from experience so that we can both recollect a specific experience and also flexibly imagine what new experiences might be like,” said lead author and Ph.D. candidate Eleanor Spens (UCL Institute of Cognitive Neuroscience). “When we remember, we picture the past based on concepts, fusing our expectations of what might have happened with certain details that have been stored.”
Humans need to make predictions in order to survive (e.g., to avoid danger or find food). The AI networks suggest that replaying memories during rest helps the brain extract patterns from past experiences that can be used to make these predictions.
The researchers fed the model 10,000 images of simple scenes. The hippocampal network rapidly encoded each scene as it was perceived; the generative neural network in the neocortex was then trained by replaying these scenes over and over.
The neocortical network learned to pass the activity of the thousands of input neurons (neurons that receive visual information) representing each scene through progressively smaller intermediate layers, the smallest containing only 20 neurons, and to recreate the scene as a pattern of activity across its thousands of output neurons (neurons that predict the visual information).
In doing so, the neocortical network learned highly efficient “conceptual” representations of the scenes that capture their meaning (e.g., the arrangements of walls and objects), allowing it both to recreate existing scenes and to generate entirely new ones.
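To make the architecture concrete, here is a minimal sketch of a network of this kind: an autoencoder that squeezes each scene through a 20-unit bottleneck and reconstructs it at the output, trained by repeatedly replaying stored scenes. This is an illustration, not the authors' code; the image size, hidden-layer widths, and training settings are assumptions, and only the 20-unit bottleneck comes from the description above.

```python
# Illustrative sketch (not the published model): a "neocortical" autoencoder
# with a 20-unit bottleneck, trained by replaying scenes.
import torch
import torch.nn as nn

class NeocorticalAutoencoder(nn.Module):
    def __init__(self, n_pixels=64 * 64, bottleneck=20):
        super().__init__()
        # Encoder: thousands of input units -> progressively smaller layers
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, bottleneck),          # compact "conceptual" code
        )
        # Decoder: bottleneck -> thousands of output units predicting the scene
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, n_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                      # compress the scene to its gist
        return self.decoder(z), z                # reconstruct it from the gist

# "Replay" training: repeatedly present stored scenes and minimise
# reconstruction error (standing in for consolidation during rest).
model = NeocorticalAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

replayed_scenes = torch.rand(10000, 64 * 64)     # placeholder for the 10,000 scenes
for epoch in range(10):
    for batch in replayed_scenes.split(256):
        recon, _ = model(batch)
        loss = loss_fn(recon, batch)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```

Because the bottleneck is far smaller than the input, the network can only succeed by capturing what the scenes have in common; sampling or perturbing the bottleneck code then yields plausible new scenes rather than copies of old ones.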
This freed the hippocampus from having to memorize every detail of each new scene it was shown; instead, it could concentrate on encoding the distinctive features the neocortex could not reproduce, such as new kinds of objects.
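One way to picture this division of labour, as a toy illustration rather than the authors' algorithm, is to reuse the sketch above: the hippocampal trace keeps only the features the neocortical network fails to reconstruct, since everything else can be regenerated from the conceptual code at recall. The 0.1 error threshold here is arbitrary.

```python
# Illustrative only: store as the hippocampal trace just the parts of a scene
# that the trained neocortical network reconstructs poorly.
import torch

def hippocampal_trace(model, scene, threshold=0.1):
    with torch.no_grad():
        recon, _ = model(scene)
    error = (scene - recon).abs()
    novel = error > threshold        # features the neocortex got wrong
    # Keep only the unpredicted, distinctive features; the rest is
    # reconstructable from the neocortical "gist".
    return novel, scene * novel
```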
The model shows how the neocortex gradually acquires conceptual knowledge and how this, in conjunction with the hippocampus, enables humans to mentally reconstruct past events and “re-experience” them.
The model also explains why memories of past events often contain “gist-like” distortions, in which distinctive features are generalized and remembered as more similar to features of earlier events, and how new events can be constructed during imagination and future planning.
“The way that memories are re-constructed, rather than being veridical records of the past, shows us how the meaning or gist of an experience is recombined with unique details, and how this can result in biases in our memory,” said Professor Neil Burgess, senior author (UCL Institute of Cognitive Neuroscience and UCL Queen Square Institute of Neurology).