Chairs
Dr Liset M de la Prida, Instituto Cajal - CSIC, Spain
Liset M de la Prida runs the Laboratorio de Circuitos Neuronales at the Instituto Cajal in Madrid. She graduated in Physics in 1994 and earned her PhD in Neuroscience in 1998. She has held visiting positions in the laboratories of David Brown (London), Leon Lagnado (Cambridge) and Steven J Schiff (USA), which gave her a broad background and expertise in the study of brain function. After a postdoc with Richard Miles in Paris, she obtained her research position at the Instituto Cajal in 2007. The main goal of her lab is to understand the function of hippocampal and para-hippocampal circuits. Dr de la Prida serves as an editor for journals including the Journal of Neuroscience, Journal of Neuroscience Methods and eNeuro, and has commissioning duties in the American Epilepsy Society and the Spanish Society of Neuroscience. She is a leading international expert in the basic mechanisms of ripples and fast ripples.
09:05-09:45
Memory reactivation and the apparent biological implausibility of CLST
Professor Bruce L McNaughton, University of California at Irvine, USA
Abstract
Two issues with CLST are: 1) consolidation of new memories appears to require many cycles of reactivation of new data interleaved with all previously acquired experience, and constraints on available reactivation time probably render this unrealistic; 2) connectionist models avoid catastrophic interference (CI) by retraining on all previous data, but the brain only has access to stored representations.
McClelland et al.'s (1995) 'catastrophic' introduction of a penguin into the network, without interleaved retraining, was less than completely catastrophic: almost all the error was in the animals, while the plant category was hardly affected. This leads to the hypothesis that exhaustive reactivation is not always necessary and can be substituted with 'Similarity Weighted Interleaved Learning' (SWIL), in which only stored items that are similar to new items (e.g. the other animals) need to be interleaved in the reactivation mix. Under what circumstances this does or does not work will be explored in Jay's talk. I propose a simple, attractor-style hypothesis about how SWIL might occur in hippocampal-cortical interactions. A possible solution to the second problem was proposed in 1995 by Robins, with his concept of pseudorehearsal, in which random patterns were interleaved with new data (https://arxiv.org/abs/1802.03875). SWIL could operate on a similar principle, plus a similarity weighting based on experience-dependent suppression of AHPs (Disterhoft, TINS, 2006, 29:587), which could bias recently partially activated cortical attractors to reactivate spontaneously when "pseudopatterns" (i.e. random inputs) are presented (Shen, Hippocampus, 1996, 6:685). By definition, these would be stored patterns that overlap with the new input.
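For concreteness, the selection step at the heart of SWIL can be sketched in a few lines of Python. The sketch below is purely illustrative and is not drawn from the talk: the function name swil_rehearsal_mix, the choice of cosine similarity, and the softmax temperature are all assumptions.

    import numpy as np

    def swil_rehearsal_mix(new_item, stored_items, n_rehearsal=20,
                           temperature=0.1, rng=None):
        # Sample stored patterns for interleaving, weighted by their cosine
        # similarity to the new item (Similarity Weighted Interleaved
        # Learning). Names and parameter values are illustrative assumptions.
        rng = np.random.default_rng() if rng is None else rng
        norms = np.linalg.norm(stored_items, axis=1) * np.linalg.norm(new_item)
        sims = stored_items @ new_item / np.maximum(norms, 1e-12)
        # A softmax turns similarities into sampling probabilities; a low
        # temperature concentrates rehearsal on the most similar stored items
        # (e.g. the other animals when a penguin is the new item).
        p = np.exp(sims / temperature)
        p /= p.sum()
        idx = rng.choice(len(stored_items), size=n_rehearsal, p=p)
        # Interleave the new item with its similarity-weighted rehearsal set.
        return np.vstack([new_item[None, :], stored_items[idx]])

In Robins' pseudorehearsal, stored_items would instead be the network's own responses to random input patterns; the proposal in the abstract is that AHP suppression could supply the similarity weighting by biasing which recently active attractors such pseudopatterns reactivate.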
Professor Bruce L McNaughton, University of California at Irvine, USA
Bruce McNaughton’s research concerns the physiological and computational basis of cognition, with particular focus on memory and memory disorders, and on the dynamic interactions among neuronal populations and the synaptic plasticity mechanisms that underlie these phenomena. Bruce has made significant contributions to understanding central synaptic plasticity mechanisms (e.g. LTP cooperativity and behavioural correlates, synaptic facilitation, synaptic depression), spatial information processing in hippocampus and cortex, cortico-hippocampal interactions and memory consolidation, and brain ageing. Bruce’s work has been characterised by a strong interaction between neuroscience theory (including computational modelling) and experiment. His current main interest is the role of hippocampal outflow to neocortex in memory replay, memory consolidation, and the extraction of knowledge from episodic memory.
09:45-10:30
Integration of new information in memory: new insights from a complementary learning systems perspective
Dr James L McClelland, Stanford University, USA
Abstract
According to complementary learning systems theory (CLST), integrating new memories into a brain-like neural network without interfering with what is already known depends on a gradual learning process, interleaving new items with items previously learned. However, empirical findings now establish that information consistent with prior knowledge can sometimes be integrated quickly with little interference, and recent modeling research indicates that this finding can be captured in neural network models that reflect the properties of the neocortical learning system proposed in CLST. New work in collaboration with Bruce McNaughton and Andrew Lampinen uses deep linear neural networks in hierarchically structured environments to gain new insights into when integration is fast or slow and how integration might be made more efficient. The environments correspond to familiar taxonomic hierarchies, where items separated low in the tree (e.g., different species of sea gulls) share nearly all their properties, but items that separate at higher branches (sea gulls vs pine trees) share far fewer properties. Deep linear networks learn this kind of domain structure in a gradual, stage-like progression, capturing successive splits in the hierarchy after increasingly long delays. In this context, a new item can be characterized in terms of its projection onto the known hierarchy and whether it adds a new categorical split. The projection onto the known hierarchy can be learned rapidly without interleaving, but if the item has unique features or feature combinations requiring a new split, integration will require gradual interleaved learning. When the new item only overlaps with items in a branch of the hierarchy, interleaving can be focused on these items, with less interleaving overall. Discussion will consider how the brain might exploit these facts to make learning more efficient and will highlight predictions about what aspects of new information might be hard or easy to learn.
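A toy simulation makes the stage-like progression concrete. The sketch below is an illustrative assumption, not the materials of the work described: it invents a small four-item taxonomy, trains a two-layer (deep) linear network with plain gradient descent, and tracks how much of each singular mode of the item-feature matrix has been learned. The broad shared-structure and animal-vs-plant modes (large singular values) saturate before the fine within-category splits (small singular values).

    import numpy as np

    # Invented taxonomic environment: 4 items x 7 features.
    # Rows: canary, robin, oak, pine.
    # Columns: living, animal, plant, sings, is-red, has-bark, is-tall.
    Y = np.array([[1, 1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 0, 1, 0, 0],
                  [1, 0, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 0, 0, 1]], float)
    X = np.eye(4)  # one-hot item inputs

    rng = np.random.default_rng(0)
    hidden = 16
    W1 = rng.normal(scale=0.01, size=(hidden, 4))  # small random init -> stage-like learning
    W2 = rng.normal(scale=0.01, size=(7, hidden))
    lr = 0.01

    # Singular modes of the item-feature matrix correspond to levels of the
    # hierarchy: structure shared by all items and the animal-vs-plant split
    # carry large singular values, while the within-category splits
    # (canary-vs-robin, oak-vs-pine) carry small ones.
    U, S, Vt = np.linalg.svd(Y.T, full_matrices=False)

    for step in range(501):
        out = W2 @ W1 @ X.T                    # network output: features x items
        err = out - Y.T
        W2 -= lr * err @ (W1 @ X.T).T          # gradient descent on squared error
        W1 -= lr * W2.T @ err @ X
        if step % 100 == 0:
            learned = np.diag(U.T @ out @ Vt.T) / S  # fraction of each mode acquired
            print(step, np.round(learned, 2))

Focused interleaving, in this picture, corresponds to rehearsing only the items on the branch the new item projects onto: adding a third bird perturbs mainly the bird modes, so the tree items need little replay.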
Dr James L McClelland, Stanford University, USA
James L (Jay) McClelland is the founding director and current co-director of the Center for Mind, Brain, Computation and Technology at Stanford University. Over the past 40 years he has developed and applied neural network models to address a wide range of aspects of human cognition and their neural basis. He is co-author, with David E Rumelhart, of the two-volume work Parallel Distributed Processing. In collaboration with Bruce McNaughton and others, he developed the complementary learning systems framework for understanding the neural basis of learning and memory. McClelland is a Member of the National Academy of Sciences (USA) and a Corresponding Fellow of the British Academy, and has received many honours and awards, including the Heineken Prize in Cognitive Science.