Beyond the symbols vs signals debate
Discussion meeting organised by Professor Marta Kwiatkowska FRS, Professor Peter Dayan FRS, Professor Tom Griffiths and Professor Doina Precup.
Building artificial intelligence systems that can emulate human intelligence will need to draw on the complementary strengths of machine learning and compositional reasoning, and take advantage of symbolic knowledge as well as raw signals. This meeting aims to advance our scientific understanding of how human learning, reasoning and cognition can be brought to bear on the engineering foundations of AI.
Poster session
There will be a poster session on Monday 28 October. If you would like to present a poster, please submit your proposed title, abstract (up to 200 words), author list, and the name of the proposed presenter and institution to the Scientific Programmes team no later than 17 September 2024. Please include the text 'Poster submission- Symbols vs signals' in the email subject line.
Attending the meeting
This event is free to attend and intended for researchers in the field.
Both virtual and in-person attendance is available, but advance registration is essential.
Lunch is available on both days of the meeting and is optional. There are plenty of places to eat nearby if you would prefer to purchase food offsite. Participants are welcome to bring their own lunch to the meeting.
Enquiries: contact the Scientific Programmes team.
Organisers
Schedule
Chair
Professor Marta Kwiatkowska FRS, University of Oxford, UK
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Her research is concerned with developing modelling and analysis methods for complex systems, such as those arising in computer networks, electronic devices and biological organisms. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems and is currently focusing on safety and robustness of automated decision-making in artificial intelligence. Kwiatkowska led the development of the PRISM model checker, which has been adopted in diverse fields, including security, robotics, healthcare and DNA computing. She has received two ERC Advanced Grants, the Royal Society Milner Award, the BCS Lovelace Medal, and an honorary doctorate from KTH Royal Institute of Technology. She holds the title of Professor awarded by the President of Poland and is a Fellow of the Royal Society, a Fellow of ACM, a Member of Academia Europaea and a member of the American Academy of Arts and Sciences.
09:00-09:10
Welcome by the Royal Society and lead organiser
09:10-09:15
Chair's introduction: Neurosymbolic
Professor Marta Kwiatkowska FRS, University of Oxford, UK
09:15-09:45
Educability
We seek to define the capability that has enabled humans to develop the civilisation we have, and that distinguishes us from other species. For this it is not enough to identify a distinguishing characteristic - we want a capability that is also explanatory of humanity's achievements. "Intelligence" does not work here because we have no agreed definition of what intelligence is or how an intelligent entity behaves. We need a concept that is behaviourally better defined. The definition will need to be computational in the sense that the expected outcomes of exercising the capability need to be both specifiable and computationally feasible. This formulation is related to the goals of AI research but is not synonymous with it, leaving out the many capabilities we share with other species. We make a proposal for this essential human capability, which we call "educability." It synthesises abilities to learn from experience, to learn from others, and to chain together what we have learned in either mode and apply that to particular situations. It starts with the now standard notion of learning from examples, as captured by the Probably Approximately Correct model and used in machine learning. The ability of Large Language Models, which learn from examples, to generate smoothly flowing prose lends encouragement to this approach. The basic question then is how to extend this approach to encompass broader human capabilities beyond learning from examples. This is what the educability notion aims to answer. Professor Leslie Valiant FRS, Harvard University, USA
Leslie Valiant was educated at King's College, Cambridge; Imperial College, London; and Warwick University where he received his PhD in computer science in 1974. He is currently T Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the Division of Engineering and Applied Sciences at Harvard, where he has taught since 1982. Before coming to Harvard, he had taught at Carnegie-Mellon University, Leeds University, and the University of Edinburgh. His work has ranged over several areas of theoretical computer science, particularly complexity theory, computational learning, and parallel computation. He also has interests in computational neuroscience, evolution and artificial intelligence. He received the Nevanlinna Prize at the International Congress of Mathematicians in 1986; the Knuth Award in 1997; the EATCS award in 2008; and the ACM AM Turing Award in 2010. He is a Fellow of the Royal Society (London) and a member of the National Academy of Sciences (USA).
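As background to the PAC framing mentioned in the abstract (a standard textbook result, not part of the talk itself): for a finite hypothesis class H, target error ε and failure probability δ, the usual sample-complexity guarantee in the realisable setting is

```latex
% Standard PAC guarantee (realisable case, finite hypothesis class H):
% with probability at least 1 - \delta, any hypothesis consistent with
% m labelled examples has true error at most \epsilon, provided
\[
  m \;\geq\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
\]
```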
09:45-10:00
Discussion
10:00-10:30
How to make logics neurosymbolic
Neurosymbolic AI (NeSy) is regarded as the third wave in AI. It aims at combining knowledge representation and reasoning with neural networks. Numerous approaches to NeSy are being developed and there exists an 'alphabet soup' of different systems, whose relationships are often unclear. I will discuss the state of the art in NeSy and argue that there are many similarities with statistical relational AI (StarAI). Professor Luc De Raedt, KU Leuven and Örebro University, Belgium
Professor Dr Luc De Raedt is Director of Leuven.AI, the KU Leuven Institute for AI, full professor of Computer Science at KU Leuven, and guest professor at Örebro University (Sweden) at the Center for Applied Autonomous Sensor Systems in the Wallenberg AI, Autonomous Systems and Software Program. He is working on the integration of machine learning and machine reasoning techniques, also known under the term neurosymbolic AI. He has chaired the main European and International Machine Learning and Artificial Intelligence conferences (IJCAI, ECAI, ICML and ECMLPKDD), is a fellow of EurAI, AAAI and ELLIS, and is a member of the Royal Flemish Academy of Belgium. He received ERC Advanced Grants in 2015 and 2023.
10:30-10:45
Discussion
10:45-11:15
Break
11:15-11:45
Planning, reasoning, and generalisation in deep learning
What do we need to build artificial agents which can reason effectively and generalise to new situations? An oft-cited claim, both in cognitive science and in machine learning, is that a key ingredient for reasoning and generalisation is planning with a model of the world. In this talk, Dr Hamrick will evaluate this claim in the context of model-based reinforcement learning, presenting evidence that demonstrates the utility of planning for certain classes of problems (e.g. in-distribution learning and procedural generalisation in reinforcement learning), as well as evidence that planning is not a silver bullet for out-of-distribution generalisation. In particular, generalisation performance is limited by the generalisation abilities of the individual components required for planning (e.g., the policy, reward model, and world model), which in turn are dependent on the diversity of data those components are trained on. Moreover, generalisation is strongly dependent on choosing the appropriate level of abstraction. These concerns may be partially addressed by leveraging new state-of-the-art foundation models, which are trained on both an unprecedented breadth of data and at a higher level of abstraction than before. Dr Jessica Hamrick, Google DeepMind, UK
Dr Hamrick is a Staff Research Scientist at Google DeepMind where she co-leads the PRISM (Planning, Reasoning, Inference & Structured Models) team. Her current work focuses on improving the reasoning capabilities of large language models, and she has previously worked on topics spanning model-based reinforcement learning, planning, graph neural networks, and computational modelling of mental simulation. Dr Hamrick received her PhD (2017) in Psychology from the University of California, Berkeley and her BS and MEng (2012) in Computer Science from the Massachusetts Institute of Technology. She is a recipient of the Berkeley Fellowship, the NSF Graduate Fellowship, and the ACM Software System Award. Dr Hamrick's work has been published in numerous venues including ICLR, ICML, NeurIPS, and PNAS.
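As a purely illustrative sketch of the setup discussed above (assumed stand-ins, not Dr Hamrick's system): a rollout-based planner combines a learned policy, reward model and world model, and can only generalise as well as those components do.

```python
def plan(state, world_model, reward_model, policy,
         num_rollouts=16, horizon=10):
    """Choose an action by scoring simulated rollouts.

    Assumed learned components (illustrative stand-ins):
      policy(state)               -> a sampled action (stochastic)
      world_model(state, action)  -> predicted next state
      reward_model(state, action) -> predicted reward
    The planner's generalisation is bounded by that of these learned parts.
    """
    best_action, best_return = None, float("-inf")
    for _ in range(num_rollouts):
        s = state
        first_action = a = policy(s)   # stochastic policy, so rollouts differ
        total = 0.0
        for _ in range(horizon):
            total += reward_model(s, a)
            s = world_model(s, a)
            a = policy(s)
        if total > best_return:
            best_action, best_return = first_action, total
    return best_action
```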
11:45-12:00
Discussion
12:00-12:30
The role of rationality in modern AI
The classical approach to AI was to design systems that were rational at run-time: they had explicit representations of beliefs, goals, and plans and ran inference algorithms, online, to select actions. The rational approach was criticised (by the behaviourists) and modified (by the probabilists) but persisted in some form. Now the overwhelming success of the connectionist approach in so many areas presents evidence that the rational view may no longer have a role to play in AI. This talk examines this question from several perspectives, including whether the rationality is present at design-time and/or at run-time, and whether systems with run-time rationality might be useful from the perspectives of computational efficiency, cognitive modelling and safety. It will present some current research focused on understanding the roles of learning in runtime-rational systems with the ultimate aim of constructing general-purpose human-level intelligent robots. Professor Leslie Pack Kaelbling, Massachusetts Institute of Technology, USA
Leslie is a Professor at Massachusetts Institute of Technology. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning.
12:30-12:45
Discussion
Chair
Professor Peter Dayan FRS, Max Planck Institute for Biological Cybernetics, Germany
Peter Dayan is a Director at the Max Planck Institute for Biological Cybernetics and a Professor at the University of Tübingen. His interests include affective decision making, neural reinforcement learning and computational psychiatry.
13:30-13:35
Chair's introduction: Neuroscience
Professor Peter Dayan FRS, Max Planck Institute for Biological Cybernetics, Germany
13:35-14:05
Representing the future
To flexibly adapt to new situations, our brains must understand the regularities in the world, but also in our own patterns of behaviour. A wealth of findings is beginning to reveal the algorithms we use to map the outside world. In contrast, the biological algorithms that map the complex structured behaviours we compose to reach our goals remain enigmatic. Here we reveal a neuronal implementation of an algorithm for mapping abstract behavioural structure and transferring it to new scenarios. We trained mice on many tasks which shared a common structure organising a sequence of goals but differed in the specific goal locations. Animals discovered the underlying task structure, enabling zero-shot inferences on the first trial of new tasks. The activity of most neurons in the medial Frontal cortex tiled progress-to-goal, akin to how place cells map physical space. These “goal-progress cells” generalised, stretching and compressing their tiling to accommodate different goal distances. In contrast, progress along the overall sequence of goals was not encoded explicitly. Instead, a subset of goal-progress cells was further tuned such that individual neurons fired with a fixed task-lag from a particular behavioural step. Together these cells implemented an algorithm that instantaneously encoded the entire sequence of future behavioural steps, and whose dynamics automatically retrieved the appropriate action at each step. These dynamics mirrored the abstract task structure both on-task and during offline sleep. Our findings suggest that goal-progress cells in the medial frontal cortex may be elemental building blocks of schemata that can be sculpted to represent complex behavioural structures. Professor Timothy Behrens FRS, University of Oxford and University College London, UK
Tim Behrens works at Oxford and University College London and is interested in how the front of the brain works.
14:05-14:20
Discussion
14:20-14:50
Language is distinct from thought in the human brain
Dr Fedorenko seeks to understand how humans understand and produce language, and how language relates to, and works together with, the rest of human cognition. She will discuss the ‘core’ language network, which includes left-hemisphere frontal and temporal areas, and show that this network is ubiquitously engaged during language processing across input and output modalities, strongly interconnected, and causally important for language. This language network plausibly stores language knowledge and supports linguistic computations related to accessing words and constructions from memory and combining them to interpret (decode) or generate (encode) linguistic messages. Importantly, the language network is sharply distinct from higher-level systems of knowledge and reasoning. First, the language areas show little neural activity when individuals solve math problems, infer patterns from data, or reason about others’ minds. And second, some individuals with severe aphasia lose the ability to understand and produce language but can still do math, play chess, and reason about the world. Thus, language does not appear to be necessary for thinking and reasoning. Human thinking instead relies on several brain systems, including the network that supports social reasoning and the network that supports abstract formal reasoning. These systems are sometimes engaged during language use—and thus have to work with the language system—but are not language-selective. Many exciting questions remain about the representations and computations in the systems of thought and about how the language system interacts with these higher-level systems. Furthermore, the sharp separation between language and thought in the human brain has implications for how we think about this relationship in the context of AI models, and for what we can expect from neural network models trained solely on linguistic input with the next-word prediction objective. Professor Evelina Fedorenko, Massachusetts Institute of Technology, USA
Dr Ev Fedorenko is a cognitive neuroscientist who studies the human language system. She received her bachelor's degree from Harvard in 2002, and her PhD from the Massachusetts Institute of Technology (MIT) in 2007. She was then awarded a K99/R00 career development award from the NIH. In 2014, she joined the faculty at MGH/HMS, and in 2019 she returned to MIT, where she is currently an Associate Professor of Neuroscience in the BCS Department and the McGovern Institute for Brain Research. Dr Fedorenko uses fMRI, intracranial recordings and stimulation, EEG, MEG, and computational modelling to study adults and children, including those with developmental and acquired brain disorders, and otherwise atypical brains.
14:50-15:05
Discussion
15:05-15:30
Break
15:30-16:00
Neither nature nor nurture: the semiotic infrastructure of symbolic reference
Use of the symbol concept suffers from a conflation of two interpretations: a) a conventional sign vehicle (an alphanumeric character), and b) a conventional reference relationship (word meaning). Both are often mischaracterised in terms of "arbitrarity," a negative attribute. When your computer begins randomly displaying characters (a) on your screen, they are generally interpreted as indications of malfunction (or the operation of a "viral" algorithm). And yet when LLMs print out strings of characters that are iconic of interpretable sentences, we assume (b) that they are more than mere icons and indices of an algorithmic (aka mechanistic) process. This begs the question of what distinguishes symbolic interpretation from iconic and indexical interpretation and how they are related. Conventional relations are not just "given," however; they must be acquired. As a result, they are dependent on prior non-conventional referential relations (i.e. iconic and indexical interpretive processes) to extrinsically "ground" the reference of these intrinsically "ungrounded" sign vehicles. This semiotic infrastructure exemplifies the hierarchic complexity of symbolic reference, explains why it is cognitively difficult for non-humans, and hints at the special neurological architecture that aids human symbolic cognition. It is also relevant for understanding the difference between how humans and generative AI systems produce and process the tokens used as sign vehicles. So far, LLMs and their generative cousins are structured by token-token iconic and indexical relations only (though of extremely high dimensionality), not externally grounded by iconic and indexical pragmatic relations, even though the token-token relations of the training data have been. Professor Terrence W Deacon, University of California, USA
Professor Deacon has held faculty positions at Harvard University, Harvard Medical School, Boston University, and the University of California, Berkeley, where he is currently the Davis Chair Distinguished Professor of Anthropology as well as on the faculty of the Institute for Cognitive and Brain Sciences. His laboratory research has focused on comparative and developmental neuroanatomy, particularly of humans, and includes the study of species differences using quantitative, physiological, and cross-species fetal neural transplantation techniques. His 1997 book The Symbolic Species: The Coevolution of Language and the Brain explored the evolution of the human brain and how it gave rise to our language abilities. His 2012 book Incomplete Nature: How Mind Emerged from Matter explored how interrelationships between thermodynamic, self-organising, semiotic, and evolutionary processes contributed to the emergence of life, mind, and human symbolic abilities.
16:15-16:45
Learning to make decisions from few examples
Humans have the ability to quickly learn new tasks, but many machine learning algorithms require enormous amounts of data to perform well. One critical arena of tasks involves decision making under uncertainty: learning from data to make good decisions that optimise expected utility. In this talk I'll consider this challenge through a computational lens, discussing algorithms that help reveal when learning to make good decisions is easy or hard, algorithmic approaches that can change the order of how many data points are required, and multi-task learning algorithms that can automatically infer and leverage structure across tasks to substantially improve performance. Professor Emma Brunskill, Stanford University, USA
Emma Brunskill is an associate professor in the Computer Science Department at Stanford University, where her lab aims to create AI systems that learn from few samples to robustly make good decisions. Their work spans algorithmic and theoretical advances to experiments, inspired and motivated by the positive impact AI might have in education and healthcare. Brunskill's lab is part of the Stanford AI Lab, the Stanford Statistical ML group, and AI Safety @Stanford. Brunskill has received an NSF CAREER award, an Office of Naval Research Young Investigator Award, a Microsoft Faculty Fellow award, and an alumni impact award from the computer science and engineering department at the University of Washington. Brunskill and her lab have received multiple best paper nominations and awards for their AI and machine learning work and their work in AI for education.
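As a notational aside (standard background, not part of the abstract): the decision problem described here is that of choosing the action which maximises expected utility under beliefs inferred from limited data D.

```latex
% Decision making under uncertainty: U is an assumed utility function and
% p(s \mid D) the beliefs over the unknown state s given limited data D.
\[
  a^{*} \;=\; \arg\max_{a}\; \mathbb{E}_{s \sim p(s \mid D)}\!\left[ U(a, s) \right]
\]
```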
16:45-17:00
Discussion
09:00-09:05
Chair's introduction: Reinforcement learning and representation
09:05-09:35
Do we even need to learn representations in machine learning?
Learning representations from raw data signals is a long-standing goal of machine learning. This is underlined by the fact that one of the major conferences in the area is called the International Conference on Learning Representations (ICLR). Good representations are expected to enable generalisation, compositionality and interpretability, and to serve as a bridge between the observed raw data signals and the abstract world of concepts and symbols. Representation learning is particularly important to the field of deep generative models, which has historically aimed to learn latent variables which effectively represent raw signals, as well as encoders and decoders which map between latent variables and raw signals. Recent advances in generative AI, such as language models and diffusion models, have put deep generative models in the limelight. However, he will argue that these advances have been achieved by effectively giving up on the goal of representation learning. This begs the question of whether representation learning is a mirage, and of what we are missing on the road to understanding intelligence. Professor Yee Whye Teh, University of Oxford and Google DeepMind, UK
Yee-Whye Teh is a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford and a Research Director at Google DeepMind working on AI research. He obtained his PhD at the University of Toronto (under Professor Geoffrey E Hinton), and did postdoctoral work at the University of California at Berkeley (under Professor Michael I Jordan) and National University of Singapore (as Lee Kuan Yew Postdoctoral Fellow). He was a Lecturer then a Reader at the Gatsby Computational Neuroscience Unit, University College London from January 2007 to August 2012. His research interests are in machine learning and computational statistics, in particular probabilistic methods, Bayesian nonparametrics and deep learning, where he develops novel models as well as efficient algorithms for inference and learning. He is a fellow of the ELLIS Society, where he co-directs the ELLIS Programme in Robust Machine Learning and the ELLIS Unit at Oxford.
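To make concrete what "encoders and decoders which map between latent variables and raw signals" means, here is a minimal linear autoencoder in NumPy; it is an illustration of the mechanics only, under assumed toy data, not any speaker's model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))      # "raw signals": 256 samples, 32 dimensions
d_latent = 4                        # size of the learned representation

W_enc = rng.normal(scale=0.1, size=(32, d_latent))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_latent, 32))   # decoder weights
lr = 1e-3

for step in range(2000):
    Z = X @ W_enc                   # latent representation (encode)
    X_hat = Z @ W_dec               # reconstruction (decode)
    err = X_hat - X
    # Gradient descent on the mean squared reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

print("reconstruction MSE:", float(np.mean((X @ W_enc @ W_dec - X) ** 2)))
```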
09:35-09:50
Discussion
09:50-10:20
Dynamic deep learning
Deep learning and large language models have dramatically shifted the conversation about Signals vs Symbols in favour of numerical methods. Nevertheless, current deep learning methods are limited; they have great difficulty learning during their normal operation. In this talk, Sutton argues that this is not an inherent limitation of neural networks, but just of the algorithms currently used, and he proposes new neural-network algorithms specifically designed for continual learning. Professor Richard S Sutton FRS, University of Alberta, Canada
Richard S Sutton is a research scientist at Keen Technologies, professor at the University of Alberta, chief scientific advisor of the Alberta Machine Intelligence Institute (Amii), and a fellow of the Royal Society of London, the Royal Society of Canada, the Association for the Advancement of Artificial Intelligence, Amii, and CIFAR. He received a PhD in computer science from the University of Massachusetts in 1984 and a BA in psychology from Stanford University in 1978. Prior to joining the University of Alberta in 2003, he worked in industry at AT&T Labs and GTE Labs, and in academia at the University of Massachusetts. He helped found DeepMind Alberta in 2017 and worked there until its dissolution in 2023. Sutton is co-author of the textbook Reinforcement Learning: An Introduction, and his scientific publications have been cited more than 140,000 times. He is also a libertarian, a chess player, and a cancer survivor.
10:20-10:35
Discussion
10:35-11:05
Break
11:05-11:35
How (formal) language can help AI agents learn, plan, and remember
Humans have evolved languages over tens of thousands of years to provide useful abstractions for understanding and interacting with each other and with the physical world. Language comes in many forms. In Computer Science and in the study of AI, we have historically used knowledge representation languages and programming languages to capture our understanding of the world and to communicate unambiguously with computers. In this talk I will discuss how (formal) language can help agents learn, plan, and remember in the context of reinforcement learning. I’ll show how we can exploit the compositional syntax and semantics of formal language and automata to aid in the specification of complex reward-worthy behaviour, to improve the sample efficiency of learning, and to help agents learn what is necessary to remember. In doing so, I argue that (formal) language can help us address some of the challenges to reinforcement learning in the real world. Professor Sheila McIlraith, University of Toronto and Vector Institute, Canada
Sheila McIlraith is a Professor in the Department of Computer Science at the University of Toronto, a Canada CIFAR AI Chair (Vector Institute), and an Associate Director and Research Lead at the Schwartz Reisman Institute for Technology and Society. McIlraith is the author of over 150 scholarly publications in the areas of knowledge representation, automated reasoning, and machine learning. Her work focuses on AI sequential decision making, broadly construed, through the lens of human-compatible AI. McIlraith is a fellow of the Association for Computing Machinery (ACM), and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI). She and co-authors have been recognised with a number of honours for their scholarly contributions including the 2011 SWSA Ten-Year Award, the ICAPS 2022 Influential Paper Award, and the 2023 IJCAI-JAIR Best Paper Prize.
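A minimal sketch of the general idea of specifying reward-worthy behaviour with an automaton (an illustration under assumptions, not Professor McIlraith's actual formalism): the automaton's state both assigns reward and captures what the agent needs to remember about the past.

```python
class RewardAutomaton:
    """Tiny finite-state reward specification: 'get the key, then open the door'."""

    def __init__(self):
        self.state = "need_key"

    def step(self, event):
        """Advance on an environment event and return the reward it earns."""
        if self.state == "need_key" and event == "got_key":
            self.state = "need_door"
            return 0.1               # shaping reward for completing the subgoal
        if self.state == "need_door" and event == "opened_door":
            self.state = "done"
            return 1.0               # task complete
        return 0.0                   # every other transition is unrewarded

# An RL agent would condition on (observation, automaton state), so the
# automaton also encodes what is necessary to remember about the past.
rm = RewardAutomaton()
for e in ["moved", "got_key", "moved", "opened_door"]:
    print(e, "->", rm.step(e), rm.state)
```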
11:35-11:50
Discussion
11:50-12:20
Neural models of compositional learning
Compositionality is widely regarded as a key component of general intelligence, yet its neural basis has remained elusive, largely due to the absence of plausible neural models. Recently, however, advancements in large language models (LLMs) and a resurgence of interest in recurrent neural networks (RNNs) have led to the development of models that exhibit compositional behaviours across various tasks. In this presentation, we introduce two such models. The first leverages LLMs to instruct RNNs to execute a task based solely on natural language instructions. The second model demonstrates how complex motor behaviours can be decomposed into sequences of simpler motor primitives. In both cases, adopting a compositional approach significantly reduces the learning time for tasks, even enabling 0-shot learning in scenarios where traditional reinforcement learning (RL) algorithms would require thousands or millions of training iterations. With these neural models, we now have the tools to experimentally test various hypotheses about compositionality in the brain, both in humans and animals. Professor Alexandre Pouget, University of Geneva, Switzerland
Alexandre Pouget, PhD, is a full Professor at the University of Geneva in the Department of Basic Neurosciences. He received his undergraduate education at the Ecole Normale Supérieure, Paris, before moving to the Salk Institute in 1988 to pursue a PhD in computational neuroscience in the Sejnowski laboratory. After a postdoc at UCLA with John Schlag in 1994, he became a Professor at Georgetown University in 1996, then at the University of Rochester in the Brain and Cognitive Science department in 1999, before moving to the University of Geneva in 2011. His research focuses on general theories of representation and computation in neural circuits with a strong emphasis on neural theories of probabilistic inference. He is currently deploying this framework across a wide range of topics including olfactory processing, spatial representations, sensory motor transformations, multisensory integration, perceptual learning, attentional control, visual search and decision making.
12:20-12:35
Discussion
Chair
Professor Tom Griffiths, Princeton University, USA
Tom Griffiths is the Henry R Luce Professor of Information Technology, Consciousness and Culture in the Departments of Psychology and Computer Science at Princeton University. His research explores connections between human and machine learning, using ideas from statistics and artificial intelligence to understand how people solve the challenging computational problems they encounter in everyday life. Tom completed his PhD in Psychology at Stanford University in 2005 and taught at Brown University and the University of California, Berkeley before moving to Princeton. He has received awards for his research from organisations ranging from the American Psychological Association to the National Academy of Sciences and is a co-author of the book Algorithms to Live By, introducing ideas from computer science and cognitive science to a general audience.
13:30-13:35
Chair's introduction: Alignment
13:35-14:05
Meta-learning as bridging the neuro-symbolic gap in AI
Meta-learning, the ability to acquire and utilise prior knowledge to facilitate new learning, is a hallmark of human and animal cognition. This capability is also evident in deep reinforcement learning agents and large language models (LLMs). While recurrent neural networks have offered insights into the neural underpinnings of learning, understanding how LLMs, trained on vast amounts of human-generated text and code, achieve rapid in-context learning remains a challenge. The inherent structure present in these training data sources—reflecting symbolic knowledge embedded in language and cultural artifacts—potentially plays a crucial role in enabling LLMs to generalise effectively. Therefore, examining the role of structured data in large-scale model training through a cognitive science lens offers crucial insights into how these models acquire and generalise knowledge, mirroring aspects of human learning. This talk will discuss how these findings not only deepen our understanding of deep learning models but also offer potential avenues for integrating machine learning and symbolic reasoning, through the lens of meta-learning and cognitive science. Insights from meta-learning research can inform the development of embodied AI agents, such as those in the Scalable, Instructable, Multiworld Agent (SIMA) project, by incorporating structured knowledge representations and meta-learning capabilities to potentially enhance their ability to follow instructions, generalise to novel tasks, and interact more effectively within many complex 3D environments. Dr Jane X Wang, Google DeepMind, UK
Jane is a staff research scientist at Google DeepMind, where she works to create and understand novel approaches for learning and meta-learning in a reinforcement learning context, inspired by the latest advancements in cognitive neuroscience. She obtained her PhD in Applied Physics from the University of Michigan, studying complex systems, physics, and computational neuroscience, before moving on to conduct research in cognitive neuroscience at Northwestern University. She has published in top-tier journals such as Science, Nature Neuroscience, and Neuron, as well as major AI/ML conferences such as NeurIPS, ICML, and ICLR.
14:05-14:20
Discussion
14:20-14:50
The Habermas Machine: using AI to help people find common ground
Language models allow us to treat text as data. This opens up new opportunities for human communication, deliberation, and debate. I will describe a project in which we use an LLM to help people find agreement, by training it to produce statements about political issues that a group with diverse views will endorse. We find that the statements it produces help people find common ground, and shift their views towards a shared stance on the issue. By analysing embeddings, we show that the group statements respect the majority view but prominently include dissenting voices. We use the tool to mount a virtual citizens’ assembly and show that independent groups debating political issues relevant to the UK move in a common direction. We call this AI system the “Habermas Machine”, after the theorist Jurgen Habermas, who proposed that when rational people debate under idealised conditions, agreement will emerge in the public sphere. Professor Christopher Summerfield, University of Oxford and UK AI Safety Institute, UK
Christopher Summerfield is a researcher at the University of Oxford, where his work focusses on understanding the computational, cognitive and neural mechanisms underlying human learning and decision-making. He is also a Research Director at the UK AI Safety Institute, where he studies the impact of AI on society.
14:50-15:05
Discussion
15:05-15:35
Break
15:35-16:05
The emerging science of benchmarks
Benchmarks have played a central role in the progress of machine learning research since the 1980s. Although researchers have done much with them, we still know little about how and why benchmarks work. In this talk, I will trace the rudiments of an emerging science of benchmarks through selected empirical and theoretical observations. Looking back at the ImageNet era, I'll discuss what we learned about the validity of model rankings and the role of label errors. Looking ahead, I'll talk about new challenges to benchmarking and evaluation in the era of large language models. The results we'll encounter challenge conventional wisdom and underscore the benefits of developing a science of benchmarks. Professor Moritz Hardt, Max Planck Institute for Intelligent Systems, Germany
Hardt is a director at the Max Planck Institute for Intelligent Systems, Tübingen. Previously, he was Associate Professor for Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research contributes to the scientific foundations of machine learning and algorithmic decision making with a focus on social questions. He co-authored Fairness and Machine Learning: Limitations and Opportunities (MIT Press) and Patterns, Predictions, and Actions: Foundations of Machine Learning (Princeton University Press).
16:05-16:20
Discussion
16:20-17:00
Panel discussion