Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.
Cognitive artificial intelligence
Scientific discussion meeting organised by Professor Alan Bundy CBE FREng FRS, Professor Stephen Muggleton FREng, Professor Nick Chater, Professor Josh Tenenbaum, Professor Ulrike Hahn and Assistant Professor Ellie Pavlick.
In recent years there has been increasing excitement concerning the potential of Artificial Intelligence to transform human society. The meeting addressed present incompatibilities and similarities of human and machine reasoning and learning approaches, and involved presentations of relevant leading research in both Artificial Intelligence and Cognitive Science.
Recordings of the talks are now available below. An accompanying journal issue has been published in Philosophical Transactions of the Royal Society A.
Attending this event
This meeting has taken place.
Enquiries: contact the Scientific Programmes team
Organisers
Schedule
Chair
Professor Stephen Muggleton FREng, Imperial College London, UK
Professor Stephen Muggleton FREng (SM) is founder of the field of Inductive Logic Programming to which he has made contributions in theory, implementations and applications. SM has over 200 publications, and has a Google Scholar h-index of 73. He has been Executive Editor of the Machine Intelligence workshop series since 1992 and Editor-in-Chief of the series since 2000. In particular, he acted as Programme Chair of the Machine Intelligence 20 and 21 workshop on Human-Like Computing (Cumberland Lodge, 23 – 25 October 2016; 30 June – 3 July 2019) and was co-editor of the book Human-Like Machine Learning (OUP, 2021). SM founded the Machine Learning group at the University of Oxford in 1993 and went on to become Oxford University Reader in Machine Learning and later Professor of Machine Learning at the University of York in 1997 and then EPSRC/BBSRC Professor of Computational Bioinformatics at Imperial College London in 2001. His work has received widespread international recognition as evidenced by his election to the fellowship of a number of learned bodies: FAAAI (2002), FBCS (2008), FIET (2008), FREng (2010), FRSB (2011) and FECCAI (2014). In 2016 SM acted as a witness to the UK Government’s Select Committee on Artificial Intelligence and Robotics.
Professor Sharon Goldwater, University of Edinburgh, UK
Sharon Goldwater is a Professor in the Institute for Language, Cognition and Computation at the University of Edinburgh's School of Informatics. Her research interests include unsupervised and minimally-supervised learning for speech and language processing, computer modelling of language acquisition in children, and computational studies of language use. Professor Goldwater has sat on the editorial boards of several journals, including Computational Linguistics, Transactions of the Association for Computational Linguistics, and OPEN MIND: Advances in Cognitive Science. She was the 2016 recipient of the Roger Needham Award from the British Computer Society for "distinguished research contribution in computer science by a UK-based researcher who has completed up to 10 years of post-doctoral research" and was chair of the European Chapter of the Association for Computational Linguistics (EACL) from 2019–2020.
Image credit: stevencookpictures
08:00 - 08:05 | Welcome |
---|---|
08:05 - 08:30 |
How could we make a social robot? A virtual bargaining approach
What is required to allow an artificial agent to engage in rich, human-like interactions with people? The author argues that this will require capturing the process by which humans continually create and renegotiate “bargains” with each other, about who should do what in a particular interaction, which actions are allowed or forbidden, and the momentary conventions governing communication, including language. But such bargains are far too numerous, and social interactions too rapid, to be stated explicitly. Moreover, the very process of communication presupposes innumerable momentary agreements, thus raising the threat of circularity. Thus, the improvised “social contracts” that govern people's interactions must be implicit. The author draws on the recent theory of virtual bargaining, according to which social partners mentally simulate a process of negotiation, to outline how these implicit agreements can be made, and notes that this viewpoint raises substantial theoretical and computational challenges. Nonetheless, the author suggests that these challenges must be met to create artificial intelligence systems that can work collaboratively alongside people, rather than being merely helpful special-purpose computational tools.
Professor Nick Chater FBA, University of Warwick, UK
Professor Nick Chater FBA, University of Warwick, UK
Nick Chater is Professor of Behavioural Science at Warwick Business School. He works on the cognitive and social foundations of rationality and language, and applications to business and government policy. His main current interest is understanding how incredibly rich and rationally-justifiable social products, including language, laws, markets and science, can sometimes emerge from the interactions of very partially rational individuals. He has served as Associate Editor for the journals Cognitive Science, Psychological Review, and Psychological Science. Nick is co-founder of the research consultancy Decision Technology and is a former member of the UK’s Climate Change Committee. He is the author of The Mind is Flat (2018), and The Language Game (2022, with Morten Christiansen). He is a Fellow of the British Academy, and the winner of the 2023 Rumelhart Prize.
|
08:30 - 08:45 | Discussion |
08:45 - 09:15 |
Representational change is integral to reasoning
Artificial Intelligence emulations of reasoning have traditionally employed a representation of an environment consisting of a set of rules and facts expressed in a symbolic language. Goals are also expressed in this language and the rules and facts are used to solve them. For instance, a robot might use a representation of its environment to form plans to achieve goals. The group argues that successful reasoning requires the representation to be fluid. Not only might the facts describing the environment need to change as the environment changes, but the rules might also change. Moreover, the language in which the rules and facts are expressed may also have to change. For instance, a rule might acquire a new precondition when the old rule is discovered to hold only in limited circumstances. A concept in the language might also need to be divided into variants together with corresponding variants of the rules and facts that use it. Mathematics provides the toughest test of this thesis. Surely, its language, rules and facts remain stable during the proof of a theorem. Not so. George Pólya has written the classic guide to the art of problem-solving (Pólya, 1945). Imre Lakatos has written a fascinating rational reconstruction of the evolution of mathematical methodology (Lakatos, 1976). Although it was not their intention to do so, both these authors have implicitly provided profound evidence for the thesis that representations should be fluid.
Professor Alan Bundy CBE FREng FRS, University of Edinburgh, UK
Professor Alan Bundy CBE FREng FRS, University of Edinburgh, UK
Alan Bundy is Professor of Automated Reasoning at the University of Edinburgh. His research interests include: the automation of mathematical reasoning and the automatic construction, analysis and evolution of representations of knowledge. He is a fellow of the Royal Society, the Royal Academy of Engineering and the Association for Computing Machinery. He was awarded the IJCAI Research Excellence Award (2007), the CADE Herbrand Award (2007) and a CBE (2012). He was Edinburgh's Head of Informatics (1998-2001) and a member of: the ITEC Foresight Panel (1994-96); the Hewlett-Packard Research Board (1989-91); both the 2001 and 2008 Computer Science RAE panels (1999-2001, 2005-2008); and the Scottish Science Advisory Council (2008-12). He was the founding Convener of UKCRC (2000-2005) and a Vice President and Trustee of the British Computer Society with special responsibility for the Academy of Computing (2010-12). He is the author of over 300 publications. |
09:15 - 09:30 | Discussion |
09:30 - 10:00 | Coffee |
10:00 - 10:30 |
Learning grounded and symbolic word representations with neural networks
This talk will discuss the potential of neural network models to learn grounded and structured lexical concepts. The author will give an overview of two recent lines of work. The first asks whether neural networks trained to model the physical world can learn representations of concepts that align well to human language. The second asks whether neural networks' conceptual representations are compositional and structured in the way we expect human representations to be.
Assistant Professor Ellie Pavlick, Brown University, USA
Assistant Professor Ellie Pavlick, Brown University, USA
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab, and a Research Scientist at Google. Her research focuses on building computational models of language that are inspired by and/or informative of language processing in humans. Currently, her lab is investigating the inner-workings of neural networks in order to 'reverse engineer' the conceptual structures and reasoning strategies that these models use, as well as exploring the role of grounded (non-linguistic) signals for word and concept learning. Ellie's work is supported by DARPA, IARPA, NSF, and Google. |
10:30 - 10:45 | Discussion |
10:45 - 11:15 |
DeepLog: reconciling machine learning and human bias
Statistical machine learning typically achieves high accuracy models by employing tens of thousands of examples. By contrast, both children and
Professor Stephen Muggleton FREng, Imperial College London, UK
Professor Stephen Muggleton FREng, Imperial College London, UK
Professor Stephen Muggleton FREng (SM) is founder of the field of Inductive Logic Programming to which he has made contributions in theory, implementations and applications. SM has over 200 publications, and has a Google Scholar h-index of 73. He has been Executive Editor of the Machine Intelligence workshop series since 1992 and Editor-in-Chief of the series since 2000. In particular, he acted as Programme Chair of the Machine Intelligence 20 and 21 workshop on Human-Like Computing (Cumberland Lodge, 23 – 25 October 2016; 30 June – 3 July 2019) and was co-editor of the book Human-Like Machine Learning (OUP, 2021). SM founded the Machine Learning group at the University of Oxford in 1993 and went on to become Oxford University Reader in Machine Learning and later Professor of Machine Learning at the University of York in 1997 and then EPSRC/BBSRC Professor of Computational Bioinformatics at Imperial College London in 2001. His work has received widespread international recognition as evidenced by his election to the fellowship of a number of learned bodies: FAAAI (2002), FBCS (2008), FIET (2008), FREng (2010), FRSB (2011) and FECCAI (2014). In 2016 SM acted as a witness to the UK Government’s Select Committee on Artificial Intelligence and Robotics. |
11:15 - 11:30 | Discussion |
Chair
Professor Alan Bundy CBE FREng FRS, University of Edinburgh, UK
Alan Bundy is Professor of Automated Reasoning at the University of Edinburgh. His research interests include: the automation of mathematical reasoning and the automatic construction, analysis and evolution of representations of knowledge. He is a fellow of the Royal Society, the Royal Academy of Engineering and the Association for Computing Machinery. He was awarded the IJCAI Research Excellence Award (2007), the CADE Herbrand Award (2007) and a CBE (2012). He was Edinburgh's Head of Informatics (1998-2001) and a member of: the ITEC Foresight Panel (1994-96); the Hewlett-Packard Research Board (1989-91); both the 2001 and 2008 Computer Science RAE panels (1999-2001, 2005-2008); and the Scottish Science Advisory Council (2008-12). He was the founding Convener of UKCRC (2000-2005) and a Vice President and Trustee of the British Computer Society with special responsibility for the Academy of Computing (2010-12). He is the author of over 300 publications.
Dr Hyowon Gweon, Stanford University, USA
Hyowon (Hyo) Gweon (she/her) is an Associate Professor in the Department of Psychology at Stanford University. She has been named as a Richard E. Guggenhime Faculty Scholar (2020) and a David Huntington Dean's Faculty Scholar (2019), and currently serves as the Director of Graduate Studies for the Department of Psychology and the Symbolic Systems Program. Hyo received her PhD in Cognitive Science (2012) from MIT, where she continued as a postdoc before joining Stanford in 2014.
Hyo is broadly interested in how humans learn from others and help others learn. Taking an interdisciplinary approach that combines developmental, computational, and neuroimaging methods, her research aims to explain the cognitive underpinnings of distinctively human learning, communication, and prosocial behaviours.
Awards and honors include: CDS Steve Reznick Early Career Award (2022), APS Janet Spence Award for Transformative Early Career Contributions (2020), Jacobs Early Career Fellowship (2020), James S. McDonnell Scholar Award for Human Cognition (2018), APA Dissertation Award (2014), Marr Prize (best student paper, Cognitive Science Society 2010)
12:30 - 13:00 |
Argument and explanation
The talk will bring together two closely related, but distinct, notions: argument and explanation. First, it will clarify their relationship and then provide an overview of relevant research on these notions, drawn both from the cognitive science and the AI literatures. This will then be used to identify key directions for future research, indicating areas where bringing together cognitive science and AI perspectives would be mutually beneficial.
Professor Ulrike Hahn, Birkbeck, University of London, UK
Professor Ulrike Hahn, Birkbeck, University of London, UK
Ulrike Hahn is a professor at the Department of Psychological Sciences at Birkbeck College, University of London, where she leads the Centre for Cognition, Computation and Modelling. Ulrike Hahn’s research focusses on human rationality, and examines human judgment, decision-making, the rationality of everyday argument, and the role of perceived source reliability for our beliefs, including our beliefs as parts of larger communicative social networks. She was awarded the Cognitive Section Prize by the British Psychological Society, the Kerstin Hesselgren Professorship by the Swedish Research Council, and the Anneliese Maier Research Award by the Alexander von Humboldt Foundation. She is a Fellow of the German National Academy of Science (Leopoldina), a fellow of the Association for Psychological Science, a corresponding member of the Bayerische Akademie der Wissenschaften, and she holds an honorary doctorate from Lund University, Sweden. |
---|---|
13:00 - 13:15 | Discussion |
13:15 - 13:45 |
Language use as social reasoning
Our use of language goes far beyond a simple decoding of the literal meaning of the speech signal. Philosophers have argued instead that language use involves complex reasoning about the knowledge and intentions of interlocutors. Yet language use is fast and effortless. How can language depend on complex reasoning while being so easy? In this talk Associate Professor Goodman will explore this question using both structured and autoregressive probabilistic models, considering cases including implicature and metaphor.
Associate Professor Noah Goodman, Stanford University, USA
Associate Professor Noah Goodman, Stanford University, USA
Noah D. Goodman is Associate Professor of Psychology and Computer Science at Stanford University. He studies the computational basis of human and machine intelligence, merging behavioral experiments with formal methods from statistics, machine learning, and programming languages. His research topics include language understanding, social reasoning, concept learning, and natural pedagogy. In addition he explores related technologies such as probabilistic programming languages and deep generative models. He has released open-source software including the probabilistic programming languages Church, WebPPL, and Pyro. Professor Goodman received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as Postdoc and Research Scientist at MIT. In 2010 he moved to Stanford where he runs the Computation and Cognition Lab. His work has been recognized by the J. S. McDonnell Foundation Scholar Award, the Roger N. Shepard Distinguished Visiting Scholar Award, the Alfred P. Sloan Research Fellowship in Neuroscience, seven computational modeling prizes from the Cognitive Science Society, and best paper awards from AAAI, EDM, and other venues. |
13:45 - 14:00 | Discussion |
14:00 - 14:30 | Tea |
14:30 - 15:00 | Short talks |
Chair
Dr Hyowon Gweon, Stanford University, USA
Hyowon (Hyo) Gweon (she/her) is an Associate Professor in the Department of Psychology at Stanford University. She has been named as a Richard E. Guggenhime Faculty Scholar (2020) and a David Huntington Dean's Faculty Scholar (2019), and currently serves as the Director of Graduate Studies for the Department of Psychology and the Symbolic Systems Program. Hyo received her PhD in Cognitive Science (2012) from MIT, where she continued as a postdoc before joining Stanford in 2014.
Hyo is broadly interested in how humans learn from others and help others learn. Taking an interdisciplinary approach that combines developmental, computational, and neuroimaging methods, her research aims to explain the cognitive underpinnings of distinctively human learning, communication, and prosocial behaviours.
Awards and honors include: CDS Steve Reznick Early Career Award (2022), APS Janet Spence Award for Transformative Early Career Contributions (2020), Jacobs Early Career Fellowship (2020), James S. McDonnell Scholar Award for Human Cognition (2018), APA Dissertation Award (2014), Marr Prize (best student paper, Cognitive Science Society 2010)
Assistant Professor Ellie Pavlick, Brown University, USA
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab, and a Research Scientist at Google. Her research focuses on building computational models of language that are inspired by and/or informative of language processing in humans. Currently, her lab is investigating the inner-workings of neural networks in order to 'reverse engineer' the conceptual structures and reasoning strategies that these models use, as well as exploring the role of grounded (non-linguistic) signals for word and concept learning. Ellie's work is supported by DARPA, IARPA, NSF, and Google.
08:00 - 08:30 |
Doing for our robots what nature did for us
We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in 'the factory' (that is, at engineering time) and in 'the wild' (that is, when the robot is delivered to a customer). Professor Kaelbling will share some general thoughts about strategies for robot design and then talk in detail about some work she has been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.
Professor Leslie Pack Kaelbling, MIT, USA
Professor Leslie Pack Kaelbling, MIT, USA
Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor in chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot. |
---|---|
08:30 - 08:45 | Discussion |
08:45 - 09:15 |
Language learning in humans and machines
Why consider connections between language acquisition in humans and machines? The author argues that there are at least two reasons. On the one hand, developments in machine learning can potentially provide hypotheses or insights (and ideally, testable predictions) regarding human language acquisition. On the other hand, data and methods from behavioural experiments can be used to better understand the current limitations of engineered systems. To illustrate these two directions, the author discusses examples from her own work, focusing on early speech perception and morphological generalization.
Professor Sharon Goldwater, University of Edinburgh, UK
Professor Sharon Goldwater, University of Edinburgh, UK
Sharon Goldwater is a Professor in the Institute for Language, Cognition and Computation at the University of Edinburgh's School of Informatics. Her research interests include unsupervised and minimally-supervised learning for speech and language processing, computer modelling of language acquisition in children, and computational studies of language use. Professor Goldwater has sat on the editorial boards of several journals, including Computational Linguistics, Transactions of the Association for Computational Linguistics, and OPEN MIND: Advances in Cognitive Science. She was the 2016 recipient of the Roger Needham Award from the British Computer Society for "distinguished research contribution in computer science by a UK-based researcher who has completed up to 10 years of post-doctoral research" and was chair of the European Chapter of the Association for Computational Linguistics (EACL) from 2019–2020. |
09:15 - 09:30 | Discussion |
09:30 - 10:00 | Coffee |
10:00 - 10:30 |
Computational models of emotion prediction
Professor Rebecca Saxe, MIT, USA
Professor Rebecca Saxe, MIT, USA
Rebecca Saxe studies how people think about people. She is best known for her research on ’Theory of Mind’, using brain imaging in human adults and children, and on the origins of social cognition in human infants. Her TED talk about Theory of Mind has been viewed more than 3.2 million times and translated into 33 languages; and her image of Mother and Child in MRI was published in Smithsonian Magazine, and has since become iconic. She was a 2020 Guggenheim Fellow, and was awarded the 2018 MIT Committed to Caring award for graduate mentorship. She is committed to improving transparency, rigour and replicability in science, through her service on the board of the Center for Open Science, and in her role as Associate Dean of Science at MIT. |
10:30 - 10:45 | Discussion |
10:45 - 11:15 |
Towards socially intelligent machines: thinking, learning, and communicating about the self
Humans are not the only species that learn from and interact with others, but only humans actively involve their conspecifics in the process of acquiring abstract knowledge and navigating social norms that regulate, evaluate, and shape their own and others’ behaviours. What makes human social intelligence so distinctive, powerful, and smart? In this talk, the author will present a series of studies that reveal the remarkably curious minds of young children, not only about the physical world but also about others and themselves. Children are curious about what others do and what their actions mean, what others know and what they ought to know, and even what others think of them and how to change their beliefs. Going beyond the idea that young children are like scientists who explore and learn about the external world, these results demonstrate how human social intelligence supports thinking, learning, and communicating about the self in a range of social contexts. These findings will be discussed in light of the recent interest in building socially intelligent machines that can interact and communicate with humans.
Dr Hyowon Gweon, Stanford University, USA
Dr Hyowon Gweon, Stanford University, USA
Hyowon (Hyo) Gweon (she/her) is an Associate Professor in the Department of Psychology at Stanford University. She has been named as a Richard E. Guggenhime Faculty Scholar (2020) and a David Huntington Dean's Faculty Scholar (2019), and currently serves as the Director of Graduate Studies for the Department of Psychology and the Symbolic Systems Program. Hyo received her PhD in Cognitive Science (2012) from MIT, where she continued as a postdoc before joining Stanford in 2014.
Hyo is broadly interested in how humans learn from others and help others learn. Taking an interdisciplinary approach that combines developmental, computational, and neuroimaging methods, her research aims to explain the cognitive underpinnings of distinctively human learning, communication, and prosocial behaviours.
Awards and honors include: CDS Steve Reznick Early Career Award (2022), APS Janet Spence Award for Transformative Early Career Contributions (2020), Jacobs Early Career Fellowship (2020), James S. McDonnell Scholar Award for Human Cognition (2018), APA Dissertation Award (2014), Marr Prize (best student paper, Cognitive Science Society 2010) |
11:15 - 11:30 | Discussion |
Chair
Professor Leslie Pack Kaelbling, MIT, USA
Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor in chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.
12:30 - 13:00 |
Understanding computational dialogue understanding
Fifty years ago, in 1972, Terry Winograd published his seminal MIT dissertation as a book with the title Understanding Natural Language. His dialogue system SHRDLU was the first comprehensive AI system that modelled language understanding in an integrated way with all semiotic aspects of language: syntax, semantics and pragmatics combined with inference capabilities in a very simple model of the domain of discourse. Although from a cognitive modelling perspective various systems following this early paradigm covered quite sophisticated aspects of dialogue understanding, they had one huge disadvantage: they did not scale to open domains. Today statistical language models are becoming more capable than ever before and help to realize scalable dialogue systems in open domains. Professor Wahlster reviews Google’s recent LaMDA (Language Models for Dialogue Applications) system that was built by fine-tuning Transformer-based neural language models trained on 1.56 trillion words with up to 137 billion model parameters. But these systems have another huge disadvantage: they behave like stochastic parrots, have no explicit representation of the communicative intent of their utterances and are not able to deal with complex turn-taking in multi-party dialogues. In this talk, Professor Wahlster will show that a human-like dialogue system must not only understand and represent the user’s input, but also its own output. He argues that only the combined muscle of data-driven deep learning and model-based deep understanding approaches will ultimately lead to human-like dialogue systems, since natural language understanding is AI-complete and must include inductive, deductive and abductive reasoning methods.
Professor Wolfgang Wahlster, German Research Center for Artificial Intelligence DFKI, Germany
Professor Wolfgang Wahlster, German Research Center for Artificial Intelligence DFKI, Germany
Wolfgang Wahlster is a Professor of Artificial Intelligence (AI) and the founding director of the German Research Center for Artificial Intelligence (DFKI). Wahlster is a member of the Nobel Prize Academy in Stockholm, the German National Academy Leopoldina and three other prestigious academies. He laid some of the foundations for natural language dialogue systems, user modelling, speech-to-speech translation, and multimodal discourse understanding. He is an elected Fellow of AAAI and EurAI and has served as an elected President of three international AI organizations: IJCAII, EurAI, and ACL. For his research, he has been awarded the German Future Prize, the First Class Cross of Merit and the Grand Cross of Merit by the Federal President of Germany. Other awards include five honorary doctorates from universities in Darmstadt, Linköping, Maastricht, Prague and Oldenburg. He is a member of the steering board of the German government’s AI strategy platform. |
---|---|
13:00 - 13:15 | Discussion |
13:15 - 13:45 |
Professor Josh Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
Abstract will be available soon |
13:45 - 14:00 | Discussion |
14:00 - 14:30 | Tea |
14:30 - 16:00 | Panel discussion |