This page is archived

Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.

Cognitive artificial intelligence

26 - 27 September 2022, 08:00 - 16:00

Scientific discussion meeting organised by Professor Alan Bundy CBE FREng FRS, Professor Stephen Muggleton FREng, Professor Nick Chater, Professor Josh Tenenbaum, Professor Ulrike Hahn and Assistant Professor Ellie Pavlick.

In recent years there has been increasing excitement about the potential of Artificial Intelligence to transform human society. The meeting addressed the present incompatibilities and similarities between human and machine approaches to reasoning and learning, and featured presentations of leading research in both Artificial Intelligence and Cognitive Science.

Recordings of the talks are now available below. An accompanying journal issue has been published in Philosophical Transactions of the Royal Society A.

Attending this event

This meeting has taken place.

Enquiries: contact the Scientific Programmes team


Organisers

  • Professor Alan Bundy CBE FREng FRS, University of Edinburgh, UK

    Alan Bundy is Professor of Automated Reasoning at the University of Edinburgh. His research interests include the automation of mathematical reasoning and the automatic construction, analysis and evolution of representations of knowledge. He is a Fellow of the Royal Society, the Royal Academy of Engineering and the Association for Computing Machinery. He was awarded the IJCAI Research Excellence Award (2007), the CADE Herbrand Award (2007) and a CBE (2012). He was Edinburgh's Head of Informatics (1998-2001) and a member of: the ITEC Foresight Panel (1994-96); the Hewlett-Packard Research Board (1989-91); both the 2001 and 2008 Computer Science RAE panels (1999-2001, 2005-2008); and the Scottish Science Advisory Council (2008-12). He was the founding Convener of UKCRC (2000-2005) and a Vice President and Trustee of the British Computer Society with special responsibility for the Academy of Computing (2010-12). He is the author of over 300 publications.

  • Professor Stephen Muggleton FREng, Imperial College London, UK

    Professor Stephen Muggleton FREng (SM) is founder of the field of Inductive Logic Programming, to which he has made contributions in theory, implementations and applications. SM has over 200 publications and has a Google Scholar h-index of 73. He has been Executive Editor of the Machine Intelligence workshop series since 1992 and Editor-in-Chief of the series since 2000. In particular, he acted as Programme Chair of the Machine Intelligence 20 and 21 workshops on Human-Like Computing (Cumberland Lodge, 23 – 25 October 2016; 30 June – 3 July 2019) and was co-editor of the book Human-Like Machine Learning (OUP, 2021). SM founded the Machine Learning group at the University of Oxford in 1993 and went on to become Oxford University Reader in Machine Learning, later Professor of Machine Learning at the University of York in 1997 and then EPSRC/BBSRC Professor of Computational Bioinformatics at Imperial College London in 2001. His work has received widespread international recognition as evidenced by his election to the fellowship of a number of learned bodies: FAAAI (2002), FBCS (2008), FIET (2008), FREng (2010), FRSB (2011) and FECCAI (2014). In 2016 SM acted as a witness to the UK Government's Select Committee on Artificial Intelligence and Robotics.

  • Professor Nick Chater FBA, University of Warwick, UK

    Nick Chater is Professor of Behavioural Science at Warwick Business School. He works on the cognitive and social foundations of rationality and language, and applications to business and government policy. His main current interest is understanding how incredibly rich and rationally justifiable social products, including language, laws, markets and science, can sometimes emerge from the interactions of very partially rational individuals. He has served as Associate Editor for the journals Cognitive Science, Psychological Review, and Psychological Science. Nick is co-founder of the research consultancy Decision Technology and is a former member of the UK's Climate Change Committee. He is the author of The Mind is Flat (2018) and The Language Game (2022, with Morten Christiansen). He is a Fellow of the British Academy and the winner of the 2023 Rumelhart Prize.

  • Professor Josh Tenenbaum, MIT, USA

    Biography not available

  • Professor Ulrike Hahn, Birkbeck, University of London, UK

    Ulrike Hahn is a professor at the Department of Psychological Sciences at Birkbeck College, University of London, where she leads the Centre for Cognition, Computation and Modelling. Her research focuses on human rationality, and examines human judgment, decision-making, the rationality of everyday argument, and the role of perceived source reliability for our beliefs, including our beliefs as parts of larger communicative social networks. She was awarded the Cognitive Section Prize by the British Psychological Society, the Kerstin Hesselgren Professorship by the Swedish Research Council, and the Anneliese Maier Research Award by the Alexander von Humboldt Foundation. She is a Fellow of the German National Academy of Sciences (Leopoldina), a Fellow of the Association for Psychological Science, a corresponding member of the Bayerische Akademie der Wissenschaften, and she holds an honorary doctorate from Lund University, Sweden.

  • Assistant Professor Ellie Pavlick, Brown University, USA

    Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab, and a Research Scientist at Google. Her research focuses on building computational models of language that are inspired by and/or informative of language processing in humans. Currently, her lab is investigating the inner workings of neural networks in order to 'reverse engineer' the conceptual structures and reasoning strategies that these models use, as well as exploring the role of grounded (non-linguistic) signals for word and concept learning. Ellie's work is supported by DARPA, IARPA, NSF, and Google.

Schedule

Chair

Professor Stephen Muggleton FREng, Imperial College London, UK

Professor Sharon Goldwater, University of Edinburgh, UK

08:00 - 08:05 Welcome
08:05 - 08:30 How could we make a social robot? A virtual bargaining approach

What is required to allow an artificial agent to engage in rich, human-like interactions with people? The author argues that this will require capturing the process by which humans continually create and renegotiate “bargains” with each other, about who should do what in a particular interaction, which actions are allowed or forbidden, and the momentary conventions governing communication, including language. But such bargains are far too numerous, and social interactions too rapid, to be stated explicitly. Moreover, the very process of communication presupposes innumerable momentary agreements, thus raising the threat of circularity. Thus, the improvised “social contracts” that govern people's interactions must be implicit. The author draws on the recent theory of virtual bargaining, according to which social partners mentally simulate a process of negotiation, to outline how these implicit agreements can be made, and notes that this viewpoint raises substantial theoretical and computational challenges. Nonetheless, the author suggests that these challenges must be met to create artificial intelligence systems that can work collaboratively alongside people, rather than being merely helpful special-purpose computational tools.
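
To make the core idea concrete, the sketch below shows an agent that mentally simulates the bargain an explicit negotiation would reach and then simply plays its own side of it. The payoff table and the use of the Nash bargaining product as the simulated outcome are illustrative assumptions for this sketch, not the talk's own model.

```python
# Toy "virtual bargaining": each agent privately computes the joint action
# an explicit negotiation would settle on, then plays its part of it.
# Payoffs and the Nash-product criterion are illustrative assumptions.

# Joint payoffs (agent1_utility, agent2_utility) indexed by
# (agent1_action, agent2_action) for a small coordination game.
PAYOFFS = {
    ("lift-left", "lift-left"):   (2.0, 2.0),
    ("lift-left", "lift-right"):  (0.0, 0.0),
    ("lift-right", "lift-left"):  (0.0, 0.0),
    ("lift-right", "lift-right"): (3.0, 1.0),
}

DISAGREEMENT = (0.0, 0.0)  # utilities if no bargain is struck

def virtual_bargain(payoffs, disagreement):
    """Simulate negotiation: return the joint action maximizing the
    Nash product of each agent's gain over the disagreement point."""
    d1, d2 = disagreement
    def nash_product(joint):
        u1, u2 = payoffs[joint]
        return max(u1 - d1, 0.0) * max(u2 - d2, 0.0)
    return max(payoffs, key=nash_product)

# Both agents run the same simulation, so they can coordinate without any
# explicit communication: each plays its own component of the result.
joint = virtual_bargain(PAYOFFS, DISAGREEMENT)
print("simulated bargain:", joint)  # -> ('lift-left', 'lift-left')
```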

Professor Nick Chater FBA, University of Warwick, UK

08:30 - 08:45 Discussion
08:45 - 09:15 Representational change is integral to reasoning

Artificial Intelligence emulations of reasoning have traditionally employed a representation of an environment consisting of a set of rules and facts expressed in a symbolic language. Goals are also expressed in this language and the rules and facts are used to solve them. For instance, a robot might use a representation of its environment to form plans to achieve goals.

The group argues that successful reasoning requires the representation to be fluid. Not only might the facts describing the environment need to change as the environment changes, but the rules might also change. Moreover, the language in which the rules and facts are expressed may also have to change. For instance, a rule might acquire a new precondition when the old rule is discovered to hold only in limited circumstances. A concept in the language might also need to be divided into variants, together with corresponding variants of the rules and facts that use it.

Mathematics provides the toughest test of this thesis. Surely its language, rules and facts remain stable during the proof of a theorem? Not so. George Pólya has written the classic guide to the art of problem-solving (Pólya, 1945). Imre Lakatos has written a fascinating rational reconstruction of the evolution of mathematical methodology (Lakatos, 1976). Although it was not their intention to do so, both these authors have implicitly provided profound evidence for the group's thesis that representations should be fluid.
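
One of the repairs described above, a rule acquiring a new precondition once it is found to hold only in limited circumstances, can be shown in miniature. The set-based rule format and the bird/penguin example below are illustrative assumptions, not the group's system.

```python
# Toy repair of a faulty rule: "birds fly" acquires a new precondition
# ("not a penguin") once a counterexample is found. The representation
# itself changes, not just the facts. Illustrative sketch only.

# A rule fires for an individual when every required property holds
# and no forbidden property does.
rule = {"requires": {"bird"}, "forbids": set(), "concludes": "flies"}

facts = {
    "tweety": {"bird"},            # known to fly
    "penny":  {"bird", "penguin"}, # known NOT to fly
}

def applies(rule, props):
    return rule["requires"] <= props and not (rule["forbids"] & props)

# The rule wrongly fires for penny, so repair the representation:
# forbid a property distinguishing the counterexample from a positive case.
if applies(rule, facts["penny"]):
    rule["forbids"] |= facts["penny"] - facts["tweety"]  # {"penguin"}

assert applies(rule, facts["tweety"])      # still covers the flyer
assert not applies(rule, facts["penny"])   # no longer covers the penguin
print("repaired rule:", rule)
```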

Professor Alan Bundy CBE FREng FRS, University of Edinburgh, UK

09:15 - 09:30 Discussion
09:30 - 10:00 Coffee
10:00 - 10:30 Learning grounded and symbolic word representations with neural networks

This talk will discuss the potential of neural network models to learn grounded and structured lexical concepts. The author will give an overview of two recent lines of work. The first asks whether neural networks trained to model the physical world can learn representations of concepts that align well to human language. The second asks whether neural networks' conceptual representations are compositional and structured in the way we expect human representations to be.
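
As a toy illustration of the second question, a probe of the kind sketched below asks whether a phrase representation is close to a simple composition of its parts. The random stand-in embeddings and the additive composition rule are illustrative assumptions, not the lab's actual method or data.

```python
# Toy compositionality probe: is the representation of "red square" close
# to a simple composition (vector addition) of "red" and "square"?
# Random stand-in vectors replace a real model's hidden states here.

import numpy as np

rng = np.random.default_rng(0)
DIM = 64

emb = {w: rng.normal(size=DIM) for w in ["red", "square"]}
noise = rng.normal(size=DIM)

# Pretend the model encodes the phrase mostly, but not purely, additively.
emb["red square"] = emb["red"] + emb["square"] + 0.2 * noise

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

score = cosine(emb["red square"], emb["red"] + emb["square"])
print(f"cos(phrase, red + square) = {score:.3f}")
# A score near 1 is evidence of additive structure for this phrase;
# a score near 0 suggests the phrase is represented non-compositionally.
```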

Assistant Professor Ellie Pavlick, Brown University, USA

10:30 - 10:45 Discussion
10:45 - 11:15 DeepLog: reconciling machine learning and human bias

Statistical machine learning typically achieves high-accuracy models by employing tens of thousands of examples. By contrast, both children and adult humans typically learn new concepts from either one or a small number of instances. The high data efficiency of human learning is not easily explained in terms of standard formal frameworks for machine learning, including Gold's Learning-in-the-Limit framework and Valiant's Probably Approximately Correct (PAC) model. This presentation will explore ways in which this apparent disparity between human and machine learning can be reconciled by considering algorithms involving a preference for specificity combined with program minimality. The talk will show how this can be efficiently enacted using hierarchical search based on the use of certificates, logical matrices and pushdown automata to support the learning of compactly expressed, maximally efficient algorithms. Early results of a new system called DeepLog indicate that such approaches can support efficient top-down identification of relatively complex logic programs from small numbers of examples.
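
The flavour of the two biases named above can be shown with a deliberately tiny propositional learner: search hypotheses in order of increasing size (minimality), and break ties by choosing the candidate that covers the fewest instances (specificity). The feature-based hypothesis space and examples below are illustrative assumptions; DeepLog itself learns logic programs.

```python
# Toy learner combining the two biases from the abstract: program
# minimality (try the shortest hypotheses first) and a preference for
# specificity (among equally short ones, cover as little as possible).
# Hypotheses are conjunctions of features; this is an illustrative
# sketch, not the DeepLog system.

from itertools import combinations

FEATURES = ["red", "round", "small", "heavy"]

POSITIVES = [{"red", "round", "small"}]   # one example often suffices...
NEGATIVES = [{"red", "heavy"}]            # ...given a strong enough bias

POOL = [{"red"}, {"round"}, {"red", "round"}, {"red", "round", "small"},
        {"red", "heavy"}, {"round", "small"}]  # unlabeled instances

def consistent(h, pos, neg):
    """A hypothesis must cover every positive example and no negative one."""
    return all(h <= e for e in pos) and not any(h <= e for e in neg)

def coverage(h):
    """How general a hypothesis is: how many pool instances it covers."""
    return sum(1 for e in POOL if h <= e)

def learn(pos, neg):
    for size in range(len(FEATURES) + 1):          # minimality first
        candidates = [set(c) for c in combinations(FEATURES, size)
                      if consistent(set(c), pos, neg)]
        if candidates:
            return min(candidates, key=coverage)   # then specificity
    return None

print("learned concept:", learn(POSITIVES, NEGATIVES))  # -> {'small'}
```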

Professor Stephen Muggleton FREng, Imperial College London, UK

11:15 - 11:30 Discussion

Chair

Professor Alan Bundy CBE FREng FRS, University of Edinburgh, UK

Dr Hyowon Gweon, Stanford University, USA

12:30 - 13:00 Argument and explanation

The talk will bring together two closely related, but distinct, notions: argument and explanation. First, it will clarify their relationship and then provide an overview of relevant research on these notions, drawn from both the cognitive science and AI literatures. This will then be used to identify key directions for future research, indicating areas where bringing together cognitive science and AI perspectives would be mutually beneficial.

Professor Ulrike Hahn, Birkbeck, University of London, UK

13:00 - 13:15 Discussion
13:15 - 13:45 Language use as social reasoning

Our use of language goes far beyond a simple decoding of the literal meaning of the speech signal. Philosophers have argued instead that language use involves complex reasoning about the knowledge and intentions of interlocutors. Yet language use is fast and effortless. How can language depend on complex reasoning while being so easy? In this talk Associate Professor Goodman will explore this question using both structured and autoregressive probabilistic models, considering cases including implicature and metaphor.
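
The kind of structured probabilistic model at issue can be glossed with a rational speech acts treatment of scalar implicature, a framework Goodman helped develop. The tiny lexicon, uniform priors and rationality parameter below are illustrative assumptions for this sketch, not code from the talk.

```python
# Minimal rational-speech-acts sketch of a scalar implicature: a pragmatic
# listener hearing "some" infers "some but not all", even though "some" is
# literally true of "all". Lexicon and parameters are illustrative.

WORLDS = ["none", "some-not-all", "all"]
UTTERANCES = ["some", "all"]

# Literal semantics: which utterance is true in which world.
TRUE_IN = {
    "some": {"some-not-all", "all"},
    "all":  {"all"},
}

def literal_listener(utt):
    """P(world | utterance) using literal meaning and a uniform prior."""
    truth = [1.0 if w in TRUE_IN[utt] else 0.0 for w in WORLDS]
    z = sum(truth)
    return [t / z for t in truth]

def speaker(world, alpha=4.0):
    """P(utterance | world): soft-maximizing informativity to the literal
    listener (utility = log-probability, so scores are p ** alpha)."""
    scores = [literal_listener(u)[WORLDS.index(world)] ** alpha
              for u in UTTERANCES]
    z = sum(scores)
    return [s / z for s in scores] if z > 0 else [0.0] * len(scores)

def pragmatic_listener(utt):
    """P(world | utterance): Bayesian inversion of the speaker model."""
    post = [speaker(w)[UTTERANCES.index(utt)] for w in WORLDS]
    z = sum(post)
    return [p / z for p in post]

for w, p in zip(WORLDS, pragmatic_listener("some")):
    print(f"P({w} | 'some') = {p:.2f}")
# Most of the probability lands on "some-not-all": the implicature.
```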

Associate Professor Noah Goodman, Stanford University, USA

13:45 - 14:00 Discussion
14:00 - 14:30 Tea
14:30 - 15:00 Short talks

Chair

Dr Hyowon Gweon, Stanford University, USA

Assistant Professor Ellie Pavlick, Brown University, USA

08:00 - 08:30 Doing for our robots what nature did for us

We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in 'the factory' (that is, at engineering time) and in 'the wild' (that is, when the robot is delivered to a customer). Professor Kaelbling will share some general thoughts about strategies for robot design and then talk in detail about some work she has been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Professor Leslie Pack Kaelbling, MIT, USA

08:30 - 08:45 Discussion
08:45 - 09:15 Language learning in humans and machines

Why consider connections between language acquisition in humans and machines? The author argues that there are at least two reasons. On the one hand, developments in machine learning can potentially provide hypotheses or insights (and ideally, testable predictions) regarding human language acquisition. On the other hand, data and methods from behavioural experiments can be used to better understand the current limitations of engineered systems. To illustrate these two directions, the author discusses examples from her own work, focusing on early speech perception and morphological generalization. 

Professor Sharon Goldwater, University of Edinburgh, UK

09:15 - 09:30 Discussion
09:30 - 10:00 Coffee
10:00 - 10:30 Computational models of emotion prediction

Professor Rebecca Saxe, MIT, USA

10:30 - 10:45 Discussion
10:45 - 11:15 Towards socially intelligent machines: thinking, learning, and communicating about the self

Humans are not the only species that learn from and interact with others, but only humans actively involve their conspecifics in the process of acquiring abstract knowledge and navigating social norms that regulate, evaluate, and shape their own and others' behaviours. What makes human social intelligence so distinctive, powerful, and smart? In this talk, the author will present a series of studies that reveal the remarkably curious minds of young children, not only about the physical world but also about others and themselves. Children are curious about what others do and what their actions mean, what others know and what they ought to know, and even what others think of them and how to change their beliefs. Going beyond the idea that young children are like scientists who explore and learn about the external world, these results demonstrate how human social intelligence supports thinking, learning, and communicating about the self in a range of social contexts. These findings will be discussed in light of the recent interest in building socially intelligent machines that can interact and communicate with humans.

Dr Hyowon Gweon, Stanford University, USA

11:15 - 11:30 Discussion

Chair

Professor Leslie Pack Kaelbling, MIT, USA

12:30 - 13:00 Understanding computational dialogue understanding

Fifty years ago, in 1972, Terry Winograd published his seminal MIT dissertation as a book with the title Understanding Natural Language. His dialogue system SHRDLU was the first comprehensive AI system that modelled language understanding in an integrated way with all semiotic aspects of language: syntax, semantics and pragmatics combined with inference capabilities in a very simple model of the domain of discourse. 

Although from a cognitive modelling perspective various systems following this early paradigm covered quite sophisticated aspects of dialogue understanding, they had one huge disadvantage: they did not scale to open domains.

Today, statistical language models are becoming more capable than ever before and help to realize scalable open-domain dialogue systems. Professor Wahlster reviews Google's recent LaMDA (Language Model for Dialogue Applications) system, which was built by fine-tuning Transformer-based neural language models, with up to 137 billion parameters, trained on 1.56 trillion words.

But these systems have another huge disadvantage: they behave like stochastic parrots, have no explicit representation of the communicative intent of their utterances and are not able to deal with complex turn-taking in multi-party dialogues.  

In this talk, Professor Wahlster will show that a human-like dialogue system must understand and represent not only the user's input but also its own output. He argues that only the combined muscle of data-driven deep learning and model-based deep understanding approaches will ultimately lead to human-like dialogue systems, since natural language understanding is AI-complete and must include inductive, deductive and abductive reasoning methods.

Professor Wolfgang Wahlster, German Research Center for Artificial Intelligence DFKI, Germany

13:00 - 13:15 Discussion
13:15 - 13:45 Professor Josh Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

Abstract will be available soon

13:45 - 14:00 Discussion
14:00 - 14:30 Tea
14:30 - 16:00 Panel discussion