
Overview

Scientific discussion meeting organised by Professor Alan Bundy CBE FREng FRS, Professor Stephen Muggleton FREng, Professor Nick Chater, Professor Josh Tenenbaum, Professor Ulrike Hahn and Assistant Professor Ellie Pavlick.

In recent years there has been increasing excitement concerning the potential of Artificial Intelligence to transform human society. The meeting addressed present incompatibilities and similarities of human and machine reasoning and learning approaches, and involved presentations of relevant leading research in both Artificial Intelligence and Cognitive Science.

Recordings of the talks are now available below. An accompanying journal issue has been published in Philosophical Transactions of the Royal Society A.

Attending this event

This meeting has taken place.

Enquiries: contact the Scientific Programmes team


Schedule


Chair

09:00-09:05
Welcome
09:05-09:30
How could we make a social robot? A virtual bargaining approach

Abstract

What is required to allow an artificial agent to engage in rich, human-like interactions with people? The author argues that this will require capturing the process by which humans continually create and renegotiate “bargains” with each other, about who should do what in a particular interaction, which actions are allowed or forbidden, and the momentary conventions governing communication, including language. But such bargains are far too numerous, and social interactions too rapid, to be stated explicitly. Moreover, the very process of communication presupposes innumerable momentary agreements, thus raising the threat of circularity. Thus, the improvised “social contracts” that govern people's interactions must be implicit. The author draws on the recent theory of virtual bargaining, according to which social partners mentally simulate a process of negotiation, to outline how these implicit agreements can be made, and notes that this viewpoint raises substantial theoretical and computational challenges. Nonetheless, the author suggests that these challenges must be met to create artificial intelligence systems that can work collaboratively alongside people, rather than merely serving as helpful special-purpose computational tools.
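To make the central move concrete, here is a minimal sketch, under invented assumptions: each agent privately simulates the agreement that an explicit negotiation would reach and then plays its part of that agreement, without any actual communication. The payoff table and the max-min choice rule are illustrative stand-ins, not the theory's machinery.

```python
# Minimal sketch of virtual bargaining: both agents simulate the same
# negotiation over common-knowledge payoffs and act on its outcome.

JOINT_PAYOFFS = {
    # (agent_a_action, agent_b_action): (utility_a, utility_b)
    ("left", "left"):   (2, 2),
    ("left", "right"):  (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (3, 1),
}

def virtual_bargain(payoffs):
    # Stand-in bargaining solution: the joint action that maximizes the
    # worse-off party's utility (an assumption made for illustration).
    return max(payoffs, key=lambda joint: min(payoffs[joint]))

# Each agent runs the same simulation and plays its own component,
# so the 'bargain' is never actually communicated.
print(virtual_bargain(JOINT_PAYOFFS))  # ('left', 'left')
```

Because both agents run the same simulation over common knowledge, they converge on the same joint plan, which is the sense in which the bargain remains implicit.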

Speakers

09:30-09:45
Discussion
09:45-10:15
Representational change is integral to reasoning

Abstract

Artificial Intelligence emulations of reasoning have traditionally employed a representation of an environment consisting of a set of rules and facts expressed in a symbolic language. Goals are also expressed in this language and the rules and facts are used to solve them. For instance, a robot might use a representation of its environment to form plans to achieve goals.

The group argues that successful reasoning requires the representation to be fluid. Not only might the facts describing the environment need to change as the environment changes, but the rules might also change. Moreover, the language in which the rules and facts are expressed may also have to change. For instance, a rule might acquire a new precondition when the old rule is discovered to hold only in limited circumstances. A concept in the language might also need to be divided into variants, together with corresponding variants of the rules and facts that use it.

Mathematics provides the toughest test of this thesis. Surely, its language, rules and facts remain stable during the proof of a theorem. Not so. George Pólya has written the classic guide to the art of problem-solving (Pólya, 1945). Imre Lakatos has written a fascinating rational reconstruction of the evolution of mathematical methodology (Lakatos, 1976). Although it was not their intention to do so, both these authors have implicitly provided profound evidence for the thesis that representations should be fluid.
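As a concrete illustration, the following minimal sketch (a Python toy with invented predicate names, not any system from the talk) treats rules as data, so that a counterexample triggers a change to the representation itself: the rule acquires a new precondition rather than the fact base merely being patched.

```python
# Toy illustration of a 'fluid' representation: rules are data, so they can
# be repaired when a counterexample shows a rule holds only in limited
# circumstances. All predicate names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rule:
    head: str
    preconditions: frozenset

    def applies(self, facts: set) -> bool:
        return self.preconditions <= facts

def repair(rule: Rule, counterexample: set, missing: str) -> Rule:
    # The rule wrongly fires on the counterexample, so the representation
    # itself changes: the rule gains a new precondition.
    assert rule.applies(counterexample)
    return Rule(rule.head, rule.preconditions | {missing})

flies = Rule("flies", frozenset({"bird"}))   # every bird flies...
penguin = {"bird", "flightless"}             # ...until a penguin turns up
flies = repair(flies, penguin, missing="can_fly_unaided")

print(flies.applies({"bird", "can_fly_unaided"}))  # True
print(flies.applies(penguin))                      # False
```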

Speakers

10:15-10:30
Discussion
10:30-11:00
Coffee
11:00-11:30
Learning grounded and symbolic word representations with neural networks

Abstract

This talk will discuss the potential of neural network models to learn grounded and structured lexical concepts. The author will give an overview of two recent lines of work. The first asks whether neural networks trained to model the physical world can learn representations of concepts that align well to human language. The second asks whether neural networks' conceptual representations are compositional and structured in the way we expect human representations to be.
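For readers unfamiliar with how such alignment questions are operationalised, a toy probe is sketched below; the embedding vectors are invented and merely stand in for representations learned by a network trained to model the physical world.

```python
# Toy probe: if a learned space aligns with language, near-synonyms should
# be closer than unrelated words. The vectors here are invented.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {                       # pretend these were learned
    "cup":   np.array([0.9, 0.1, 0.3]),
    "mug":   np.array([0.8, 0.2, 0.4]),
    "cloud": np.array([0.1, 0.9, 0.2]),
}

print(cosine(embeddings["cup"], embeddings["mug"]))    # high
print(cosine(embeddings["cup"], embeddings["cloud"]))  # low
```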

Speakers

11:30-11:45
Discussion
11:45-12:15
DeepLog: reconciling machine learning and human bias

Abstract

Statistical machine learning typically achieves high-accuracy models by employing tens of thousands of examples. By contrast, both children and adult humans typically learn new concepts from one or a small number of instances. The high data efficiency of human learning is not easily explained in terms of standard formal frameworks for machine learning, including Gold's Learning-in-the-Limit framework and Valiant's Probably Approximately Correct (PAC) model. This presentation will explore ways in which this apparent disparity between human and machine learning can be reconciled by considering algorithms involving a preference for specificity combined with program minimality. The talk will show how this can be efficiently enacted using hierarchical search based on certificates, logical matrices and pushdown automata to support the learning of compactly expressed, maximally efficient algorithms. Early results of a new system called DeepLog indicate that such approaches can support efficient top-down identification of relatively complex logic programs from small numbers of examples.
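The interplay of these two biases can be shown with a deliberately simple sketch, given below; this is not the DeepLog algorithm, only an illustration of how a specificity preference yields learning from a single example. Hypotheses are integer intervals, and interval width stands in for both description size and specificity.

```python
# Toy sketch: learning an integer interval concept from very few examples.
# A preference for specificity picks the narrowest hypothesis consistent
# with the data, so one example already determines an answer.

def learn_interval(examples, domain=range(20)):
    candidates = [(lo, hi) for lo in domain for hi in domain if lo <= hi]
    consistent = [(lo, hi) for lo, hi in candidates
                  if all(lo <= x <= hi for x in examples)]
    # Most specific (narrowest) consistent interval wins.
    return min(consistent, key=lambda h: h[1] - h[0])

print(learn_interval({5}))     # (5, 5): a single example suffices
print(learn_interval({3, 7}))  # (3, 7)
```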

Speakers

12:15-12:30
Discussion

Chair

13:30-14:00
Argument and explanation

Abstract

The talk will bring together two closely related, but distinct, notions: argument and explanation. First, it will clarify their relationship and then provide an overview of relevant research on these notions, drawn from both the cognitive science and AI literatures. This will then be used to identify key directions for future research, indicating areas where bringing together cognitive science and AI perspectives would be mutually beneficial.

Speakers

14:00-14:15
Discussion
14:15-14:45
Language use as social reasoning

Abstract

Our use of language goes far beyond a simple decoding of the literal meaning of the speech signal. Philosophers have argued instead that language use involves complex reasoning about the knowledge and intentions of interlocutors. Yet language use is fast and effortless. How can language depend on complex reasoning while being so easy? In this talk Associate Professor Goodman will explore this question using both structured and autoregressive probabilistic models, considering cases including implicature and metaphor.
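One concrete instance of such structured probabilistic models is the rational-speech-acts style of analysis associated with this line of work. The minimal sketch below works through the standard scalar-implicature example ('some' versus 'all'); the toy lexicon and the rationality parameter are illustrative assumptions, not material from the talk.

```python
# Minimal rational-speech-acts sketch of scalar implicature.
import numpy as np

utterances = ["none", "some", "all"]
worlds = ["none", "some-not-all", "all"]

# Literal semantics: meaning[u][w] = 1 if utterance u is true in world w.
meaning = np.array([
    [1, 0, 0],   # "none"
    [0, 1, 1],   # "some" (true whenever at least one)
    [0, 0, 1],   # "all"
], dtype=float)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(meaning, axis=1)          # literal listener   P(w | u)
alpha = 4.0                              # speaker rationality (assumed)
S1 = normalize(L0 ** alpha, axis=0)      # pragmatic speaker  P(u | w)
L1 = normalize(S1, axis=1)               # pragmatic listener P(w | u)

print(dict(zip(worlds, L1[utterances.index("some")].round(3))))
```

Hearing 'some', the pragmatic listener puts most of its probability on 'some but not all': the implicature falls out of literal semantics plus recursive reasoning about a cooperative speaker.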

Speakers

14:45-15:00
Discussion
15:00-15:30
Tea
15:30-16:00
Short talks

Chair

09:00-09:30
Doing for our robots what nature did for us

Abstract

We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in 'the factory' (that is, at engineering time) and in 'the wild' (that is, when the robot is delivered to a customer). Professor Kaelbling will share some general thoughts about strategies for robot design and then talk in detail about some work they have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Speakers

09:30-09:45
Discussion
09:45-10:15
Language learning in humans and machines

Abstract

Why consider connections between language acquisition in humans and machines? The author argues that there are at least two reasons. On the one hand, developments in machine learning can potentially provide hypotheses or insights (and ideally, testable predictions) regarding human language acquisition. On the other hand, data and methods from behavioural experiments can be used to better understand the current limitations of engineered systems. To illustrate these two directions, the author discusses examples from her own work, focusing on early speech perception and morphological generalization. 

Speakers

10:15-10:30
Discussion
10:30-11:00
Coffee
11:00-11:30
Computational models of emotion prediction

Speakers

11:30-11:45
Discussion
11:45-12:15
Towards socially intelligent machines: thinking, learning, and communicating about the self

Abstract

Humans are not the only species that learn from and interact with others, but only humans actively involve their conspecifics in the process of acquiring abstract knowledge and navigating social norms that regulate, evaluate, and shape their own and others’ behaviours. What makes human social intelligence so distinctive, powerful, and smart?  In this talk, the author will present a series of studies that reveal the remarkably curious minds of young children, not only about the physical world but also about others and themselves. Children are curious about what others do and what their actions mean, what others know and what they ought to know, and even what others think of them and how to change their beliefs. Going beyond the idea that young children are like scientists who explore and learn about the external world, these results demonstrate how human social intelligence supports thinking, learning, and communicating about the self in a range of social contexts. These findings will be discussed in light of the recent interest in building socially intelligent machines that can interact and communicate with humans.

Speakers

12:15-12:30
Discussion

Chair

13:30-14:00
Understanding computational dialogue understanding

Abstract

Fifty years ago, in 1972, Terry Winograd published his seminal MIT dissertation as a book with the title Understanding Natural Language. His dialogue system SHRDLU was the first comprehensive AI system that modelled language understanding in an integrated way with all semiotic aspects of language: syntax, semantics and pragmatics combined with inference capabilities in a very simple model of the domain of discourse. 

Although from a cognitive modelling perspective various systems following this early paradigm covered quite sophisticated aspects of dialogue understanding, they had one huge disadvantage: they did not scale to open domains.

Today, statistical language models are more capable than ever before and help to realize scalable open-domain dialogue systems. Professor Wahlster reviews Google's recent LaMDA (Language Models for Dialogue Applications) system, which was built by fine-tuning Transformer-based neural language models, with up to 137 billion model parameters, trained on 1.56 trillion words.

But these systems have another huge disadvantage: they behave like stochastic parrots, have no explicit representation of the communicative intent of their utterances and are not able to deal with complex turn-taking in multi-party dialogues.  

In this talk, Professor Wahlster will show that a human-like dialogue system must not only understand and represent the user’s input, but also its own output. They argue that only the combined muscle of data-driven deep learning and model-based deep understanding approaches will ultimately lead to human-like dialogue systems, since natural language understanding is AI-complete and must include inductive, deductive and abductive reasoning methods.
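A schematic sketch of the hybrid architecture this argument points toward is given below: a data-driven generator proposes candidate responses, and a model-based component checks each candidate against an explicit representation of the intended communicative act. Every function is a placeholder; this is not a description of LaMDA or of any deployed system.

```python
# Placeholder sketch of combining data-driven generation with model-based
# understanding: the system represents its *own* output, not just the
# user's input, and only utters candidates whose parsed intent matches.

def neural_generator(history, n_candidates=3):
    # Stand-in for sampled continuations from a large language model.
    return [f"candidate response {i}" for i in range(n_candidates)]

def parse_intent(utterance):
    # Stand-in for a semantic parser mapping text to a communicative act.
    return {"act": "inform", "content": utterance}

def respond(history, intended_act):
    for candidate in neural_generator(history):
        if parse_intent(candidate)["act"] == intended_act:
            return candidate
    return "Sorry, could you rephrase that?"

print(respond(["Hello"], intended_act="inform"))
```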

Speakers

14:00-14:15
Discussion
14:15-14:45
Professor Josh Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

Abstract

Abstract will be available soon

14:45-15:00
Discussion
15:00-15:30
Tea
15:30-17:00
Panel discussion