This page is archived

Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.

Face2face: advancing the science of social interaction

04 - 05 April 2022 08:00 - 16:00

Scientific discussion meeting organised by Professor Antonia Hamilton and Dr Judith Holler.

New technologies and new theories are emerging to enable a scientific approach to studying face-to-face social interaction. This meeting brought together research in neuroscience, psychology, computer science, linguistics, anthropology and evolutionary biology to share different ways of studying face-to-face interaction, discuss common challenges and possible solutions, and thus to define the future of this growing field.

The schedule of talks, speaker biographies and abstracts are available below. An accompanying journal issue has been published in Philosophical Transactions of the Royal Society B.

Attending this event

This meeting has taken place.

Enquiries: contact the Scientific Programmes team

Organisers

  • Professor Antonia Hamilton, UCL, UK

    Professor Hamilton’s research has ranged across domains, from computational motor control to neurodevelopmental disorders and cognitive neuroscience. Her current focus is on understanding the brain and cognitive mechanisms of human nonverbal social interactions such as imitation and gaze in people with and without autism. This work uses new technologies to study real-world social interaction, including virtual reality, motion capture and fNIRS, as well as more traditional cognitive approaches. By using these innovative methods together with a strong focus on fundamental theories, Hamilton works to uncover basic mechanisms of human social behaviour and the differences in social behaviour in autism.

  • Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands

    Dr Judith Holler is Associate Professor at Radboud University Nijmegen and leads the research group 'Communication in Social Interaction (CoSI)' (Donders Institute for Brain, Cognition and Behaviour and Max Planck Institute for Psycholinguistics). With her group, she investigates human language in face-to-face social interaction. Her focus is on language as a multimodal, audio-visual phenomenon, and specifically on the semantic and pragmatic contributions of visual bodily signals (hands, head, gaze and face) to interlocutors’ language use and comprehension in dialogue. Dr Holler’s research focus on situated psycholinguistics is based on an interdisciplinary approach combining the micro-analysis of multimodal language, CA-informed corpus analyses of conversational interaction, and methods from psycholinguistics and neuroscience. Dr Holler’s research has been funded by the Economic and Social Research Council, the Leverhulme Trust, the British Academy, Parkinson’s UK, Marie Skłodowska-Curie Actions, and she has recently been awarded a prestigious European Research Council consolidator grant to pursue her research.

Schedule

Chair

Professor Antonia Hamilton, UCL, UK

08:00 - 08:05 Introduction
08:05 - 08:30 Learning to interact is learning to understand: computational approaches to model face to face interaction

Enabling human-like communication has long been both a driving force and a linchpin for AI systems. Recent data-driven approaches in fields like conversational AI, computational linguistics and social behaviour processing have yielded models that can process and generate impressive instances of communicative surface behaviour. However, today's systems still suffer from clear shortcomings when it comes to engaging in fluent, dynamic face-to-face interaction. Professor Kopp will discuss the current state of the art and point out principles and approaches that need to (and can) be used to build embodied conversational agents able to create and maintain a 'socially interactive and intelligent presence' in face-to-face interaction. This will include establishing multiple interaction loops, adapting dynamically to the evolving interaction context, and combining communicative behaviour processing with mental state attribution and dialogue models.
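
To make the idea of multiple interaction loops concrete, here is a minimal sketch of an agent that runs a fast reactive loop (for backchannels) alongside a slower deliberative loop (for planning full turns). All names, timings and rules are illustrative assumptions, not a description of Professor Kopp's systems.

```python
# Minimal sketch of the multi-loop idea: a fast reactive loop produces
# backchannels during pauses, while a slower deliberative loop plans full
# turns from the evolving context. All names, timings and rules here are
# illustrative assumptions, not a description of any specific system.
import time

class ConversationalAgent:
    def __init__(self):
        self.context = []                       # evolving interaction context
        self.last_partner_event = time.monotonic()

    def perceive(self, event: str):
        """Register a partner signal (speech fragment, gaze shift, gesture)."""
        self.context.append(event)
        self.last_partner_event = time.monotonic()

    def reactive_loop(self):
        """Fast loop (~10 Hz): backchannel if the partner pauses > 0.7 s."""
        if time.monotonic() - self.last_partner_event > 0.7:
            return "nod / 'mm-hm'"
        return None

    def deliberative_loop(self):
        """Slow loop (~1 Hz): plan a full turn, eg answer a question."""
        if self.context and self.context[-1].endswith("?"):
            return f"answer to: {self.context[-1]}"
        return None
```

The design point is simply that the two loops operate on different timescales over a shared, continuously updated context.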

Professor Stefan Kopp, Bielefeld University, Germany

08:30 - 08:45 Discussion
08:45 - 09:15 How can face to face studies improve understanding of autism?

Experiencing social interaction and social communication difficulties is core to a diagnosis on the autism spectrum. Understanding how and why social interaction differences occur between autistic and non-autistic people will facilitate understanding of how difficulties can be reduced. In order to create good models of social interaction and communication processes it is important to study behaviour in real-world contexts. Conducting well designed, rigorous face-to-face studies is practically challenging in many ways, but if we fail to do so, our understanding will be incomplete. In this talk, Dr Freeth will consider evidence from a range of different paradigms that aim to assess key processes and constructs involved in face-to-face interactions, including consideration of how these can be isolated and investigated. These include: (1) the perception of social presence; (2) the perception of agency as social or non-social; and (3) the experience of a live vs pre-recorded social encounter. Evidence will be drawn from paradigms that involve structured interactions, measuring behavioural responses and tracking eye movements using both desk-mounted and mobile eye-trackers. Overall, the evidence presented will demonstrate the potential that face-to-face interaction studies afford in furthering our understanding of the processes underlying social interaction and social communication.

Dr Megan Freeth, The University of Sheffield, UK

09:15 - 09:30 Discussion
09:30 - 10:00 Coffee
10:00 - 10:30 Neural sociometrics: measuring parent-infant neuro-social dynamics to index early mental health and brain development

During early life, healthy neurodevelopment depends on warm, responsive and closely-coordinated social interactions between infants and caregivers. These rich multidimensional sensory experiences act through multiple pathways to orchestrate healthy maturation of the neonatal brain, mind and body. Conversely, adverse early life experiences (including abuse or neglect) seed vulnerabilities for poor cognition and emotional instability throughout the lifespan. 

Despite the pivotal role played by social interaction in early development, we still lack medical models and precision tools that can accurately assess a child’s social interactive capacities (their ability to engage in and respond to social input) and measure the health of their social interactions. Similarly, although psychological tools exist to measure children’s cognitive capacities – including intelligence, attention, working memory, and inhibition skills – these tests cannot be used with young infants in the first year of life. Therefore, a gap exists in the early assessment and detection of abnormalities in infant psychosocial health and of their consequent impact on neurocognitive development. We currently miss many, if not most, early warning signs of suboptimal psychosocial development, along with valuable opportunities for prophylactic intervention, before a child’s first birthday.

Here, Associate Professor Leong will discuss neural sociometrics – real-time multi-sensor high-dimensional imaging of adult-infant social interactive behaviour and neurophysiology – as a possible tool and framework for precision measurement of early mental health and neurodevelopment. Early risk identification and mitigation, paired with precision therapeutics, could fundamentally alter a child’s developmental trajectory toward lifelong mental wellbeing and productivity.

Associate Professor Victoria Leong, Nanyang Technological University, Singapore and University of Cambridge, UK

10:30 - 10:45 Discussion
10:45 - 11:15 Vocal learning, chorusing seal pups, and the origins of human rhythm

Human music and speech are peculiar behaviours from a biological perspective: although extremely common in humans, at first sight they do not seem to confer any direct evolutionary advantage. In particular, many hypotheses have been proposed for the origins of rhythm, often in connection with vocal learning, a precursor to speech. Because music and speech do not fossilize, and lacking a time machine, the comparative approach provides a powerful tool to tap into human cognitive history. Notably, behaviours that are homologous or analogous to human rhythm and speech can be found across a few animal species and developmental stages. Hence, investigating rhythm across species is not only interesting in itself, but is crucial to unveil protomusical and protolinguistic behaviours present in early hominids. Here Dr Ravignani suggests how three strands of research – partly neglected until now – can be particularly fruitful in shedding light on the evolution of rhythm and vocal learning. First, he will present rhythm experiments in marine mammals, primates, and other species, suggesting that rhythm research in non-human animals can also benefit from ecologically relevant setups, combining strengths and knowledge from cognitive neuroscience and behavioural ecology. Second, he will discuss the interplay between vocal anatomy, (social) learning, and vocal development in harbor seal pups, arguing for their importance as model species for human speech. Finally, he will present computational modeling work on rhythmic and interactive signaling across species. These results suggest that, while many species may share one or more building blocks of speech and music, the ‘full package’ may be uniquely human.
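
As a flavour of what computational modelling of rhythmic, interactive signalling can look like, the sketch below couples two phase oscillators (a standard Kuramoto-style model) so that each 'caller' adjusts its timing toward the other; with sufficient coupling, their rhythms entrain. The parameters are illustrative assumptions, not values from Dr Ravignani's models.

```python
# Toy Kuramoto-style model of two interacting callers: each adjusts its
# call rhythm toward the other. Frequencies and coupling strength are
# illustrative assumptions only.
import numpy as np

def simulate(w1=2.0, w2=2.3, k=0.8, dt=0.01, steps=5000):
    """Two coupled phase oscillators; returns the phase difference over time."""
    phi = np.zeros((steps, 2))
    phi[0] = [0.0, np.pi]                       # start fully out of phase
    for t in range(1, steps):
        d1 = w1 + k * np.sin(phi[t - 1, 1] - phi[t - 1, 0])
        d2 = w2 + k * np.sin(phi[t - 1, 0] - phi[t - 1, 1])
        phi[t] = phi[t - 1] + dt * np.array([d1, d2])
    return np.abs(np.angle(np.exp(1j * (phi[:, 0] - phi[:, 1]))))

diff = simulate()
print(f"final phase difference: {diff[-1]:.2f} rad")  # near 0 => entrained
```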

Dr Andrea Ravignani, Max Planck Institute for Psycholinguistics, The Netherlands

11:15 - 11:30 Discussion

Chair

Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands

12:30 - 13:00 Neural systems that underlie face-to-face interactions: what does the human face tell the human brain?

Although emotional contagion is a primitive and foundational feature of social interactions, the underlying neural mechanisms are not well understood. Here we apply a dyadic approach to investigate face-based emotional contagion using hyperscanning and functional near-infrared spectroscopy. One participant, the 'Video-Watcher', watched emotionally provocative silent videos while the other, the 'Face-Watcher', watched the face of the Video-Watcher. The library of videos included 'neutrals', featuring landscapes; 'adorables', featuring animal antics; and 'creepy crawlies', featuring spiders, worms and states of decay. Ratings of positive and negative emotional experience were acquired simultaneously from both participants. The correlation of emotional ratings between Video- and Face-Watchers (r = 0.60) confirmed that emotion was transmitted by facial expressions and was taken as confirmation of emotional contagion. Facial Action Units of the Video-Watcher’s face were taken as the stimuli for the Face-Watcher’s brain. Neural responses included a complex of right TPJ regions consisting of the supramarginal gyrus, angular gyrus, and superior temporal gyrus (p < 0.05, FDR-corrected peak voxel), consistent with a face-emotion-sensing function. A dyadic model of emotional contagion and face sensing is suggested, in which human brains and faces are tuned to send and receive emotional states.
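
The dyadic rating check reported above amounts to a plain Pearson correlation between the two partners' simultaneous emotion ratings. A minimal sketch, using invented toy ratings rather than the study's data (which yielded r = 0.60):

```python
# Pearson correlation between the partners' simultaneous emotion ratings.
# The ratings below are invented toy values for illustration only.
import numpy as np

video_watcher = np.array([0.8, -0.6, 0.9, -0.9, 0.1, 0.7, -0.4, 0.3])
face_watcher = np.array([0.6, -0.3, 0.7, -0.8, 0.2, 0.4, -0.5, 0.1])

r = np.corrcoef(video_watcher, face_watcher)[0, 1]
print(f"dyadic rating correlation r = {r:.2f}")
```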

Professor Joy Hirsch, Yale University, USA and UCL, UK

13:00 - 13:15 Discussion
13:15 - 13:45 Face to face interaction in chimpanzee and human mother-infant-dyads

In WEIRD (Western, educated, industrialized, rich, democratic) societies, face-to-face interactions are an important component of mother-infant interactions. While facing each other, they engage in a variety of non-verbal means of communication, including facial expressions, such as smiles, and eye contact. It has been suggested that face-to-face interactions are unique to humans, as they are a form of emotional communication indicative of the long-lasting bond between mothers and their infants.

However, other great apes, including the closest relatives of humans, the chimpanzees, also form close and long-lasting relationships with their offspring, but little is known about the role of face-to-face contact in mother-infant interactions. In terms of quantitative measures, research has shown that gazing bouts in chimpanzees are shorter than in human mother-infant dyads. However, less is known about the quality of such face-to-face contact, as it remains an open question what chimpanzee mothers or their infants do while engaging in such interactions. 

Therefore, the group studied face-to-face contact between mothers and their infants in captive chimpanzees and humans from a Western society, to investigate both qualitative and quantitative aspects of their face-to-face interactions. They predicted that 1) human mothers spend more time engaging in face-to-face contact with their infants, and 2) both species vary with regard to the types of behaviours used during such gazing bouts. They used focal observations to video-record spontaneous interactions of ten human and eight chimpanzee mothers and their 6-month-old infants, and coded facial expressions, facial touches, and the duration of face-to-face contact. Where feasible, eye contact was also coded. As predicted, human dyads spent significantly more time in face-to-face interactions than chimpanzee dyads. The species also differed with regard to the behaviours shown during face-to-face contact: human mothers frequently engaged in direct eye contact and, to some extent, also mirrored the facial expressions of their infant, while chimpanzee mothers often touched the face of their infants, eg during grooming bouts. Thus, these findings indicate that during face-to-face contact, human mothers use a variety of behaviours to communicate with their infants, whereas in chimpanzees face-to-face contact seems to be a by-product of the mother’s bodily actions on the infant rather than a communicative situation.

Professor Katja Liebal, Leipzig University, Germany

13:45 - 14:00 Discussion
14:00 - 14:30 Tea
14:30 - 15:00 Conversation as the platform where minds meet

The game-changer that afforded human culture is widely considered to be language. But turning thoughts into vocalizations was only part of the story. The real innovation was the development of a common language – a sounds-to-meanings map shared across people – that enabled ideas to leap from one mind to the next. Conversation provided the platform on which minds meet and this exchange takes place. But how does conversation work?

We know very little about conversation because the fields of psychology and neuroscience have focused near-exclusively on the individual. In part this was because interacting minds create rich, complex data that were challenging to capture and analyse. However, recent technological advances have made the study of conversation newly tractable.

Here, Professor Wheatley will present research demonstrating how face-to-face conversation aligns minds by increasing neural synchrony, and how neural synchrony, in turn, changes conversation. She will also discuss evidence for why breaking interpersonal synchrony, from time to time, may be important to striking the right balance between shared and independent modes of thought.
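
One common way to quantify the neural synchrony discussed here is a sliding-window correlation between the two partners' time series. The sketch below is a generic illustration on synthetic signals, not Professor Wheatley's analysis pipeline:

```python
# Generic sliding-window correlation between two partners' signals; a
# common inter-brain synchrony index, shown here on synthetic data.
import numpy as np

def windowed_synchrony(a, b, win=100, step=50):
    """Pearson r between two equal-length series, in sliding windows."""
    return np.array([
        np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
        for i in range(0, len(a) - win + 1, step)
    ])

rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)              # a common conversational driver
brain_a = shared + 0.8 * rng.standard_normal(1000)
brain_b = shared + 0.8 * rng.standard_normal(1000)
print(f"mean synchrony: {windowed_synchrony(brain_a, brain_b).mean():.2f}")
```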

Professor Thalia Wheatley, Dartmouth College, USA

15:00 - 15:15 Discussion
15:15 - 15:45 Quirky conversations: how patients with schizophrenia do dialogue differently

Conversation is a collaborative process where speakers and listeners produce information together, continuously coordinating to incrementally co-construct the evolving content. Smooth turn exchange is achieved through tight coordination of interlocutors’ verbal and non-verbal behaviour and becomes problematic when this deviates from expectations. 

Patients with a diagnosis of schizophrenia are among the most socially excluded groups in society. It is well documented that they have problems with language and social cognitive skills, including self-monitoring and turn-taking, yet little research has investigated how these problems impact interaction. Interactions involving patients offer an opportunity to observe the strategies that people employ when interaction is problematic, and shed light on how ‘normal’ interactions are managed.

Using data from a corpus of triadic conversations containing 20 dialogues involving one patient with a diagnosis of schizophrenia and two healthy controls, and 20 dialogues involving three healthy participants, Dr Howes will show that dialogues involving a patient differ from control dialogues in terms of turn-taking, disfluencies, gesture and repair. Furthermore, the presence of the patient influences the behaviour of the healthy controls they interact with. The data support the idea that disfluencies are communicative solutions, not problems.

These unique data demonstrate not only that there are communication difficulties in schizophrenia but also that these difficulties impact social interactions more broadly, thus providing new insights into the social deficits of this complex disorder.
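
As an illustration of the kind of corpus measure involved, the sketch below computes inter-turn gaps from time-stamped turns; the three-speaker transcript format and the values are invented, not drawn from the actual corpus:

```python
# Inter-turn gaps (floor transfer offsets) from time-stamped turns. The
# three-speaker transcript and values are invented for illustration.
turns = [  # (speaker, start_s, end_s)
    ("patient", 0.0, 2.1), ("control1", 2.4, 5.0),
    ("control2", 5.1, 7.8), ("patient", 8.6, 9.9),
]

gaps = [
    (nxt[0], round(nxt[1] - cur[2], 2))   # incoming speaker, gap in seconds
    for cur, nxt in zip(turns, turns[1:])
]
print(gaps)  # [('control1', 0.3), ('control2', 0.1), ('patient', 0.8)]
```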

Dr Christine Howes, University of Gothenburg, Sweden

15:45 - 16:00 Discussion

Chair

Professor Antonia Hamilton, UCL, UK

08:00 - 08:30 Using social agents to study interaction without sacrificing experimental control

Face-to-face social interaction is akin to a dance. When it works well, participants seem tightly enmeshed, rapidly detecting and responding to each other’s words, movements and expressions. Studying such interactivity often involves a compromise between experimental rigor and ecological validity. Traditional scientific methods typically require breaking or altering natural interactivity in order to establish causality. For example, influential research on the role of emotional expressions in negotiations comes from people interacting with scripted computer agents where, unbeknownst to participants, their attempts at interaction and reciprocity are ignored by their computer partner. Less extreme approaches to experimental control are critiqued as purely correlational or as lacking scientific rigor. In this talk Professor Gratch will discuss how agent technology offers a middle ground in which candidate interactional theories can be learned automatically from actual interactions and then incorporated into the behavior of 'virtual confederates'. He will illustrate this approach with the study of rapport and emotions in negotiation.
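
A toy sketch of the 'virtual confederate' idea: estimate a simple behavioural contingency from logged human interactions, then let the learned rule drive the agent's response. The single-rule policy and the data are illustrative assumptions, not Professor Gratch's method:

```python
# Estimate one behavioural contingency (smile reciprocation) from logged
# human dyads, then let the learned rule drive a virtual confederate.
# Data and the single-rule policy are illustrative assumptions.
import random

logged = [  # (partner_smiled, subject_smiled_back) from past interactions
    (True, True), (True, True), (True, False), (False, False), (True, True),
]

p_reciprocate = (
    sum(1 for p, s in logged if p and s) / sum(1 for p, _ in logged if p)
)

def confederate_response(partner_smiled: bool) -> str:
    """Respond using the contingency learned from real interactions."""
    if partner_smiled and random.random() < p_reciprocate:
        return "smile"
    return "neutral"

print(confederate_response(True))
```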

Professor Jonathan Gratch, University of Southern California, USA

08:30 - 08:45 Discussion
08:45 - 09:15 On the multimodal nature of turn-taking: the interplay of talk, gaze and gesture in the coordination of turn transitions

Turn-taking is a fundamental and universal feature of conversation. A central question in research on turn-taking is how speakers recognize the points of possible turn completion where transitions occur. Over the last 50 years, a cumulative body of research in the field of conversation analysis (CA) has investigated turn-taking through naturalistic observation and qualitative description, identifying the precise linguistic cues that signal the relevance of transition. In this model, visible bodily actions play only a minimal role. Quantitative research outside the CA tradition has, however, argued that visual cues are in fact central to the organization of turn-taking, but these studies have tended to employ relatively coarse measures that compromise their informativeness. In this talk, Dr Kendrick will reconcile these disparate strands of research and present new evidence for the role that gaze and gesture play in the organization of turn-taking. The data come from a corpus of dyadic casual conversations in which participants wore eye-tracking glasses for direct measurement of their gaze while they were also recorded by multiple cameras for a fine-grained analysis of their gestures. Dr Kendrick will combine both quantitative and qualitative conversation-analytic methods to show how the direction of a speaker’s gaze and the temporal organization of their gestures influence the relevance of transition between speakers on a turn-by-turn basis. The findings, Dr Kendrick will argue, demonstrate the fundamentally multimodal nature of the human turn-taking system, a basic fact which has far-reaching implications for theories of language processing and the evolution of human communication.
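
To illustrate the quantitative side of this question, the sketch below tabulates, for annotated points of possible turn completion, how often a transition occurs depending on whether the speaker was gazing at the addressee. The annotation scheme and counts are invented for illustration:

```python
# At annotated points of possible turn completion, tabulate how often a
# transition occurs depending on speaker gaze. Counts are invented.
from collections import Counter

completion_points = [  # (speaker_gazed_at_addressee, transition_occurred)
    (True, True), (True, True), (False, False), (True, False),
    (False, False), (True, True), (False, True), (True, True),
]

tally = Counter(completion_points)
for gaze in (True, False):
    transitions = tally[(gaze, True)]
    total = transitions + tally[(gaze, False)]
    print(f"gaze at addressee={gaze}: transition rate {transitions / total:.2f}")
```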

Dr Kobin H Kendrick, University of York, UK

09:15 - 09:30 Discussion
09:30 - 10:00 Coffee
10:00 - 10:30 Getting attuned: social synchrony in early human development

Caregiver-infant interactions are characterized by interpersonal rhythms at different timescales, from nursery rhymes and interactive games to daily routines. These rhythms make the social environment more predictable for young children and facilitate interpersonal biobehavioral synchrony with their caregivers. In adults, the brain rhythms of interaction partners entrain to speech rhythms, supporting mutual comprehension and communication. Professor Hoehl will present recent evidence that this is also the case in the infant brain, especially when babies are addressed directly through infant-directed speech in naturalistic interactions. EEG allows us to disentangle the acoustic properties of infant-directed speech that support infant neural tracking, specifically prosody. By using simultaneous measures of neural and physiological rhythms, eg dual-fNIRS and dual-ECG, from caregiver and infant during live face-to-face interactions, we can further deepen our understanding of early interactional dynamics and their reciprocal nature. Professor Hoehl will present recent research identifying factors supporting the establishment of caregiver-infant neural synchrony, such as affectionate touch and vocal turn-taking. She will further discuss the functional links and dissociations between caregiver-infant synchrony on the neural and physiological levels. Both aspects of social synchrony are enhanced in a face-to-face interaction compared to a mutual passive viewing condition. Yet, in contrast to neural synchrony, physiological synchrony between caregiver and infant is related to infant affect. She will outline potential implications of this work and point out important future directions.
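
One standard index of the neural synchrony described here is the phase-locking value (PLV) between two simultaneously recorded, narrow-band signals. The sketch below uses synthetic caregiver and infant rhythms and is a generic illustration, not Professor Hoehl's pipeline:

```python
# Phase-locking value (PLV), a standard synchrony index for two narrow-band
# signals, shown on synthetic caregiver/infant rhythms.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two narrow-band signals (0 to 1)."""
    dphase = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphase)))

t = np.linspace(0, 10, 2000)
caregiver = np.sin(2 * np.pi * 6 * t)                       # 6 Hz rhythm
infant = np.sin(2 * np.pi * 6 * t + 0.4) + 0.3 * np.random.randn(2000)
print(f"caregiver-infant PLV = {plv(caregiver, infant):.2f}")  # near 1 => synchrony
```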

Professor Stefanie Hoehl, University of Vienna, Austria

10:30 - 10:45 Discussion
10:45 - 11:15 Towards a neurocognitive understanding of language production in social interaction

The cognitive processes underlying language production are typically investigated in experimental settings testing one speaker at a time. Yet we typically speak in social interactions, coordinating speaking and listening with two or more individuals. In this talk, Dr Anna Kuhlen will focus on how a speaker’s lexical access is shaped by social interaction. She will present experimental approaches that scale up classic picture-naming tasks typically used in speech production research to joint action settings in which two speakers take turns speaking. In these settings we can observe that a speaker’s latencies in naming pictures, a proxy for the ease of lexical access, are modulated not only by the semantic context generated by the speaker’s own prior utterances, but also by the semantic context generated by the partner’s utterances. On the one hand, this can lead to semantic interference and impede the speaker’s own production (Kuhlen and Abdel Rahman, 2017). On the other hand, a partner’s utterances can also facilitate production when joint picture naming is embedded in a setting in which it becomes part of a meaningful communicative game (Kuhlen and Abdel Rahman, 2021). These findings demonstrate that the processes of speech production are shaped by social interaction as they become part of a joint action. Moreover, observations made in single-subject settings might not transfer to social and communicative settings, highlighting the importance of investigating language production in the settings in which it typically occurs, namely social interaction.
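
A toy version of the joint-naming logic: compare naming latencies when the partner's prior utterances were semantically related versus unrelated to the current picture. The latencies are invented, and the real studies use trial-level statistical models, so this is purely illustrative:

```python
# Compare naming latencies after semantically related vs unrelated partner
# utterances. Latencies are invented; real analyses use trial-level models.
import statistics

latencies_ms = {
    "partner_related": [742, 768, 755, 790, 761],     # eg same category
    "partner_unrelated": [701, 715, 698, 722, 709],
}

for condition, values in latencies_ms.items():
    print(condition, round(statistics.mean(values)), "ms")
# Slower naming after related partner utterances would indicate
# partner-elicited semantic interference.
```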

Dr Anna K Kuhlen, Humboldt University of Berlin, Germany

11:15 - 11:30 Discussion

Chair

Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands

12:30 - 13:00 The role of emotional expressions during social interactions in humans and great apes

Social species’ capacity to express, recognize and share emotions enables them to navigate their social worlds and forms a core component of what it means to be socially competent and healthy. In order to evaluate another’s trustworthiness, individuals rely on various indicators of a safe interaction, including emotional expressions. The focus of most emotion research has been on explicit, isolated facial expressions. However, during real-life interactions, expressions can be more subtle, mixed and ambiguous, and go beyond the facial action units (eg blush, tears, pupil dilation). Further, the face is not perceived in isolation, but in the context of the body. In this talk, Professor Kret will present a series of studies in humans and great apes, ranging from computer tasks to observations in the natural environment, giving insight into the role of emotional expressions, in all their complexity, in social interactions.

Professor Mariska Kret, Leiden University, The Netherlands

13:00 - 13:15 Discussion
13:15 - 13:45 How interaction shapes communication systems: insights from birdsong and artificial language learning in humans

Communication systems are constantly shaped by their users. This is true not only of human language but also of other socially learned systems like birdsong. The processes of individual learning, social interaction and cross-generational cultural transmission mediate the relationship between individual cognition and the structural features of communication systems. Dr Feher studies these processes in a comparative framework using atypical songs and languages.

Like children, juvenile songbirds learn to vocalise in species-specific ways from their parents. In the absence of social and acoustic input, they improvise an isolate song. These songs are highly variable across individuals, but they contain similar features not normally seen in wild-type songs. When acquired by juvenile birds, isolate songs are modified in ways that reflect the innate biases of learners for wild-type song features. In humans, an atypical linguistic feature commonly used to study language evolution and language change is unpredictable variation. While variation is universally present in natural languages, unpredictable variation is extremely rare. Dr Feher uses artificial languages that exhibit this feature to observe people’s tendency to eliminate the unpredictability in their language. Learning, interaction and transmission all amplify learners’ biases and drive the emergence of species-typical features (ie wild-type song features and conditioned variation), but they favour different aspects of the communication systems and sometimes exert opposing forces on the way these features evolve. Dr Feher will discuss a number of experiments in songbirds and humans that have explored the individual and combined effects of interaction and transmission.   
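
As a flavour of how such biases can be modelled, the sketch below iterates a simple learner that estimates the proportion of one variant from its input and regularises toward the majority variant; over generations, unpredictable variation disappears. Parameters are illustrative assumptions, not fitted to Dr Feher's experiments:

```python
# Iterated learning with a regularisation bias: each generation estimates
# the proportion of variant A from its input, then nudges the estimate
# toward the majority variant. Parameters are illustrative assumptions.
import random

def learn(p_in, bias=0.2, n=100):
    """Sample n tokens from the input, then regularise toward 0 or 1."""
    p_est = sum(random.random() < p_in for _ in range(n)) / n
    return p_est + bias * (round(p_est) - p_est)

p = 0.6                     # near-unpredictable variation in generation 0
for generation in range(10):
    p = learn(p)
print(f"proportion of variant A after 10 generations: {p:.2f}")
```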

Dr Olga Feher, University of Warwick, UK

13:45 - 14:00 Discussion
14:00 - 14:30 Tea
14:30 - 15:00 Gesture, spatial cognition and the evolution of language

Recent work in primatology has established that, with the exception of humans, all of the Hominidae, the great apes, use a gestural form of communication as their primary interactional medium. Normal phylogenetic reasoning therefore implies that our own common ancestor with the two chimpanzee species was a gesturer, and this is in line with paleontological evidence from early Homo erectus. Today, although our primary communication channel is vocal, humans gesture frequently and spontaneously while speaking, and the earliest form of communication in infancy is gestural. All this suggests that an early hominin gestural system may have provided a base for the development of vocal language. Now, gesture is a spatial mode of communication, largely about space, and this raises questions about the role of spatial cognition in language. Recent work in neuroscience has shown that specialized cells in the hippocampus play a central role in spatial cognition, and that humans have partly repurposed the hippocampus for language and memory. Moreover, it has long been noted that spatial concepts underlie much linguistic structure. Putting this all together suggests that gesture may have played a crucial role in language evolution by importing spatial cognition into the heart of grammar.

Professor Stephen C Levinson FBA, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

15:00 - 15:15 Discussion
15:15 - 16:00 Panel discussion

Professor Antonia Hamilton, UCL, UK

Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands