Face2face: advancing the science of social interaction
Scientific discussion meeting organised by Professor Antonia Hamilton and Dr Judith Holler.
New technologies and new theories are emerging to enable a scientific approach to studying face-to-face social interaction. This meeting brought together research in neuroscience, psychology, computer science, linguistics, anthropology and evolutionary biology to share different ways of studying face-to-face interaction, discuss common challenges and possible solutions, and thus to define the future of this growing field.
The schedule of talks, speaker biographies and abstracts are available below. An accompanying journal issue has been published in Philosophical Transactions of the Royal Society B.
Attending this event
This meeting has taken place.
Enquiries: contact the Scientific Programmes team
Schedule
Chair
Professor Antonia Hamilton, UCL, UK
Professor Hamilton’s research has ranged across domains, from computational motor control to neurodevelopmental disorders and cognitive neuroscience. Her current focus is on understanding the brain and cognitive mechanisms of human nonverbal social interactions such as imitation and gaze in people with and without autism. This work uses new technologies to study real-world social interaction, including virtual reality, motion capture and fNIRS as well as more traditional cognitive approaches. By using these innovative methods together with a strong focus on fundamental theories, Hamilton works to uncover basic mechanisms of human social behaviour and the differences in social behaviour in autism.
08:00 - 08:05 | Introduction |
08:05 - 08:30 | Learning to interact is learning to understand: computational approaches to model face to face interaction |
Enabling human-like communication has long been a driving force, as well as a linchpin, for AI systems. Recent data-driven approaches in fields like conversational AI, computational linguistics, or social behaviour processing have yielded models that can process and generate impressive instances of communicative surface behaviour. However, today's systems still suffer from clear shortcomings when it comes to engaging in fluent, dynamic face to face interaction. Professor Kopp will discuss the current state of the art and point out principles and approaches that need to (and can) be used to build embodied conversational agents that are able to create and maintain a 'socially interactive and intelligent presence' in face to face interaction. This will include establishing multiple interaction loops, adapting dynamically to the evolving interaction context, and combining communicative behaviour processing with mental state attribution and dialogue models. Professor Stefan Kopp, Bielefeld University, Germany
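As a loose illustration of the 'multiple interaction loops' idea, the sketch below pairs a fast reactive loop with a slower deliberative loop in a toy agent. All class and method names are hypothetical; this is a sketch of the general architecture, not code from Professor Kopp's systems.

```python
class ToyConversationalAgent:
    """Hypothetical two-loop embodied agent: a fast loop produces
    immediate nonverbal feedback, a slow loop updates a dialogue
    model and attributes a (very crude) mental state to the partner."""

    def __init__(self):
        self.partner_model = {"engaged": True, "believed_goal": None}
        self.dialogue_state = []

    def fast_loop(self, gaze_on_agent: bool) -> str:
        # Reactive timescale (~100 ms): backchannel or repair attention.
        self.partner_model["engaged"] = gaze_on_agent
        return "nod" if gaze_on_agent else "re-establish gaze"

    def slow_loop(self, utterance: str) -> str:
        # Deliberative timescale: interpret the utterance against the
        # dialogue context and attribute an intention to the partner.
        self.dialogue_state.append(utterance)
        self.partner_model["believed_goal"] = f"communicate: {utterance}"
        return f"grounded response to '{utterance}'"

agent = ToyConversationalAgent()
print(agent.fast_loop(gaze_on_agent=True))   # nod
print(agent.slow_loop("shall we start?"))    # grounded response ...
```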
Professor Stefan Kopp, Bielefeld University, Germany
Stefan Kopp is Professor of Computer Science and head of the Social Cognitive Systems Group at Bielefeld University, Germany. He obtained his PhD in AI for work on generating fluent multimodal behaviour of artificial agents. After a postdoc stay at Northwestern University (IL) and a research fellowship at the Center for Interdisciplinary Research ZiF (Bielefeld), he has been deputy coordinator of CRC 673 'Alignment in Communication', principal investigator at the Center of Excellence 'Cognitive Interaction Technology' (CITEC), and chairman of the German Cognitive Science Society (GK). Stefan is internationally renowned and awarded for his interdisciplinary research at the intersection of human communication, embodied-cognitive models of social intelligence, and bootstrapping it with conversational agents or social robots. He has been involved as principal investigator in leading research projects on human/child-robot interaction, multimodal communication, cognitive assistants, and explainable systems. His current research interests centre around the interplay and coordinative dynamics of social cognition and communication in human-agent/robot interaction.
08:30 - 08:45 | Discussion |
08:45 - 09:15 | How can face to face studies improve understanding of autism? |
Experiencing social interaction and social communication difficulties is core to a diagnosis on the autism spectrum. Understanding how and why social interaction differences occur between autistic and non-autistic people will facilitate understanding of how difficulties can be reduced. In order to create good models of social interaction and communication processes it is important to study behaviour in real-world contexts. Conducting well-designed, rigorous face-to-face studies is practically challenging in many ways, but if we fail to do so, our understanding will be incomplete. In this talk, Dr Freeth will consider evidence from a range of different paradigms that aim to assess key processes and constructs involved in face-to-face interactions, including consideration of how these can be isolated and investigated. These include (1) the perception of social presence; (2) the perception of agency as social or non-social; and (3) the experience of a live vs pre-recorded social encounter. Evidence will be drawn from paradigms that involve structured interactions measuring behavioural responses and tracking of eye movements, using both desk-mounted and mobile eye-trackers. Overall, the evidence presented will demonstrate the potential that face-to-face interaction studies afford in furthering our understanding of the processes underlying social interaction and social communication. Dr Megan Freeth, The University of Sheffield, UK
Dr Megan Freeth, The University of Sheffield, UK
Dr Megan Freeth is Senior Lecturer in Psychology and Departmental Director of Research and Innovation at The University of Sheffield, and director of the Sheffield Autism Research Lab. The scope of Dr Freeth's research is broad: it includes eye-tracking studies of the mechanisms of social attention, neuroimaging and behavioural studies aimed at gaining insight into neural and cognitive divergence, and applied studies aimed at improving understanding of the lived experience of autism and genetic syndromes. Her work has an overarching goal of having a positive impact on the lives of autistic individuals and those with genetic syndromes via advancements in research.
09:15 - 09:30 | Discussion |
09:30 - 10:00 | Coffee |
10:00 - 10:30 | Neural sociometrics: measuring parent-infant neuro-social dynamics to index early mental health and brain development |
During early life, healthy neurodevelopment depends on warm, responsive and closely-coordinated social interactions between infants and caregivers. These rich multidimensional sensory experiences act through multiple pathways to orchestrate healthy maturation of the neonatal brain, mind and body. Conversely, adverse early life experiences (including abuse or neglect) seed vulnerabilities for poor cognition and emotional instability throughout the lifespan. Despite the pivotal role played by social interaction in early development, we still lack medical models and precision tools that can accurately assess a child’s social interactive capacities (their ability to engage in and respond to social input) and measure the health of their social interactions. Similarly, although psychological tools exist to measure children’s cognitive capacities – including intelligence, attention, working memory, and inhibition skills – these tests cannot be used in the first year of life with young infants. Therefore, a needs gap exists in the early assessment and detection of abnormalities in infant psychosocial health and their consequent impact on neurocognitive development. We currently miss many, if not most, early warning signs of suboptimal psychosocial development, along with valuable opportunities for prophylactic intervention, before a child’s first birthday. Here, Associate Professor Leong will discuss neural sociometrics – real-time multi-sensor high-dimensional imaging of adult-infant social interactive behaviour and neurophysiology – as a possible tool and framework for precision measurement of early mental health and neurodevelopment. Early risk identification and mitigation, paired with precision therapeutics, could fundamentally alter a child’s developmental trajectory toward lifelong mental wellbeing and productivity. Associate Professor Victoria Leong, Nanyang Technological University, Singapore and University of Cambridge, UK
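Dyadic-EEG studies of this kind often quantify adult-infant neural coupling as phase synchrony between the two partners' signals. The sketch below computes a generic phase-locking value (PLV); the sampling rate, frequency band and synthetic signals are invented for illustration and do not represent Professor Leong's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(6.0, 9.0)):
    """Generic PLV between two equal-length signals, band-passed to
    `band` Hz. Values near 1 mean a consistent phase relationship."""
    b, a = butter(4, [f / (fs / 2) for f in band], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy demo with synthetic data (not real adult or infant EEG).
fs = 250
t = np.arange(0, 10, 1 / fs)
adult = np.sin(2 * np.pi * 7 * t) + 0.5 * np.random.randn(t.size)
infant = np.sin(2 * np.pi * 7 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(f"PLV = {phase_locking_value(adult, infant, fs):.2f}")
```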
Associate Professor Victoria Leong, Nanyang Technological University, Singapore and University of Cambridge, UK
Victoria Leong is a developmental cognitive neuroscientist who has pioneered the use of dyadic-EEG to study parent-infant neural dynamics during social interaction, particularly in the context of play, communication and social learning. She is Associate Professor of Psychology and Medicine at Nanyang Technological University, Singapore and Honorary Senior Fellow with the Department of Pediatrics, Cambridge University, UK. She is also Deputy Director of the Cambridge-NTU Centre for Lifelong Individualised Learning which aims to develop neuropersonalised training programmes for flexible lifespan learning. Vicky is a recipient of the Federation of Associations in Behavioral and Brain Sciences Early Career Impact Award (2022), MOE Social Sciences and Humanities Research Fellowship (Singapore, 2021), Parke Davis Exchange Fellowship (Harvard, 2015), Sutasoma Junior Research Fellowship (Cambridge, 2013-15), Cognitive Science Society Glushko Dissertation Prize (2014) and holds awards from the Wellcome Trust, British Academy, UK Economic & Social Research Council, and Rosetrees Medical Trust.
10:30 - 10:45 | Discussion |
10:45 - 11:15 | Vocal learning, chorusing seal pups, and the origins of human rhythm |
Human music and speech are peculiar behaviours from a biological perspective: although extremely common in humans, at first sight they do not seem to confer any direct evolutionary advantage. In particular, many hypotheses have been proposed for the origins of rhythm, often in connection with vocal learning, a precursor to speech. Because music and speech do not fossilize, and lacking a time machine, the comparative approach provides a powerful tool to tap into human cognitive history. Notably, behaviours that are homologous or analogous to human rhythm and speech can be found across a few animal species and developmental stages. Hence, investigating rhythm across species is not only interesting in itself, but it is crucial to unveil protomusical and protolinguistic behaviours present in early hominids. Here Dr Ravignani suggests how three strands of research – partly neglected until now – can be particularly fruitful in shedding light on the evolution of rhythm and vocal learning. He will present rhythm experiments in marine mammals, primates, and other species, suggesting that rhythm research in non-human animals can also benefit from ecologically-relevant setups, combining strengths and knowledge from cognitive neuroscience and behavioural ecology. Second, he will discuss the interplay between vocal anatomy, (social) learning, and vocal development in harbor seal pups, arguing for their importance as model species for human speech. Finally, he will present computational modeling work on rhythmic and interactive signaling across species. These results suggest that, while many species may share one or more building blocks of speech and music, the ‘full package’ may be uniquely human. Dr Andrea Ravignani, Max Planck Institute for Psycholinguistics, The Netherlands
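A common idiom in computational models of rhythmic, interactive signalling is a pair of coupled oscillators that mutually adjust their phase. The Kuramoto-style toy below is a generic illustration with invented call rates and coupling strength, not a model from Dr Ravignani's work.

```python
import numpy as np

def coupled_calls(freq_a=2.0, freq_b=2.1, coupling=0.8, steps=2000, dt=0.01):
    """Two agents with different intrinsic call rates (Hz) pull each
    other's phase toward their own (Kuramoto-style coupling). Returns
    the wrapped phase difference over time; with sufficiently strong
    coupling it settles at a constant lag (phase locking)."""
    phase = np.zeros((steps, 2))
    for t in range(1, steps):
        a, b = phase[t - 1]
        phase[t, 0] = a + dt * (2 * np.pi * freq_a + coupling * np.sin(b - a))
        phase[t, 1] = b + dt * (2 * np.pi * freq_b + coupling * np.sin(a - b))
    return np.angle(np.exp(1j * (phase[:, 0] - phase[:, 1])))

drift = coupled_calls()
print("final phase difference (rad):", round(float(drift[-1]), 2))
```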
Dr Andrea Ravignani, Max Planck Institute for Psycholinguistics, The Netherlands
Andrea is an Independent Group Leader at the Max Planck Institute for Psycholinguistics, where he leads the Comparative Bioacoustics Research Group. He is also Associate Professor at the Center for Music in the Brain, Aarhus University, Denmark. Andrea has studied, researched and worked in several areas, including mathematics, biology, speech sciences, musicology, computer science and cognitive psychology – this multidisciplinarity is mirrored in his research team. Andrea’s current research group at the MPI is highly interdisciplinary, featuring 10 scientists from many disciplines, including cognitive neuroscience, ethology, experimental psychology, linguistics, communication sciences, computer science, AI, bioacoustics, primatology and marine mammalogy. Andrea’s research, and that of his group, tackles some fundamental questions: Why do we speak? And why are humans such musical animals? Could our abilities to speak and to make music be connected, sharing a common evolutionary history? Andrea investigates the evolutionary and biological bases of music cognition and flexible vocal sound production, and the role they played in the origins of music and speech in our species. His team performs sound recordings and behavioural non-invasive experiments in non-human animals (mostly seals), as a comparative effort to understand the evolutionary history of human capacities. The team complements animal research with human testing, neurobiological evidence, mathematical models, and agent-based simulations. Andrea has recently been awarded an ERC Starting Grant to investigate the origins of human rhythm using a multi-species and multi-method approach. He firmly believes in and supports kindness in science.
11:15 - 11:30 | Discussion |
Chair
Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands
Dr Judith Holler is Associate Professor at Radboud University Nijmegen and leads the research group 'Communication in Social Interaction (CoSI)' (Donders Institute for Brain, Cognition and Behaviour and Max Planck Institute for Psycholinguistics). With her group, she investigates human language in face-to-face social interaction. Her focus is on language as a multimodal, audio-visual phenomenon, and specifically on the semantic and pragmatic contributions of visual bodily signals (hands, head, gaze and face) to interlocutors’ language use and comprehension in dialogue. Dr Holler’s research focus on situated psycholinguistics is based on an interdisciplinary approach combining the micro-analysis of multimodal language, CA-informed corpus analyses of conversational interaction, and methods from psycholinguistics and neuroscience. Dr Holler’s research has been funded by the Economic and Social Research Council, the Leverhulme Trust, the British Academy, Parkinson’s UK, Marie Skłodowska-Curie Actions, and she has recently been awarded a prestigious European Research Council consolidator grant to pursue her research.
12:30 - 13:00 | Neural systems that underlie face-to-face interactions: what does the human face tell the human brain? |
Although emotional contagion is a primitive and foundational feature of social interactions, the underlying neural mechanisms are not well understood. Here we apply a dyadic approach to investigate face-based emotional contagion using hyperscanning and functional near infrared spectroscopy. One participant, the 'Video-Watcher', watched emotionally provocative silent videos and the other partner, the 'Face-Watcher', watched the face of the Video-Watcher. The library of videos included 'neutrals' featuring landscapes, 'adorables' featuring animal antics, and 'creepy crawlies' featuring spiders, worms and states of decay. Ratings of the positive and negative emotional experience were acquired simultaneously for both participants. The correlation of emotional ratings between the Video-Watchers and Face-Watchers (r = 0.60) confirmed that the emotion was transmitted by facial expressions and was taken as confirmation of emotional contagion. Facial Action Units of the Video-Watcher’s face were taken as the stimuli for the Face-Watcher’s brain. Neural responses included a complex of right TPJ regions consisting of supramarginal gyrus, angular gyrus, and superior temporal gyrus (p < 0.05, FDR corrected peak voxel), consistent with a face-emotion-sensing function. A dyadic model of emotion contagion and face sensing is suggested in which human brains and faces are tuned to send and receive emotional states. Professor Joy Hirsch, Yale University, USA and UCL, UK
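The reported r = 0.60 is a correlation between the two partners' simultaneous emotion ratings. A minimal sketch of that computation is below; the ratings are invented for illustration, not data from the study.

```python
import numpy as np

# Hypothetical per-video emotion ratings from one dyad: the
# Video-Watcher saw the clip; the Face-Watcher saw only the
# Video-Watcher's face.
video_watcher = np.array([4.5, -3.0, 0.2, 3.8, -2.5, 0.0, 4.1, -3.6])
face_watcher = np.array([3.9, -2.1, 0.5, 3.0, -1.8, -0.3, 3.5, -2.9])

# A high Pearson correlation is taken as evidence that the emotion
# was transmitted by facial expression alone.
r = np.corrcoef(video_watcher, face_watcher)[0, 1]
print(f"rating correlation r = {r:.2f}")
```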
Professor Joy Hirsch, Yale University, USA and UCL, UK
Humans, by nature, are irresistibly social. The overarching goal of Professor Hirsch's research is to understand the fundamental neural mechanisms that underlie these social behaviours. Live faces serve as the primary social stimuli for this research. Two-person neuroimaging based on near infrared spectroscopy is configured for real-time live face-to-face hyperscanning including simultaneous measures of facial classifications, eye-tracking, pupillometry, EEG, and behavioural reports of subjective effects. Emerging theoretical frameworks lead to high-level models of multi-function dyadic face and social processes, including cross-brain neural synchrony, suggesting that brain-to-brain coupled mechanisms are integrated components of dynamic face and social processing.
13:00 - 13:15 | Discussion |
13:15 - 13:45 | Face to face interaction in chimpanzee and human mother-infant dyads |
In WEIRD (Western, educated, industrialized, rich, democratic) societies, face-to-face interactions are an important component of mother-infant interactions. While facing each other, they engage in a variety of non-verbal means of communication, including facial expressions, such as smiles, and eye contact. It has been suggested that face-to-face interactions are unique to humans, as they are a form of emotional communication indicative of the long-lasting bond between mothers and their infants. However, other great apes, including the closest relatives of humans, the chimpanzees, also form close and long-lasting relationships with their offspring, but little is known about the role of face-to-face contact in their mother-infant interactions. In terms of quantitative measures, research has shown that gazing bouts in chimpanzees are shorter than in human mother-infant dyads. However, less is known about the quality of such face-to-face contact, as it remains an open question what chimpanzee mothers or their infants do while engaging in such interactions. Therefore, the group studied face-to-face contact between mothers and their infants in captive chimpanzees and humans from a Western society, to investigate both qualitative and quantitative aspects of their face-to-face interactions. They predicted that (1) human mothers spend more time engaging in face-to-face contact with their infants, and (2) the two species vary with regard to the types of behaviours used during such gazing bouts. They used focal observations to video-record spontaneous interactions of ten human and eight chimpanzee mothers and their six-month-old infants, and coded facial expressions, facial touches, and the duration of face-to-face contact. Where feasible, eye contact was also coded. As predicted, human dyads spent significantly more time in face-to-face interactions than chimpanzees. The species also differed with regard to the behaviours shown during face-to-face contact: human mothers frequently engaged in direct eye contact and, to some extent, also mirrored the facial expressions of their infant, while chimpanzee mothers often touched the face of their infants, eg, during grooming bouts. These findings thus indicate that during face-to-face contact, human mothers use different behaviours to communicate with their infants, whereas in chimpanzees face-to-face contact seems to be a by-product of the mother’s bodily actions on the infant rather than a communicative situation. Professor Katja Liebal, Leipzig University, Germany
Professor Katja Liebal, Leipzig University, Germany
Katja Liebal is a biologist and comparative psychologist interested in the evolution of human language. She uses a comparative approach to investigate whether, and which, building blocks of human language might already be present in our closest relatives, the nonhuman apes. She studies the variability of individual repertoires, the intentional and flexible usage of signals across social contexts to achieve different goals, and how young apes acquire their communicative repertoires. In this context, she also investigates facial communication in interactions between mothers and their infants over the infants’ first year of life. She works at Leipzig University, where she is currently heading the LeipzigLab, an interdisciplinary initiative to encourage collaborations between scholars from the humanities and natural sciences.
13:45 - 14:00 | Discussion |
14:00 - 14:30 | Tea |
14:30 - 15:00 | Conversation as the platform where minds meet |
The game-changer that afforded human culture is widely considered to be language. But turning thoughts into vocalizations was only part of the story. The real innovation was the development of a common language – a sounds-to-meanings map shared across people – that enabled ideas to leap from one mind to the next. Conversation provided the platform for minds to meet and for information to pass from one brain to another. But how does conversation work? We know very little about conversation because the fields of psychology and neuroscience have focused near-exclusively on the individual. In part this is because interacting minds create rich, complex data that are challenging to capture and analyse. However, recent technological advances have made the study of conversation newly tractable. Here, Professor Wheatley will present research demonstrating how face to face conversation aligns minds by increasing neural synchrony and how neural synchrony, in turn, changes conversation. She will also discuss evidence for why breaking interpersonal synchrony, from time to time, may be important to striking the right balance between shared and independent modes of thought. Professor Thalia Wheatley, Dartmouth College, USA
Professor Thalia Wheatley, Dartmouth College, USA
Thalia Wheatley is the Lincoln Filene Professor in Human Relations at Dartmouth College, external professor at the Santa Fe Institute, and inaugural Director of the Consortium for Interacting Minds. Her research programme investigates how people communicate with each other, how social networks form, and why social connection is so important for mental health.
15:00 - 15:15 | Discussion |
15:15 - 15:45 | Quirky conversations: how patients with schizophrenia do dialogue differently |
Conversation is a collaborative process in which speakers and listeners produce information together, continuously coordinating to incrementally co-construct the evolving content. Smooth turn exchange is achieved through tight coordination of interlocutors’ verbal and non-verbal behaviour and becomes problematic when this deviates from expectations. Patients with a diagnosis of schizophrenia are among the most socially excluded groups in society. It is well documented that they have problems with language and social cognitive skills, including with self-monitoring and turn-taking, yet little research has investigated how these impact interaction. Interactions involving patients offer an opportunity to observe the strategies that people employ when interaction is problematic, and they shed light on how ‘normal’ interactions are managed. Using data from a corpus of triadic conversations containing 20 dialogues involving one patient with a diagnosis of schizophrenia and two healthy controls, and 20 dialogues involving three healthy participants, Dr Howes will show that dialogues involving a patient differ from controls in terms of turn-taking, disfluencies, gesture and repair. Furthermore, the presence of the patient influences the behaviour of the healthy controls they interact with. The data support the idea that disfluencies are communicative solutions, not problems. This unique dataset demonstrates that not only are there communication difficulties in schizophrenia but that they also impact on social interactions more broadly, thus providing new insights into the social deficits of this complex disorder. Dr Christine Howes, University of Gothenburg, Sweden
Dr Christine Howes, University of Gothenburg, Sweden
Christine Howes is a senior lecturer in the Centre for Linguistic Theory and Studies in Probability (CLASP) at the Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg, Sweden. Her main interest is linguistic interaction and how people use the resources provided by language in conversation to coordinate their understanding, with a focus on phenomena that are often considered to be outside the remit of linguistics, such as repair, split utterances and laughter.
15:45 - 16:00 | Discussion |
Chair
Professor Antonia Hamilton, UCL, UK
Professor Hamilton’s research has ranged across domains, from computational motor control to neurodevelopmental disorders and cognitive neuroscience. Her current focus is on understanding the brain and cognitive mechanisms of human nonverbal social interactions such as imitation and gaze in people with and without autism. This work uses new technologies to study real-world social interaction, including virtual reality, motion capture and fNIRS as well as more traditional cognitive approaches. By using these innovative methods together with a strong focus on fundamental theories, Hamilton works to uncover basic mechanisms of human social behaviour and the differences in social behaviour in autism.
08:00 - 08:30 | Using social agents to study interaction without sacrificing experimental control |
Face-to-face social interaction is akin to a dance. When it works well, participants seem tightly enmeshed, rapidly detecting and responding to each other’s words, movements and expressions. Studying such interactivity often involves a compromise between experimental rigor and ecological validity. Traditional scientific methods typically require breaking or altering natural interactivity in order to establish causality. For example, influential research on the role of emotional expressions in negotiations comes from people interacting with scripted computer agents where, unbeknownst to participants, their attempts at interaction and reciprocity are ignored by their computer partner. Less extreme approaches to experimental control are critiqued as purely correlational or as lacking scientific rigor. In this talk Professor Gratch will discuss how agent technology offers a middle ground where candidate interactional theories can be learned automatically from actual interactions and then incorporated into the behavior of 'virtual confederates'. He will illustrate this approach with studies of rapport and emotions in negotiation. Professor Jonathan Gratch, University of Southern California, USA
Professor Jonathan Gratch, University of Southern California, USA
Jonathan Gratch is a Research Full Professor of Computer Science, Psychology and Media Arts and Practice at the University of Southern California (USC) and Director for Virtual Human Research at USC’s Institute for Creative Technologies. He completed his PhD in Computer Science at the University of Illinois in Urbana-Champaign in 1995. Dr Gratch’s research focuses on computational models of human cognitive and social processes, especially emotion, and explores these models’ role in advancing psychological theory and in shaping human-machine interaction. He is the founding Editor-in-Chief (retired) of IEEE’s Transactions on Affective Computing, founding Associate Editor of Affective Science, Associate Editor of Emotion Review and the Journal of Autonomous Agents and Multiagent Systems, and former President of the Association for the Advancement of Affective Computing (AAAC). He is a Fellow of AAAI, AAAC, and the Cognitive Science Society, and an ACM SIGART Autonomous Agents Award recipient.
08:30 - 08:45 | Discussion |
08:45 - 09:15 | On the multimodal nature of turn-taking: the interplay of talk, gaze and gesture in the coordination of turn transitions |
Turn-taking is a fundamental and universal feature of conversation. A central question in research on turn-taking is how speakers recognize the points of possible turn completion where transitions occur. Over the last 50 years, a cumulative body of research in the field of conversation analysis (CA) has investigated turn-taking through naturalistic observation and qualitative description, identifying the precise linguistic cues that signal the relevance of transition. In this model, visible bodily actions play only a minimal role. Quantitative research outside the CA tradition has, however, argued that visual cues are in fact central to the organization of turn-taking, but these studies have tended to employ relatively coarse measures that compromise their informativeness. In this talk, Dr Kendrick will reconcile these disparate strands of research and present new evidence for the role that gaze and gesture play in the organization of turn-taking. The data come from a corpus of dyadic casual conversations in which participants wore eye-tracking glasses for direct measurement of their gaze while they were also recorded by multiple cameras for a fine-grained analysis of their gestures. Dr Kendrick will combine both quantitative and qualitative conversation-analytic methods to show how the direction of a speaker’s gaze and the temporal organization of their gestures influence the relevance of transition between speakers on a turn-by-turn basis. The findings, Dr Kendrick will argue, demonstrate the fundamentally multimodal nature of the human turn-taking system, a basic fact which has far-reaching implications for theories of language processing and the evolution of human communication. Dr Kobin H Kendrick, University of York, UK
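A standard quantitative measure in this literature is the floor-transfer offset: the time from the end of one speaker's turn to the start of the next speaker's turn, where positive values indicate gaps and negative values overlaps. The sketch below computes it from invented timestamps; it is a generic illustration, not Dr Kendrick's analysis code.

```python
# Each turn: (speaker, start_s, end_s). Timestamps are invented.
turns = [
    ("A", 0.00, 1.80),
    ("B", 1.95, 3.40),   # starts 150 ms after A ends: a gap
    ("A", 3.30, 5.10),   # starts 100 ms before B ends: an overlap
    ("B", 5.35, 6.00),
]

# Floor-transfer offset = next turn's start minus previous turn's end.
offsets = [
    round(nxt[1] - prev[2], 2)
    for prev, nxt in zip(turns, turns[1:])
    if prev[0] != nxt[0]          # only count actual speaker changes
]
print("floor-transfer offsets (s):", offsets)  # [0.15, -0.1, 0.25]
```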
Dr Kobin H Kendrick, University of York, UK
Kobin H Kendrick is a conversation analyst who studies language use and embodied action in face-to-face social interaction. He earned his PhD in Linguistics at the University of California, Santa Barbara and was a staff scientist in the Language and Cognition Department at the Max Planck Institute for Psycholinguistics in the Netherlands before moving to the University of York, where he is currently a Senior Lecturer in Linguistics. His research employs observational and comparative methods to investigate the basic infrastructure of social interaction, including turn-taking, action-sequencing and repair, and he has recently begun to study the multimodal nature of prosocial behaviour in face-to-face interaction.
09:15 - 09:30 | Discussion |
09:30 - 10:00 | Coffee |
10:00 - 10:30 | Getting attuned: social synchrony in early human development |
Caregiver-infant interactions are characterized by interpersonal rhythms at different timescales, from nursery rhymes and interactive games to daily routines. These rhythms make the social environment more predictable for young children and facilitate interpersonal biobehavioral synchrony with their caregivers. In adults, the brain rhythms of interaction partners entrain to speech rhythms, supporting mutual comprehension and communication. Professor Hoehl will present recent evidence that this is also the case in the infant brain, especially when babies are addressed directly through infant-directed speech in naturalistic interactions. EEG allows us to disentangle the acoustic properties of infant-directed speech that support infant neural tracking, prosody in particular. By using simultaneous measures of neural and physiological rhythms, eg, dual-fNIRS and dual-ECG, from caregiver and infant during live face-to-face interactions, we can further deepen our understanding of early interactional dynamics and their reciprocal nature. Professor Hoehl will present recent research identifying factors supporting the establishment of caregiver-infant neural synchrony, such as affectionate touch and vocal turn-taking. She will further discuss the functional links and dissociations between caregiver-infant synchrony on the neural and physiological levels. Both aspects of social synchrony are enhanced in a face-to-face interaction compared to a mutual passive viewing condition. Yet, in contrast to neural synchrony, physiological synchrony between caregiver and infant is related to infant affect. She will outline potential implications of this work and point out important future directions. Professor Stefanie Hoehl, University of Vienna, Austria
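Infant neural tracking of speech is commonly quantified by relating the EEG to the speech amplitude envelope. The lagged-correlation sketch below uses synthetic signals and invented parameters; it illustrates the general idea rather than the lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # Hz; all values synthetic
t = np.arange(0, 20, 1 / fs)

# Fake prosodic envelope (~2 Hz modulation) and a fake EEG channel
# that tracks it at a 100 ms lag plus noise.
envelope = 1 + np.sin(2 * np.pi * 2 * t)
eeg = np.roll(envelope, int(0.1 * fs)) + rng.normal(size=t.size)

# Correlate envelope and EEG at a range of lags; the peak indicates
# how strongly (and how late) the brain signal follows the speech.
def lagged_r(lag):
    return np.corrcoef(envelope[: t.size - lag], eeg[lag:])[0, 1]

best_lag = max(range(31), key=lagged_r)
print(f"peak r = {lagged_r(best_lag):.2f} at {best_lag * 1000 // fs} ms")
```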
Professor Stefanie Hoehl, University of Vienna, Austria
Stefanie Hoehl is head of the Research Unit of Developmental Psychology at the University of Vienna and leads the Wiener Kinderstudien lab. She completed her undergraduate studies in psychology at Heidelberg University and received her PhD from the University of Leipzig in 2008. She completed her Habilitation at the University of Heidelberg in 2013. From 2016 to 2019 she led the Max Planck Research Group on Early Social Cognition at the MPI for Human Cognitive and Brain Sciences in Leipzig. Her research focuses on social and cognitive development in early childhood, is based on an interactionist perspective, and lies at the intersection of developmental psychology and cognitive neuroscience.
10:30 - 10:45 | Discussion |
10:45 - 11:15 | Towards a neurocognitive understanding of language production in social interaction |
The cognitive processes underlying language production are typically investigated in experimental settings testing one speaker at a time. Yet we typically speak in social interactions involving two or more individuals, coordinating speaking and listening. In this talk, Dr Anna Kuhlen will focus on how a speaker’s lexical access is shaped by social interaction. She will present experimental approaches that scale up classic picture-naming tasks typically used in speech production research to joint action settings in which two speakers take turns speaking. In these settings we can observe that a speaker’s latencies in naming pictures, a proxy for the ease of lexical access, are modulated not only by the semantic context generated by the speaker’s own prior utterances, but also by the semantic context generated by the partner’s utterances. On the one hand, this can lead to semantic interference and impede a speaker’s own production (Kuhlen and Abdel Rahman, 2017). On the other hand, a partner’s utterances can also facilitate production when joint picture naming is embedded in a setting in which picture naming becomes part of a meaningful communicative game (Kuhlen and Abdel Rahman, 2021). These findings demonstrate that processes of speech production are shaped by the social interaction as they become part of a joint action. Moreover, observations made in single-subject settings might not transfer to social and communicative settings, highlighting the importance of investigating language production in the settings in which it typically occurs, namely in social interaction. Dr Anna K Kuhlen, Humboldt University of Berlin, Germany
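A minimal sketch of the kind of latency contrast described here: naming times in a semantically related context versus an unrelated one. The numbers are invented for illustration; they are not data from these studies.

```python
import numpy as np

# Invented naming latencies (ms) for pictures named after a partner's
# utterance from the same semantic category vs an unrelated category.
related = np.array([812, 798, 845, 860, 790, 825, 871, 808])
unrelated = np.array([765, 742, 788, 760, 751, 770, 795, 749])

# Cumulative semantic interference predicts slower naming in the
# related context, even when the prior naming was the partner's.
effect = related.mean() - unrelated.mean()
print(f"interference effect = {effect:.0f} ms")
```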
Dr Anna K Kuhlen, Humboldt University of Berlin, Germany
Dr Anna K Kuhlen (PhD in Experimental Psychology, Stony Brook University, USA) investigates the cognitive architecture, and its neural implementation, that enables partner-adapted verbal and nonverbal communication. To do so, she adapts methods from experimental psychology and cognitive neuroscience and connects our understanding of how humans produce language to our understanding of how humans coordinate in social interaction. She researches and teaches at the Institute of Psychology at the Humboldt University of Berlin, Germany.
11:15 - 11:30 | Discussion |
Chair
Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands
Dr Judith Holler is Associate Professor at Radboud University Nijmegen and leads the research group 'Communication in Social Interaction (CoSI)' (Donders Institute for Brain, Cognition and Behaviour and Max Planck Institute for Psycholinguistics). With her group, she investigates human language in face-to-face social interaction. Her focus is on language as a multimodal, audio-visual phenomenon, and specifically on the semantic and pragmatic contributions of visual bodily signals (hands, head, gaze and face) to interlocutors’ language use and comprehension in dialogue. Dr Holler’s research focus on situated psycholinguistics is based on an interdisciplinary approach combining the micro-analysis of multimodal language, CA-informed corpus analyses of conversational interaction, and methods from psycholinguistics and neuroscience. Dr Holler’s research has been funded by the Economic and Social Research Council, the Leverhulme Trust, the British Academy, Parkinson’s UK, Marie Skłodowska-Curie Actions, and she has recently been awarded a prestigious European Research Council consolidator grant to pursue her research.
12:30 - 13:00 | The role of emotional expressions during social interactions in humans and great apes |
Social species’ capacity to express, recognize and share emotions enables them to navigate their social worlds and forms a core component of what it means to be socially competent and healthy. In order to evaluate another’s trustworthiness, individuals rely on various indicators of a safe interaction, including emotional expressions. The focus of most emotion research has been on explicit, isolated facial expressions. However, during interactions in real life, expressions can be more subtle, mixed and ambiguous, and they go beyond the facial action units (eg blush, tears, pupil dilation). Further, the face is not perceived in isolation, but in the context of the body. In this talk, Professor Kret will present a series of studies in humans and great apes, ranging from computer tasks to observations in the natural environment, giving insight into the role of emotional expressions, in all their complexity, in social interactions. Professor Mariska Kret, Leiden University, The Netherlands
Professor Mariska Kret, Leiden University, The Netherlands
Mariska Kret is a full professor in the Cognitive Psychology unit at Leiden University and leads the CoPAN lab. She began her PhD in 2007 in Tilburg (NL), studying the perception of body language, and then completed a postdoc at Kyoto University, investigating emotions in chimpanzees. Since 2015 she has held a permanent position at Leiden University. From investigations of expressions in healthy people, her research has expanded to comparative studies in great apes and patients with mental disorders. She is particularly interested in expressions that are genuine, beyond control and automatic. In her research she typically combines different methods, including fMRI, psychophysiology, eye-tracking, pupillometry, hormonal administration, behavioural observations and questionnaires assessing individual differences in personality. Her research is mainly funded by an ERC Starting Grant and national funding (NWO VIDI).
13:00 - 13:15 | Discussion |
13:15 - 13:45 | How interaction shapes communication systems: insights from birdsong and artificial language learning in humans |
Communication systems are constantly shaped by their users. This is true not only of human language but also of other socially learned systems like birdsong. The processes of individual learning, social interaction and cross-generational cultural transmission mediate the relationship between individual cognition and the structural features of communication systems. Dr Feher studies these processes in a comparative framework using atypical songs and languages. Like children, juvenile songbirds learn to vocalise in species-specific ways from their parents. In the absence of social and acoustic input, they improvise an isolate song. These songs are highly variable across individuals, but they contain similar features not normally seen in wild-type songs. When acquired by juvenile birds, isolate songs are modified in ways that reflect the innate biases of learners for wild-type song features. In humans, an atypical linguistic feature commonly used to study language evolution and language change is unpredictable variation. While variation is universally present in natural languages, unpredictable variation is extremely rare. Dr Feher uses artificial languages that exhibit this feature to observe people’s tendency to eliminate the unpredictability in their language. Learning, interaction and transmission all amplify learners’ biases and drive the emergence of species-typical features (ie wild-type song features and conditioned variation), but they favour different aspects of the communication systems and sometimes exert opposing forces on the way these features evolve. Dr Feher will discuss a number of experiments in songbirds and humans that have explored the individual and combined effects of interaction and transmission. Dr Olga Feher, University of Warwick, UK
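The elimination of unpredictable variation is often modelled as iterated learning with a regularization bias: each generation estimates variant frequencies from its input and nudges the estimate toward an extreme. The toy simulation below is a generic sketch with invented parameters, not a model from Dr Feher's experiments.

```python
import random

def learn(prob_variant_a, n_exposures=50, bias=0.15):
    """One learner: estimate the variant-A frequency from sampled
    input, then push the estimate toward the nearer extreme."""
    observed = sum(random.random() < prob_variant_a for _ in range(n_exposures))
    estimate = observed / n_exposures
    nudge = bias if estimate > 0.5 else -bias
    return min(1.0, max(0.0, estimate + nudge))

# Iterated transmission: each generation learns from the previous one,
# so the initially unpredictable variation is gradually regularized.
p = 0.6  # variant A initially used 60% of the time, unconditioned
for generation in range(8):
    p = learn(p)
print(f"after 8 generations, variant A probability = {p:.2f}")
```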
Dr Olga Feher, University of Warwick, UK
Dr Olga Feher received her PhD from CUNY in New York. Her doctoral thesis focused on the evolution of song culture in isolated zebra finch colonies. She won a postdoctoral fellowship at the RIKEN Brain Science Institute in Japan where she studied the role of self-feedback during development and its potential impact on birdsong evolution. She then received a Newton International Fellowship from the Royal Society and British Academy and joined the Centre for Language Evolution at the University of Edinburgh to study cultural evolutionary processes in humans using artificial languages. In 2017, she joined the Psychology Department at Warwick University where she continues to investigate how interaction and cultural transmission shape language. She is also dreaming of one day going back to studying songbirds.
13:45 - 14:00 | Discussion |
14:00 - 14:30 | Tea |
14:30 - 15:00 | Gesture, spatial cognition and the evolution of language |
Recent work in primatology has established that, with the exception of humans, all of the Hominidae, the great apes, use a gestural form of communication as their primary interactional medium. Normal phylogenetic reasoning therefore implies that our own common ancestor with the two chimpanzee species was a gesturer, and this is in line with palaeontological evidence from early Homo erectus. Today, although our primary communication channel is vocal, humans gesture frequently and spontaneously while speaking, and the earliest form of communication in infancy is gestural. All this suggests that an early hominin gestural system may have provided a base for vocal language development. Now, gesture is a spatial mode of communication largely about space, and this raises questions about the role of spatial cognition in language. Recent work in neuroscience has shown that specialized cells in the hippocampus play a central role in spatial cognition, and that humans have partly repurposed the hippocampus for language and memory. Moreover, it has long been noted that spatial concepts underlie much linguistic structure. Putting this all together suggests that gesture may have played a crucial role in language evolution by importing spatial cognition into the heart of grammar. Professor Stephen C Levinson FBA, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
Professor Stephen C Levinson FBA, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
Stephen C Levinson is emeritus director of the Max Planck Institute for Psycholinguistics and Professor emeritus of Comparative Linguistics at Radboud University Nijmegen. He is the author of over 300 publications on language and cognition, including Politeness (CUP), Pragmatics (CUP), Presumptive Meanings (MIT), Space in Language & Cognition (CUP), and has (co-)edited the collections Grammars of Space (CUP), Language Acquisition and Conceptual Development (CUP), Culture and Evolution (MIT), Roots of Sociality (Berg) and Turn-Taking in Human Communicative Interaction (Frontiers Media). His current research is focused on the cognitive foundations for communication, the relation of language to general cognition, and the evolution of human communication. He is a fellow of the British Academy, the Academia Europaea and the Australian Academy of the Humanities, and was awarded the 2020 Huxley Medal of the Royal Anthropological Institute. He has done extensive fieldwork on languages in India, Australia, Mexico and Papua New Guinea.
15:00 - 15:15 | Discussion |
15:15 - 16:00 | Panel discussion |
Professor Antonia Hamilton, UCL, UK
Professor Hamilton’s research has ranged across domains, from computational motor control to neurodevelopmental disorders and cognitive neuroscience. Her current focus is on understanding the brain and cognitive mechanisms of human nonverbal social interactions such as imitation and gaze in people with and without autism. This work uses new technologies to study real-world social interaction, including virtual reality, motion capture and fNIRS as well as more traditional cognitive approaches. By using these innovative methods together with a strong focus on fundamental theories, Hamilton works to uncover basic mechanisms of human social behaviour and the differences in social behaviour in autism.
Dr Judith Holler, Donders Institute (Radboud University) and MPI for Psycholinguistics, The Netherlands
Dr Judith Holler is Associate Professor at Radboud University Nijmegen and leads the research group 'Communication in Social Interaction (CoSI)' (Donders Institute for Brain, Cognition and Behaviour and Max Planck Institute for Psycholinguistics). With her group, she investigates human language in face-to-face social interaction. Her focus is on language as a multimodal, audio-visual phenomenon, and specifically on the semantic and pragmatic contributions of visual bodily signals (hands, head, gaze and face) to interlocutors’ language use and comprehension in dialogue. Dr Holler’s research focus on situated psycholinguistics is based on an interdisciplinary approach combining the micro-analysis of multimodal language, CA-informed corpus analyses of conversational interaction, and methods from psycholinguistics and neuroscience. Dr Holler’s research has been funded by the Economic and Social Research Council, the Leverhulme Trust, the British Academy, Parkinson’s UK, Marie Skłodowska-Curie Actions, and she has recently been awarded a prestigious European Research Council consolidator grant to pursue her research.