Talks and presentations

Decoding joint action success through eye movements: A data-driven approach

August 25, 2025

Talk, 47th European Conference on Visual Perception, Mainz, Germany

Humans have coordinated with one another, with animals, and with machines for centuries, yet the mechanisms that enable seamless collaboration without extensive training remain poorly understood. Previous research on human-human and human-agent coordination—often relying on simplified paradigms—has identified variables such as action prediction, social traits, and action initiation as key contributors to successful coordination. However, how these factors interact and influence coordination success in ecologically valid settings remains unclear. In this study, we reverse-engineered the coordination process in a naturalistic, turn-taking table tennis task while controlling for individual skill levels. We found that well-calibrated internal models—reflected in individuals’ ability to predict their own actions—strongly predict coordination success, even without prior extensive training. Using multimodal tracking of eye and body movements combined with machine learning, we demonstrate that dyads with similarly accurate self-prediction abilities coordinated more effectively than those with lower or less similar predictive skills. These well-calibrated individuals were also better at anticipating the timing of their partners’ actions and relied less on visual feedback to initiate their own actions, enabling more proactive rather than reactive responses. These findings support motor control theories, suggesting that internal models used for individual actions can extend to social interactions. This study introduces a data-driven framework for understanding joint action, with practical implications for designing collaborative robots and training systems that promote proactive control. More broadly, our approach—combining ecologically valid tasks with computational modeling—offers a blueprint for investigating complex social interactions in domains such as sports, robotics, and rehabilitation.
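The abstract does not detail the decoding pipeline, but the dyad-level idea (both partners accurate at self-prediction, and similarly so, predicts better coordination) can be sketched in a few lines. Everything below is a hypothetical illustration with synthetic numbers, not the study's data or method:

```python
# Hypothetical sketch: relate a dyad-level "self-prediction similarity"
# feature to coordination success. All numbers are synthetic illustrations.

def pearson(x, y):
    """Pearson correlation coefficient, implemented from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Per-player self-prediction accuracy (0..1) for each dyad (player_a, player_b),
# paired with a synthetic coordination-success score for that dyad.
dyads = [
    ((0.90, 0.88), 0.85),
    ((0.85, 0.83), 0.80),
    ((0.90, 0.55), 0.45),
    ((0.60, 0.58), 0.50),
    ((0.50, 0.92), 0.40),
]

# Dyad feature: mean accuracy minus the within-dyad mismatch, so dyads that
# are both accurate AND similar in accuracy score highest.
similarity = [(a + b) / 2 - abs(a - b) for (a, b), _ in dyads]
success = [s for _, s in dyads]

r = pearson(similarity, success)
print(round(r, 2))  # high positive correlation for this synthetic data
```

The feature definition (mean minus absolute difference) is one simple way to encode "accurate and similar" in a single number; the actual study used machine learning over multimodal eye- and body-movement features.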

Should I trust facial expression recognition models?

June 04, 2025

Talk, Emotion Recognition Community in Developmental Psychology, Online

In this talk, I presented practical guidance on using deep-learning–based facial expression recognition models, explaining to the community which aspects of these models are useful and what their limitations are. I also highlighted factors they should be aware of, such as the lateral position of the face in the image and the demographic background of the participants whose facial expressions are being analyzed automatically.

The dynamics of human action and perception during the coordination of a ball interception task

August 02, 2024

Talk, Dynamics Days Europe, Bremen, Germany

This study investigates how humans coordinate and perceive actions in a dynamic ball interception task. In this task, two individuals must continuously coordinate their actions to keep a table tennis ball bouncing towards the wall. We tracked the body and eye movements of individuals to analyze their coordination. From these data, we extracted eye-movement and action features, such as anticipatory looks, ball pursuit duration, and the kinetic energy of the racket. To understand individuals’ movement patterns, we analyzed the Lyapunov spectrum of their racket movements. In addition, a Hidden Markov Model combined with the action features was employed to identify the transition from stable to semi-stable coordination states. Our preliminary findings suggest that participants’ racket movements showed chaotic behavior in both short and long coordination sequences. This behavior may result from their attempts to compensate for their partner’s actions or their own errors. We also observed significant differences in eye and body movements when transitioning from stable to semi-stable coordination: in the semi-stable state, pursuit durations became shorter and racket movements became more irregular compared to the stable state. Overall, our study offers a quantitative framework for understanding the dynamics of human movement and perception during realistic interception tasks.
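The abstract mentions using a Hidden Markov Model over action features to identify transitions from stable to semi-stable coordination. As a minimal, hypothetical sketch of that idea (the two states, the discretized observations, and every probability below are illustrative placeholders, not fitted values from the study), a two-state HMM decoded with the Viterbi algorithm over binned pursuit durations could look like:

```python
import math

# Hypothetical two-state HMM: coordination is "stable" or "semi-stable".
# Observations are discretized pursuit durations: "long" or "short".
# All probabilities are illustrative, not fitted values from the study.
states = ["stable", "semi-stable"]
start = {"stable": 0.8, "semi-stable": 0.2}
trans = {
    "stable":      {"stable": 0.9, "semi-stable": 0.1},
    "semi-stable": {"stable": 0.2, "semi-stable": 0.8},
}
emit = {
    "stable":      {"long": 0.7, "short": 0.3},
    "semi-stable": {"long": 0.2, "short": 0.8},
}

def viterbi(obs):
    """Return the most likely hidden-state sequence for obs (log-space)."""
    v = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            ptr[s] = prev
            col[s] = v[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
        v.append(col)
        back.append(ptr)
    # Backtrack from the best final state to recover the full path.
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = ["long", "long", "long", "short", "short", "long", "short", "short"]
print(viterbi(obs))
```

On this toy sequence the decoder places the switch to the semi-stable state at the run of short pursuits, which mirrors the reported pattern that pursuit durations shorten in the semi-stable state.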

Social signal processing in the Normativity Lab

November 23, 2023

Talk, Normativity Lab, University of Konstanz, Konstanz, Germany

In this talk, I gave a brief introduction to social signal processing and explained how it can benefit researchers studying human behavior and clinical psychology more broadly.