Posts by Collection

portfolio

publications

A deep neural network model for multi-view human activity recognition

Published in PLoS ONE 17(1), 2022

A deep learning model for recognizing human activities from multiple camera views.

Recommended citation: Putra, P.U., Shima, K., & Shimatani, K. (2022). A deep neural network model for multi-view human activity recognition. *PLoS ONE*, 17(1), e0262181.

Children’s Attention Function Evaluation System by Go/NoGo Game CatChicken

Published in 23rd SICE System Integration Division Conference (SICE SI 2022), CD-ROM, 2022

A Go/NoGo game–based system for evaluating children’s attention function using the CatChicken task.

Recommended citation: Yusei, D., Sakata, M., Mikami, H., Putra, P.U., Shima, K., & Shimatani, K. (2022). Children’s Attention Function Evaluation System by Go/NoGo Game CatChicken (Go/NoGoゲームCatChickenによる児の注意機能評価システム). *23rd SICE System Integration Division Conference (SICE SI 2022)*, ROMBUNNO.2A2-E11.

talks

Social signal processing in the Normativity Lab

In this talk, I gave a brief introduction to social signal processing and explained how it can benefit researchers studying human behavior and clinical psychology more broadly.

The dynamics of human action and perception during the coordination of a ball interception task

This study aims to investigate how humans coordinate and perceive actions in a dynamic ball interception task. In this task, two individuals must continuously coordinate their actions to keep a table tennis ball bouncing against a wall. We tracked the body and eye movements of both individuals to analyze their coordination. From these data, we extracted eye-movement and action features, such as anticipatory looks, ball pursuit duration, and the kinetic energy of the racket. To characterize individuals’ movement patterns, we analyzed the Lyapunov spectrum of their racket movements. In addition, we combined a Hidden Markov Model with the action features to identify transitions from stable to semi-stable coordination states. Our preliminary findings suggest that participants’ racket movements showed chaotic behavior in both short and long coordination sequences. This behavior may result from their attempts to compensate for their partner’s actions or their own errors. We also observed significant differences in eye and body movements during transitions from stable to semi-stable coordination: in the semi-stable state, pursuit durations became shorter and racket movements became more irregular than in the stable state. Overall, our study offers a quantitative framework for understanding the dynamics of human movement and perception during realistic interception tasks.
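The HMM-based segmentation named in the abstract can be sketched as follows. This is a minimal illustration only: the two states, the single pursuit-duration feature, and every parameter value are invented for the sketch and are not taken from the study.

```python
import numpy as np

# Toy 2-state HMM over a 1-D action feature (here: ball-pursuit duration
# per cycle, in seconds). State 0 = "stable" coordination (longer pursuit),
# state 1 = "semi-stable" (shorter pursuit). All parameters are made up.
means = np.array([0.8, 0.4])                  # per-state mean pursuit duration
stds = np.array([0.1, 0.1])                   # per-state std. deviation
log_trans = np.log(np.array([[0.9, 0.1],      # sticky transitions:
                             [0.2, 0.8]]))    # P(next state | current state)
log_start = np.log(np.array([0.5, 0.5]))

def log_gauss(x, mu, sd):
    """Log density of N(mu, sd^2) at x, vectorized over states."""
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def viterbi(obs):
    """Most likely hidden-state sequence for a 1-D observation sequence."""
    T = len(obs)
    delta = np.zeros((T, 2))                  # best log-prob ending in each state
    back = np.zeros((T, 2), dtype=int)        # backpointers
    delta[0] = log_start + log_gauss(obs[0], means, stds)
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_gauss(obs[t], means, stds)
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Synthetic sequence: long pursuits first (stable), then shorter (semi-stable).
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0.8, 0.1, 20), rng.normal(0.4, 0.1, 20)])
states = viterbi(obs)
```

In practice the emission and transition parameters would be fitted (e.g. by Baum–Welch) over multiple action features rather than fixed by hand as above.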

Should I trust facial expression recognition models?

Here, I presented practical guidance on how to use deep-learning–based facial expression recognition models. In the talk, I explained to the community which aspects of these models are useful and what their limitations are. I also highlighted factors they should be aware of, such as the lateral position of the face in the image and the demographic background of participants whose facial expressions are being analyzed automatically.

Decoding joint action success through eye movements: A data-driven approach

Humans have coordinated with one another, animals, and machines for centuries, yet the mechanisms that enable seamless collaboration without extensive training remain poorly understood. Previous research on human-human and human-agent coordination—often relying on simplified paradigms—has identified variables such as action prediction, social traits, and action initiation as key contributors to successful coordination. However, how these factors interact and influence coordination success in ecologically valid settings remains unclear. In this study, we reverse-engineered the coordination process in a naturalistic, turn-taking table tennis task while controlling for individual skill levels. We found that well-calibrated internal models—reflected in individuals’ ability to predict their own actions—strongly predict coordination success, even without prior extensive training. Using multimodal tracking of eye and body movements combined with machine learning, we demonstrate that dyads with similarly accurate self-prediction abilities coordinated more effectively than those with lower or less similar predictive skills. These well-calibrated individuals were also better at anticipating the timing of their partners’ actions and relied less on visual feedback to initiate their own, enabling more proactive rather than reactive responses. These findings support motor control theories, suggesting that internal models used for individual actions can extend to social interactions. This study introduces a data-driven framework for understanding joint action, with practical implications for designing collaborative robots and training systems that promote proactive control. More broadly, our approach—combining ecologically valid tasks with computational modeling—offers a blueprint for investigating complex social interactions in domains such as sports, robotics, and rehabilitation.
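The data-driven decoding step can be illustrated with a small synthetic example: a classifier predicting dyad success from per-dyad features. The feature names (self-prediction accuracy, partner similarity, anticipation timing) echo the abstract, but the data-generating rule and all values below are invented for this sketch and carry no relation to the study's actual data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic dyads: each has two partners with a self-prediction accuracy in
# [0, 1] and an anticipatory-look timing feature (arbitrary units).
rng = np.random.default_rng(1)
n = 200
self_pred = rng.uniform(0, 1, (n, 2))          # each partner's self-prediction accuracy
anticipation = rng.normal(0, 1, n)             # anticipatory-look timing (a.u.)
similarity = 1 - np.abs(self_pred[:, 0] - self_pred[:, 1])
skill = self_pred.mean(axis=1)

# Toy generative rule (invented): success is likelier when both partners
# predict their own actions well and similarly, plus noise.
logit = 4 * skill + 2 * similarity + 0.5 * anticipation - 4
success = rng.random(n) < 1 / (1 + np.exp(-logit))

# "Decoding": cross-validated classification of success from the features.
X = np.column_stack([skill, similarity, anticipation])
scores = cross_val_score(LogisticRegression(), X, success, cv=5)
```

The point of the sketch is the workflow (features per dyad, cross-validated decoding of a success label), not the specific model; the study's pipeline is richer, combining multimodal eye and body tracking.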

teaching
