Gaschler, A., Huth, K., Giuliani, M., Kessler, I., De Ruiter, J. P., & Knoll, A. (2012, March 5-8). Modelling State of Interaction from Head Poses for Social Human-Robot Interaction. In Proceedings of the Gaze in Human-Robot Interaction Workshop held at the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012), Boston, US

In this publication, we analyse how humans use head pose in various states of an interaction, in both human-human and human-robot observations. Our scenario is the short-term, everyday interaction of a customer ordering a drink from a bartender. To empirically study the use of head pose in this scenario, we recorded 108 such interactions in real bars. The analysis of these recordings shows that (i) customers follow a defined script to order their drink—attention request, ordering, closing of interaction—and (ii) customers use head pose to nonverbally request the attention of the bartender, to signal the ongoing process, and to close the interaction.

Based on these findings, we design a hidden Markov model that reflects the typical interaction states in the bar scenario and implement it on the human-robot interaction system of the European JAMES project. We train the model with data from an automatic head pose estimation algorithm and additional body pose information. Our evaluation shows that the model correctly recognises the interaction state of a customer in 78% of all cases. More specifically, the model recognises the interaction state "attention to bartender" with 83% accuracy and "attention to another guest" with 73% accuracy, providing the robot with sufficient knowledge to begin, perform, and end interactions in a socially appropriate way.
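To illustrate the general idea, the following is a minimal sketch of a discrete hidden Markov model decoding interaction states from head pose observations via the Viterbi algorithm. The state names follow the abstract, but all probabilities, the observation alphabet, and the function names are illustrative assumptions, not the trained parameters or implementation from the paper.

```python
import numpy as np

# Illustrative interaction states (from the abstract) and a toy observation
# alphabet; the paper's actual model uses head pose estimates and body pose.
STATES = ["attention_to_bartender", "ordering", "attention_to_other_guest"]
OBS = {"head_to_bartender": 0, "head_away": 1}

# Assumed start, transition, and emission probabilities (not from the paper).
start = np.array([0.5, 0.2, 0.3])
trans = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.2, 0.1, 0.7],
])
emit = np.array([
    [0.9, 0.1],  # "attention to bartender" mostly emits head_to_bartender
    [0.8, 0.2],  # "ordering" also faces the bartender most of the time
    [0.1, 0.9],  # "attention to another guest" mostly emits head_away
])

def viterbi(observations):
    """Return the most likely state sequence for observation indices."""
    T, n = len(observations), len(STATES)
    logp = np.log(start) + np.log(emit[:, observations[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, observations[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

obs = [OBS["head_to_bartender"], OBS["head_to_bartender"], OBS["head_away"]]
print(viterbi(obs))
```

With these toy parameters, a customer who faces the bartender twice and then looks away is decoded as being in "attention to bartender" followed by "attention to another guest"; the real system additionally fuses continuous head pose estimates with body pose features.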