Virtual Humans capable of advanced social and environmental interaction help create immersive virtual worlds for educational role-playing games.
Learning by doing and experimenting is widely regarded as one of the most effective ways to learn. Role-playing games can aid in the development and assessment of job skills by immersing trainees in virtual worlds that resemble real-life contexts. Virtual Humans with advanced social interaction capabilities are required to realize this. We are developing a virtual interaction system that enables autonomous behavior for socially aware virtual characters during multi-party social interactions, in particular for negotiation and communication skills scenarios.
Interaction between virtual humans and users has so far focused on one-to-one settings; multi-party interaction involving several participants remains a challenge. Our goal is to develop autonomous behavior for virtual characters driven by sensory input from the real environment. We are particularly interested in estimating the appropriate timing of gaze behaviors to signal turn-taking decisions, as well as in the believable generation of non-verbal behavior.
We are also interested in the perception of animations and the evaluation of fully immersive virtual reality experiences, in collaboration with experts from the social sciences. With the advent of technologies that allow video-game designers to develop games with a high degree of immersion, incorporating neuroscientific knowledge into this process is a logical step. For instance, when designing non-player characters (NPCs) it is important to understand which aspects of an NPC’s appearance and behavior cause players to “feel” the emotions they would normally feel in the same interaction in the real world. A good way to test whether an NPC is successful is to measure the subject’s reactions to it using both subjective and objective behavioral measures.
|Aryel Beck, Zerrin Yumak, Nadia Magnenat Thalmann: Body movement generation for virtual characters and social robots. Social Signal Processing, Cambridge University Press, 2016 (to appear)
|Zerrin Yumak and Arjan Egges: Autonomous gaze animation for socially interactive virtual characters during multi-party interaction. Motion in Games, May 2016.
|Zerrin Yumak and Nadia Magnenat Thalmann: Multi-modal and multi-party social interactions. Context Aware Human-Robot and Human-Agent Interaction, Springer Publishing, November 2015.
|Zerrin Yumak, Jianfeng Ren, Nadia Magnenat Thalmann and Junsong Yuan: Tracking and fusion for multi-party interaction with a virtual character and social robot. ACM SIGGRAPH Asia, Workshop on Autonomous Virtual Humans and Social Robots, December 2014.
|Zerrin Yumak, Jianfeng Ren, Nadia Magnenat-Thalmann and Junsong Yuan: Modelling multi-party interactions among virtual characters, robots and humans. MIT Presence: Teleoperators and Virtual Environments, vol. 23, no. 2, 2014.
|Cathy Ennis and Arjan Egges: Perception of approach and reach in combined interaction tasks. Motion in Games, Springer, Heidelberg, 2013.
|Cathy Ennis, Ludovic Hoyet, Arjan Egges and Rachel McDonnell: Emotion capture: emotionally expressive characters for games. Motion in Games, 2013.