Publications
2024
Scholarly publications
Wu, S., Haque, K. I., & Yumak, Z. (2024). ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE. In S. N. Spencer (Ed.), Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games (Article 15). Association for Computing Machinery. https://doi.org/10.1145/3677388.3696320
2019
Scholarly publications
Klein, A., Yumak, Z., Beij, A., & van der Stappen, A. F. (2019). Data-driven gaze animation using recurrent neural networks. In H. P. H. Shum, & E. S. L. Ho (Eds.), Proc. Motion, Interaction and Games (Article 4). Association for Computing Machinery. https://doi.org/10.1145/3359566.3360054
https://dspace.library.uu.nl/bitstream/handle/1874/390089/3359566.3360054.pdf?sequence=1
2018
Scholarly publications
Charalambous, C., Yumak, Z., & van der Stappen, A. F. (2018). Audio-driven Emotional Speech Animation. Poster session presented at Eurographics 2018.
Harel, R., Yumak, Z., & Dignum, F. P. M. (2018). Towards a generic framework for multi-party dialogue with virtual humans. Paper presented at Computer Animation and Social Agents 2018.
2016
Scholarly publications
Beck, A., Yumak, Z., & Magnenat Thalmann, N. (2016). Body movement generation for virtual characters and social robots. In Social Signal Processing.
Other output
Yumak, Z., & Egges, J. (2016). Autonomous gaze animation for socially interactive virtual characters during multi-party interaction.