Assessing Facial Expressions in Virtual Reality Environments

Catarina Runa Miranda, Verónica Costa Orvalho

Abstract

Humans rely on facial expressions to convey information, such as mood and intentions, that verbal communication channels usually do not provide. Recent advances in consumer-level Virtual Reality (VR) (Oculus VR 2014) have shifted the way we interact with each other and with digital media. Today, we can enter a virtual environment and communicate through a 3D character. Hence, to reproduce users' facial expressions in VR scenarios, we need on-the-fly animation of the embodied 3D characters. However, current facial animation approaches based on Motion Capture (MoCap) break down under the persistent partial occlusions produced by VR headsets. The only available solution to this occlusion problem is not suitable for consumer-level applications, as it depends on complex hardware and calibrations. In this work, we propose consumer-level methods for facial MoCap in VR environments. We first deploy an occlusion-support method for generic facial MoCap systems. Then, we extract facial features to train Random Forest algorithms that accurately estimate emotions and movements in occluded facial regions. With our novel methods, MoCap approaches can track non-occluded facial movements and estimate movements in occluded regions, without additional hardware or tedious calibrations. We deliver and validate solutions that facilitate face-to-face communication through facial expressions in VR environments.
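The estimation step described above — predicting movements in occluded facial regions from features tracked in visible regions — can be sketched with an off-the-shelf Random Forest regressor (Breiman 2001). This is a minimal illustration under stated assumptions, not the authors' implementation: the landmark counts, feature layout, and synthetic training data below are all hypothetical.

```python
# Sketch: estimate occluded upper-face landmark displacements from
# visible lower-face landmark displacements using a Random Forest.
# Landmark counts and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: each row holds 2D displacements of 20 lower-face landmarks
# (visible below the headset, 40 values); targets are displacements of
# 10 upper-face landmarks occluded by the HMD (20 values).
n_frames = 500
X = rng.normal(size=(n_frames, 40))
# Synthetic coupling between visible and occluded regions, plus noise.
y = 0.5 * X[:, :20] + rng.normal(scale=0.1, size=(n_frames, 20))

# Train on the first 400 frames, predict the remaining 100.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
print(pred.shape)  # (100, 20): one estimate per occluded landmark axis
```

In a real pipeline the inputs would be geometric features extracted by the tracker (e.g. landmark positions from a deformable-model fit), and the targets would come from unoccluded recordings of the same subject; the regressor then fills in the headset-covered region at runtime.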

References

  1. Biocca, F. (1997). The cyborg's dilemma: Progressive embodiment in virtual environments. Journal of Computer-Mediated Communication, 3(2):0-0.
  2. Bombari, D., Schmid, P. C., Schmid Mast, M., Birri, S., Mast, F. W., and Lobmaier, J. S. (2013). Emotion recognition: The role of featural and configural face information. The Quarterly Journal of Experimental Psychology, 66(12):2426-2442.
  3. Breiman, L. (2001). Random forests. Machine Learning, 45(1):5-32.
  4. Cao, C., Hou, Q., and Zhou, K. (2014). Displaced dynamic expression regression for real-time facial tracking and animation. ACM Transactions on Graphics (TOG), 33(4):43.
  5. Cao, C., Weng, Y., Lin, S., and Zhou, K. (2013). 3d shape regression for real-time facial animation. ACM Trans. Graph., 32(4):41.
  6. Eisenbarth, H. and Alpers, G. W. (2011). Happy mouth and sad eyes: scanning emotional facial expressions. Emotion, 11(4):860.
  7. Ekman, P. and Friesen, W. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto.
  8. Ekman, P. and Friesen, W. V. (1975). Unmasking the face: A guide to recognizing emotions from facial cues.
  9. Fuentes, C. T., Runa, C., Blanco, X. A., Orvalho, V., and Haggard, P. (2013). Does my face fit?: A face image task reveals structure and distortions of facial feature representation. PloS one, 8(10):e76805.
  10. Jack, R. E. (2013). Culture and facial expressions of emotion. Visual Cognition, 00(00):1-39.
  11. Kilteni, K., Groten, R., and Slater, M. (2012). The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments, 21(4):373-387.
  12. Lang, C., Wachsmuth, S., Hanheide, M., and Wersing, H. (2012). Facial communicative signals. International Journal of Social Robotics, 4(3):249-262.
  13. Li, H., Trutoiu, L., Olszewski, K., Wei, L., Trutna, T., Hsieh, P.-L., Nicholls, A., and Ma, C. (2015). Facial performance sensing head-mounted display. ACM Transactions on Graphics (Proceedings SIGGRAPH 2015), 34(4).
  14. Li, H., Yu, J., Ye, Y., and Bregler, C. (2013). Realtime facial animation with on-the-fly correctives. ACM Transactions on Graphics, 32(4).
  15. Loconsole, C., Runa Miranda, C., Augusto, G., Frisoli, G., and Costa Orvalho, V. (2014). Real-time emotion recognition: a novel method for geometrical facial features extraction. 9th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2014), 01:378-385.
  16. Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010). The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages 94-101. IEEE.
  17. Magnenat-Thalmann, N., Primeau, E., and Thalmann, D. (1988). Abstract muscle action procedures for human face animation. The Visual Computer, 3(5):290-297.
  18. McCloud, S. (1993). Understanding comics: The invisible art. Northampton, Mass.
  19. McCloud, S. (2006). Making comics: Storytelling secrets of comics, manga and graphic novels. William Morrow Paperbacks.
  20. OpenCV (2014). Open Source Computer Vision Library.
  21. Pandzic, I. S. and Forchheimer, R. (2003). MPEG-4 facial animation: the standard, implementation and applications. Wiley.
  22. Parikh, R., Mathai, A., Parikh, S., Sekhar, G. C., and Thomas, R. (2008). Understanding and using sensitivity, specificity and predictive values. Indian Journal of Ophthalmology, 56(1):45.
  23. Parke, F. I. and Waters, K. (1996). Computer facial animation, volume 289. AK Peters Wellesley.
  24. Pighin, F. and Lewis, J. (2006). Performance-driven facial animation. In ACM SIGGRAPH.
  25. R Core Team (2013). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0.
  26. Rodriguez, J., Perez, A., and Lozano, J. (2010). Sensitivity analysis of k-fold cross validation in prediction error estimation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(3):569-575.
  27. Saragih, J. M., Lucey, S., and Cohn, J. F. (2011). Deformable model fitting by regularized landmark meanshift. International Journal of Computer Vision, 91(2):200-215.
  28. Slater, M. (2014). Grand challenges in virtual environments. Frontiers in Robotics and AI, 1:3.
  29. von der Pahlen, J., Jimenez, J., Danvoye, E., Debevec, P., Fyffe, G., and Alexander, O. (2014). Digital ira and beyond: creating real-time photoreal digital actors. In ACM SIGGRAPH 2014 Courses, page 1. ACM.
  30. Weise, T., Bouaziz, S., Li, H., and Pauly, M. (2011). Realtime performance-based facial animation. ACM Transactions on Graphics (TOG), 30(4):77.
Paper Citation


in Harvard Style

Runa Miranda C. and Costa Orvalho V. (2016). Assessing Facial Expressions in Virtual Reality Environments. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2016) ISBN 978-989-758-175-5, pages 486-497. DOI: 10.5220/0005716604860497


in Bibtex Style

@conference{visapp16,
author={Catarina Runa Miranda and Verónica Costa Orvalho},
title={Assessing Facial Expressions in Virtual Reality Environments},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={486-497},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005716604860497},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2016)
TI - Assessing Facial Expressions in Virtual Reality Environments
SN - 978-989-758-175-5
AU - Runa Miranda C.
AU - Costa Orvalho V.
PY - 2016
SP - 486
EP - 497
DO - 10.5220/0005716604860497