Low Bandwidth Video Streaming using FACS, Facial Expression and Animation Techniques

Dinesh Kumar, Jito Vanualailai

2016

Abstract

In this paper we describe an easy-to-use real-time 3D facial expression and animation system that takes the creation of individual facial expressions down to the atomic level. That is, instead of generating and recording known facial expressions, we propose a mechanism that allows us to create and store each atomic facial distortion; some of these singular distortions can then be combined to create meaningful expressions. FACS Action Units (AUs) are one such technique: each AU describes the simplest visible movement, one that cannot be decomposed into more basic ones. We use this as the basis for creating the atomic facial distortions. The Waters muscle-based facial model has been used and extended to allow the user to calibrate and record each facial deformation described by the FACS AUs. The user can then create any facial expression simply by stating a series of AUs and their degrees of activation in a controlled fashion. These features all form part of our Facial Animation System (FAS). The FAS is implemented in such a way that it can also serve as a low-bandwidth video streaming player: a real-time facial animation player driven only by FACS AUs transmitted as plain text over TCP sockets.
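The low-bandwidth streaming idea described above can be sketched in a few lines of Python: each animation frame is a handful of (AU, activation) pairs serialized as plain text and sent over a TCP socket. The wire format below (`AU<number>:<activation>` pairs, semicolon-separated, one frame per line) is a hypothetical illustration, not the format used by the authors' FAS.

```python
import socket
import threading

def encode_frame(aus):
    """Encode {AU number: activation in [0, 1]} as one text line."""
    return ";".join(f"AU{n}:{a:.2f}" for n, a in sorted(aus.items())) + "\n"

def decode_frame(line):
    """Parse a text line back into {AU number: activation}."""
    result = {}
    for token in line.strip().split(";"):
        name, value = token.split(":")
        result[int(name[2:])] = float(value)
    return result

def stream_demo():
    """Send one frame over a loopback TCP socket and decode it."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))      # OS-assigned free port
    server.listen(1)
    port = server.getsockname()[1]

    frame = {1: 0.8, 12: 0.5}          # e.g. inner brow raiser + lip corner puller

    def sender():
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(encode_frame(frame).encode())

    t = threading.Thread(target=sender)
    t.start()
    conn, _ = server.accept()
    line = conn.makefile().readline()  # one frame per line of text
    t.join()
    conn.close()
    server.close()
    return decode_frame(line)
```

A frame like this is a few dozen bytes, which makes plain the bandwidth advantage over sending encoded video: only the AU activations travel over the network, and the receiver's animation player reconstructs the face locally.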



Paper Citation


in Harvard Style

Kumar, D. and Vanualailai, J. (2016). Low Bandwidth Video Streaming using FACS, Facial Expression and Animation Techniques. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2016), ISBN 978-989-758-175-5, pages 226-235. DOI: 10.5220/0005718202240233


in Bibtex Style

@conference{grapp16,
author={Dinesh Kumar and Jito Vanualailai},
title={Low Bandwidth Video Streaming using FACS, Facial Expression and Animation Techniques},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2016)},
year={2016},
pages={226-235},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005718202240233},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2016)
TI - Low Bandwidth Video Streaming using FACS, Facial Expression and Animation Techniques
SN - 978-989-758-175-5
AU - Kumar D.
AU - Vanualailai J.
PY - 2016
SP - 226
EP - 235
DO - 10.5220/0005718202240233
ER -