FACIAL POSE AND ACTION TRACKING USING SIFT

B. H. Pawan Prasad, R. Aravind

2011

Abstract

In this paper, we describe a robust method for estimating head pose and facial actions in uncalibrated monocular video sequences. Unlike most other methods, we do not assume knowledge of the camera parameters. The face is modelled in 3D using the Candide-3 face model, and a simple graphical user interface is used to initialize the tracking algorithm. Facial feature points are tracked with a novel SIFT-based point tracking algorithm. The head pose is estimated with the POSIT algorithm in a RANSAC framework, and the animation parameter vector is computed by an optimization procedure. The proposed algorithm is tested on two standard data sets, and the qualitative and quantitative analyses are comparable to those of competing methods reported in the literature. Experimental results confirm that the proposed system accurately estimates the pose and the facial actions. The system can also be used for facial expression classification and facial animation.
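The pipeline summarized above (SIFT-based point tracking followed by RANSAC-based pose estimation from 2D-3D correspondences) can be approximated with off-the-shelf tools. The sketch below is not the authors' implementation: it uses OpenCV's SIFT matcher for frame-to-frame point tracking, substitutes cv2.solvePnPRansac for the paper's POSIT-inside-RANSAC step, and guesses a camera matrix from the frame size because the video is uncalibrated. The inputs model_pts_3d and image_pts_2d are assumed to be corresponding 3D model vertices (e.g. Candide-3 points) and their tracked image positions.

# Minimal sketch, not the authors' method: SIFT matching across frames and
# a RANSAC pose estimate with OpenCV (solvePnPRansac stands in for POSIT).
import cv2
import numpy as np

def match_sift(prev_gray, curr_gray, ratio=0.75):
    """Track feature points by matching SIFT descriptors between two frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts_prev = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_curr = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_prev, pts_curr

def estimate_pose(model_pts_3d, image_pts_2d, frame_size):
    """Estimate head rotation/translation from 2D-3D correspondences in a
    RANSAC loop; the intrinsics below are a crude guess (uncalibrated video)."""
    h, w = frame_size
    f = float(max(h, w))                      # rough focal-length assumption
    K = np.array([[f, 0, w / 2.0],
                  [0, f, h / 2.0],
                  [0, 0, 1.0]], dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(model_pts_3d), np.float32(image_pts_2d),
        K, distCoeffs=None, reprojectionError=4.0)
    return ok, rvec, tvec, inliers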

References

  1. Aggarwal, G., Veeraraghavan, A., and Chellappa, R. (2005). 3D facial pose tracking in uncalibrated videos. PRMI, pages 515-520.
  2. Ahlberg, J. (2001). Candide-3 - an updated parameterised face model. Report No. LiTH-ISY.
  3. Bradley, C. (2007). The Algebra of Geometry: Cartesian, Areal and Projective Co-ordinates. Highperception Ltd., Bath.
  4. Brox, T., Rosenhahn, B., Gall, J., and Cremers, D. (2010). Combined region and motion-based 3D tracking of rigid and articulated objects. PAMI, 32(3):402.
  5. Choi, S. and Kim, D. (2008). Robust head tracking using 3D ellipsoidal head model in particle filter. Pattern Recognition, 41(9):2901-2915.
  6. De Berg, M., Cheong, O., Van Kreveld, M., and Overmars, M. (2008). Computational geometry: Algorithms and applications. Springer.
  7. DeMenthon, D. and Davis, L. (1995). Model-based object pose in 25 lines of code. IJCV, 15(1):123-141.
  8. Dornaika, F. and Ahlberg, J. (2004). Face and facial feature tracking using deformable models. IJIG, 4(3):499.
  9. Dornaika, F. and Ahlberg, J. (2006). Fitting 3D face models for tracking and active appearance model training. Image and Vision Computing, 24(9):1010-1024.
  10. Edelsbrunner, H. (2001). Geometry and topology for mesh generation. Cambridge Univ. Press.
  11. Ekman, P. and Friesen, W. (1977). Facial Action Coding System. Consulting Psychology Press.
  12. Fischler, M. and Bolles, R. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395.
  13. Multimedia Understanding Group (2007). The MUG Facial Expression Database. http://mug.ee.auth.gr/fed/.
  14. Jang, J. and Kanade, T. (2008). Robust 3D head tracking by online feature registration. In 8th IEEE International Conference on Automatic Face and Gesture Recognition.
  15. La Cascia, M., Sclaroff, S., and Athitsos, V. (2000). Fast, reliable head tracking under varying illumination: an approach based on registration of texture-mapped 3D models. PAMI, 22(4):322-336.
  16. Levenberg, K. (1944). A method for the solution of certain nonlinear problems in least-squares. The Quarterly of Applied Mathematics, 2:164-168.
  17. Lowe, D. (2004). Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110.
  18. Maronna, R., Martin, R., and Yohai, V. (2006). Robust statistics. Wiley New York.
  19. Marquardt, D. (1970). Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation. Technometrics, 12(3):591-612.
  20. Pawan, P. and Aravind, R. (2010). A Robust Head Pose Estimation System in Uncalibrated Monocular Videos. In Indian Conference on Computer Vision Graphics and Image Processing. ACM.
  21. Terissi, L. and Gómez, J. (2010). 3D Head Pose and Facial Expression Tracking using a Single Camera. Journal of Universal Computer Science, 16(6):903-920.
  22. Vatahska, T., Bennewitz, M., and Behnke, S. (2009). Feature-based head pose estimation from images. In 7th IEEE-RAS International Conference on Humanoid Robots, pages 330-335. IEEE.
  23. Xiao, J., Moriyama, T., Kanade, T., and Cohn, J. (2003). Robust full-motion recovery of head by dynamic templates and re-registration techniques. International Journal of Imaging Systems and Technology, 13(1):85-94.


Paper Citation


in Harvard Style

H. Pawan Prasad B. and Aravind R. (2011). FACIAL POSE AND ACTION TRACKING USING SIFT. In Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2011), ISBN 978-989-8425-47-8, pages 614-619. DOI: 10.5220/0003362606140619


in BibTeX Style

@conference{visapp11,
author={B. H. Pawan Prasad and R. Aravind},
title={FACIAL POSE AND ACTION TRACKING USING SIFT},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2011)},
year={2011},
pages={614-619},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003362606140619},
isbn={978-989-8425-47-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2011)
TI - FACIAL POSE AND ACTION TRACKING USING SIFT
SN - 978-989-8425-47-8
AU - H. Pawan Prasad B.
AU - Aravind R.
PY - 2011
SP - 614
EP - 619
DO - 10.5220/0003362606140619