Pose Recognition in Indoor Environments using a Fisheye Camera and a Parametric Human Model

K. K. Delibasis, V. P. Plagianakos, I. Maglogiannis

2014

Abstract

In this paper, we present a system that uses computer vision techniques and a deformable 3D human model to recognize the posture of a monitored person, given the human silhouette segmented from the background. Video data are acquired indoors from a fixed fish-eye camera placed in the living environment. The implemented 3D human model operates in conjunction with a fish-eye camera model, allowing the computation of the person's real position in 3D space and, consequently, the recognition of the monitored person's posture. The paper discusses the details of the human model and the fish-eye camera model, as well as the posture recognition methodology. Initial results are presented for a small number of video sequences of walking or standing humans.
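The abstract's coupling of a 3D human model with a fish-eye camera model rests on projecting 3D world points through a fish-eye lens. The paper's actual calibrated model (see reference 21 below) is not reproduced here; the following is only a minimal sketch of the generic equidistant fisheye projection r = f·θ, with the function name `fisheye_project` and the parameters `f`, `cx`, `cy` chosen for illustration.

```python
import math

def fisheye_project(point_3d, f, cx, cy):
    """Map a 3D point in camera coordinates (z along the optical axis)
    to pixel coordinates under the equidistant fisheye model r = f * theta."""
    x, y, z = point_3d
    theta = math.atan2(math.hypot(x, y), z)  # angle from the optical axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    r = f * theta                            # radial distance on the image plane
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))

# A point on the optical axis projects to the image centre:
print(fisheye_project((0.0, 0.0, 1.0), 300.0, 320.0, 240.0))  # (320.0, 240.0)
```

Inverting such a mapping along a viewing ray, together with the constraint that the person stands on the floor plane, is what makes the recovery of the real 3D position from a single fixed camera possible.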

References

  1. Willems J., Debard G., Bonroy B., Vanrumste B. and Goedemé T., “How to detect human fall in video? An overview”, In Proceedings of the Positioning and Context-Awareness International Conference (Antwerp, Belgium, 28 May, 2009), POCA '09.
  2. Cucchiara, R., Grana, C., Piccardi, M., and Prati, A., “Detecting moving objects, ghosts, and shadows in video streams”, IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 10, (2003), 1337-1342.
  3. McFarlane, N. and Schofield, C., “Segmentation and tracking of piglets in images”, Machine Vision and Applications 8, 3, (May 1995), 187-193.
  4. Wren, C., Azarbayejani, A., Darrell, T., and Pentland, A. P., “Pfinder: real-time tracking of the human body”, IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 7, (1997), 780-785.
  5. Stauffer C., and Grimson W., “Adaptive background mixture models for real-time tracking”. In Proceedings of the Conference on Computer Vision and Pattern Recognition (Ft. Collins, USA, June 23-25, 1999), CVPR '99. IEEE Computer Society, New York, NY, pp. 246-252.
  6. Cheng F. C., Huang S. C. and Ruan S. J., “Illumination-Sensitive Background Modeling Approach for Accurate Moving Object Detection”, IEEE Trans. on Broadcasting, vol. 57, no. 4, pp. 794-801, 2011.
  7. Christodoulidis A., Delibasis K., Maglogiannis I., “Near real-time human silhouette and movement detection in indoor environments using fixed cameras”, in The 5th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Heraklion, Crete, Greece, 2012.
  8. Delamarre, Q., Faugeras, O., “3D articulated models and multiview tracking with physical forces”, Computer Vision and Image Understanding (CVIU) 81 (3) (2001) 328-357.
  9. Kehl, R., Van Gool, L., “Markerless tracking of complex human motions from multiple views”, Computer Vision and Image Understanding (CVIU) 104 (2-3) (2006) 190-209.
  10. Barron, C., Kakadiaris, I., “Estimating anthropometry and pose from a single uncalibrated image”, Computer Vision and Image Understanding (CVIU) 81 (3) (2001) 269-284.
  11. Bregler, C., Malik, J., Pullen, K., “Twist based acquisition and tracking of animal and human kinematics”, International Journal of Computer Vision 56 (3) (2004) 179-194.
  12. Taylor, C., “Reconstruction of articulated objects from point correspondences in a single uncalibrated image”, Computer Vision and Image Understanding (CVIU) 80 (3) (2000) 349-363.
  13. Liebowitz, D., Carlsson, S., “Uncalibrated motion capture exploiting articulated structure constraints”, International Journal of Computer Vision 51 (3) (2003) 171-187.
  14. Poppe, R., “Vision-based human motion analysis: An overview”, Computer Vision and Image Understanding, 108 (2007) 4-18.
  15. Kemmotsu, K., Tomonaka, T., Shiotani, S., Koketsu, Y., and Iehara, M., “Recognizing human behaviors with vision sensors in a Network Robot System”, IEEE Int. Conf. on Robotics and Automation, pp. 1274-1279, 2006.
  16. Zhou, Z., Chen, X., Chung, Y., He, Z., Han, T. X. and Keller, J., “Activity Analysis, Summarization and Visualization for Indoor Human Activity Monitoring”, IEEE Trans. on Circuits and Systems for Video Technology, Vol. 18, No. 11, pp. 1489-1498, 2008.
  17. Saito M., Kitaguchi K., Kimura G. and Hashimoto M., “Human Detection from Fish-eye Image by Bayesian Combination of Probabilistic Appearance Models”, IEEE International Conference on Systems Man and Cybernetics (SMC), 2010, pp. 243-248.
  18. Li H. and Hartley R., “Plane-Based Calibration and Autocalibration of a Fish-Eye Camera”, P.J. Narayanan et al. (Eds.): ACCV 2006, LNCS 3851, pp. 21-30, 2006.
  19. Basu A., Licardie S., “Modeling fish-eye lenses”, Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993.
  20. Shah S. and Aggarwal J., “Intrinsic parameter calibration procedure for a high distortion fish-eye lens camera with distortion model and accuracy estimation”, Pattern Recognition 29 (11), 1775-1788, 1996.
  21. Delibasis K. K., Goudas T., Plagianakos V. P. and Maglogiannis I., Fisheye Camera Modeling for Human Segmentation Refinement in Indoor Videos, in The 6th ACM International Conference on PErvasive Technologies Related to Assistive Environments, PETRA 2013.
  22. Max, N., “Computer Graphics Distortion for IMAX and OMNIMAX Projection”, Proc. Nicograph '83, Dec. 1983, p. 137.
  23. Greene, N., “Environment Mapping and Other Applications of World Projections”, IEEE Computer Graphics and Applications, November 1986, vol. 6 (11), p. 21.
  24. Micusik, B. and Pajdla, T., “Structure from Motion with Wide Circular Field of View Cameras”, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI 28(7), 2006, pp. 1-15.
  25. http://www.3dmodelfree.com/models/20966-0.htm.
  26. Goldberg D., “Genetic Algorithms in Search, Optimization, and Machine Learning”, Addison Wesley, 1989.


Paper Citation


in Harvard Style

Delibasis K., Plagianakos V. and Maglogiannis I. (2014). Pose Recognition in Indoor Environments using a Fisheye Camera and a Parametric Human Model. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014) ISBN 978-989-758-004-8, pages 470-477. DOI: 10.5220/0004693704700477


in Bibtex Style

@conference{visapp14,
author={K. K. Delibasis and V. P. Plagianakos and I. Maglogiannis},
title={Pose Recognition in Indoor Environments using a Fisheye Camera and a Parametric Human Model},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={470-477},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004693704700477},
isbn={978-989-758-004-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)
TI - Pose Recognition in Indoor Environments using a Fisheye Camera and a Parametric Human Model
SN - 978-989-758-004-8
AU - Delibasis K.
AU - Plagianakos V.
AU - Maglogiannis I.
PY - 2014
SP - 470
EP - 477
DO - 10.5220/0004693704700477