Using Motion Blur to Recognize Hand Gestures in Low-light Scenes

Daisuke Sugimura, Yusuke Yasukawa, Takayuki Hamamoto

2016

Abstract

We propose a method for recognizing hand gestures in low-light scenes. In such scenes, hand gesture images are significantly degraded by heavy noise; therefore, previous methods may not work well. In this study, we exploit a single color image constructed by temporally integrating a hand gesture sequence. In general, the temporal integration of images improves the signal-to-noise (S/N) ratio; it enables us to capture sufficient appearance information about the hand gesture sequence. The key idea of this study is to exploit the motion blur that is produced when a hand gesture sequence is integrated temporally. The direction and magnitude of the motion blur are discriminative characteristics that can be used to differentiate hand gestures. To extract these motion blur features, we analyze the gradient intensity and color distributions of a single motion-blurred image. In particular, we encode these image features into self-similarity maps, which capture pairwise statistics of spatially localized features within a single image. The use of self-similarity maps allows us to represent characteristics that are invariant to individual variations in performing the same hand gesture. Using self-similarity maps, we construct a classifier for hand gesture recognition. Our experiments demonstrate the effectiveness of the proposed method.
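
As a rough illustration of the pipeline described in the abstract (temporal integration of a noisy sequence, gradient-orientation features on a spatial grid, and a self-similarity map fed to a classifier), the following is a minimal NumPy sketch, not the authors' implementation: the grid size, histogram binning, and Gaussian similarity kernel are illustrative assumptions, and the color-distribution features are omitted for brevity.

import numpy as np

def integrate_sequence(frames):
    # Temporal integration: averaging N noisy frames into one
    # motion-blurred image improves the S/N ratio by roughly sqrt(N).
    return np.mean(np.stack(frames, axis=0), axis=0)

def local_descriptors(image, grid=8, bins=9):
    # Magnitude-weighted gradient-orientation histograms on a spatial
    # grid; the direction and magnitude of the blur shape these local
    # distributions. (Grid size and binning are assumed values.)
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientation in [0, pi)
    h, w = image.shape
    ch, cw = h // grid, w // grid
    descriptors = []
    for i in range(grid):
        for j in range(grid):
            m = mag[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].ravel()
            a = ang[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, np.pi), weights=m)
            descriptors.append(hist / (hist.sum() + 1e-8))
    return np.array(descriptors)  # shape: (grid * grid, bins)

def self_similarity_map(descriptors):
    # Pairwise similarities between all spatially localized descriptors;
    # only relative structure within the image is encoded, which makes
    # the map robust to per-person appearance variations.
    diff = descriptors[:, None, :] - descriptors[None, :, :]
    return np.exp(-np.linalg.norm(diff, axis=2))  # (cells, cells)

# Usage: turn one gesture sequence into a feature vector for a standard
# classifier such as an SVM. Random stand-in data here; real input
# would be a dark, noisy grayscale gesture sequence.
frames = [np.random.rand(128, 128) for _ in range(30)]
blurred = integrate_sequence(frames)
ssm = self_similarity_map(local_descriptors(blurred))
feature = ssm[np.triu_indices_from(ssm, k=1)]  # flattened upper triangle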

References

  1. Bregonzio, M., Gong, S., and Xiang, T. (2009). Recognizing action as clouds of space-time interest points. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 1948-1955.
  2. Cipolla, R., Okamoto, Y., and Kuno, Y. (1993). Robust structure from motion using motion parallax. In Proc. IEEE Int. Conf. Computer Vision, pages 374-382.
  3. Freeman, W. T. and Roth, M. (1995). Orientation histograms for hand gesture recognition. In Proc. IEEE Int. Workshop on Automatic Face and Gesture Recognition, pages 296-301.
  4. Freeman, W. T. and Weissman, C. D. (1996). Television control by hand gestures. In Proc. IEEE Int. Workshop on Automatic Face and Gesture Recognition, pages 179-183.
  5. Ikizler-Cinbis, N. and Sclaroff, S. (2010). Object, scene and actions: Combining multiple features for human action recognition. In Proc. European Conf. Computer Vision, pages 494-507.
  6. Iwai, Y., Watanabe, K., Yagi, Y., and Yachida, M. (1996). Gesture recognition using colored gloves. In Proc. Int. Conf. Pattern Recognition, pages 662-666.
  7. Davis, J. and Shah, M. (1994). Recognizing hand gestures. In Proc. European Conf. Computer Vision, pages 331-340.
  8. Kim, T.-K., Wong, S.-F., and Cipolla, R. (2007). Tensor canonical correlation analysis for action classification. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 1-8.
  9. Lian, S., Hu, H. W., and Wang, K. (2014). Automatic user state recognition for hand gesture based low-cost television control system. IEEE Trans. Consumer Electronics, 60:107-115.
  10. Lin, H. T., Tai, Y. W., and Brown, M. S. (2011). Motion regularization for matting motion blurred objects. IEEE Trans. Pattern Analysis and Machine Intelligence, 33:2329-2336.
  11. Liu, L. and Shao, L. (2013). Synthesis of spatio-temporal descriptors for dynamic hand gesture recognition using genetic programming. In Proc. IEEE Conf. Automatic Face and Gesture Recognition, pages 1-7.
  12. Lucas, B. and Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In Proc. Int. Joint Conf. Artificial Intelligence, pages 674-679.
  13. Marin, G., Dominio, F., and Zanuttigh, P. (2014). Hand gesture recognition with Leap Motion and Kinect devices. In Proc. IEEE Int. Conf. Image Processing, pages 1565-1569.
  14. Niebles, J., Wang, H., and Fei-Fei, L. (2008). Unsupervised learning of human action categories using spatio-temporal words. Int. Journal of Computer Vision, 79:299-318.
  15. Pavlovic, V. I., Sharma, R., and Huang, T. S. (1997). Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):677-695.
  16. Pfister, T., Charles, J., and Zisserman, A. (2014). Domain-adaptive discriminative one-shot learning of gestures. In Proc. European Conf. Computer Vision, pages 814-829.
  17. Ren, Z., Yuan, J., Meng, J., and Zhang, Z. (2013). Robust part based hand gesture recognition using Kinect sensor. IEEE Trans. Multimedia, 15:1110-1120.
  18. Scovanner, P., Ali, S., and Shah, M. (2007). A 3-dimensional SIFT descriptor and its application to action recognition. In Proc. ACM Int. Conf. Multimedia, pages 357-360.
  19. Shen, X., Lin, Z., Brandt, J., and Wu, Y. (2012). Dynamic hand gesture recognition: An exemplar-based approach from motion divergence fields. Image and Vision Computing, 30:227-235.
  20. Starner, T. and Pentland, A. (1995). Real-time American Sign Language recognition from video using hidden Markov models. Technical Report TR-375, Media Lab., MIT.
  21. Tang, D., Chang, H. J., Tejani, A., and Kim, T. K. (2014). Latent regression forest: Structured estimation of 3d articulated hand posture. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 3786-3793.
  22. Wachs, J. P., Kolsch, M., Stern, H., and Edan, Y. (2011). Vision-based hand-gesture applications. Communications of the ACM, 54:60-71.
  23. Walk, S., Majer, N., Schindler, K., and Schiele, B. (2010). New features and insights for pedestrian detection. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 1030-1037.
  24. Starner, T., Weaver, J., and Pentland, A. (1998). Real-time American Sign Language recognition using desk and wearable computer based video. IEEE Trans. Pattern Analysis and Machine Intelligence, 20:1371-1375.
  25. Yamato, J., Ohya, J., and Ishii, K. (1992). Recognizing human action in time-sequential images using hidden Markov model. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 379-385.


Paper Citation


in Harvard Style

Sugimura D., Yasukawa Y. and Hamamoto T. (2016). Using Motion Blur to Recognize Hand Gestures in Low-light Scenes. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016) ISBN 978-989-758-175-5, pages 308-316. DOI: 10.5220/0005673603080316


in Bibtex Style

@conference{visapp16,
author={Daisuke Sugimura and Yusuke Yasukawa and Takayuki Hamamoto},
title={Using Motion Blur to Recognize Hand Gestures in Low-light Scenes},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={308-316},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005673603080316},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)
TI - Using Motion Blur to Recognize Hand Gestures in Low-light Scenes
SN - 978-989-758-175-5
AU - Sugimura D.
AU - Yasukawa Y.
AU - Hamamoto T.
PY - 2016
SP - 308
EP - 316
DO - 10.5220/0005673603080316