Implementation of an Intentional Vision System to Support Cognitive Architectures

Ignazio Infantino, Carmelo Lodato, Salvatore Lopes, Filippo Vella

2008

Abstract

An effective cognitive architecture has to be able to model, recognize, and interpret user intentions. The aim of the proposed framework is the development of an intentional vision system oriented to human-machine interaction. The system is able to recognize user faces and to recognize and track human postures from video cameras. It can be integrated into a cognitive software architecture and tested in several demonstrative scenarios, such as domotics or entertainment robotics. The described framework is organized into two modules mapped onto the corresponding outputs: intentional perception of faces, and intentional perception of human body movements. Moreover, a possible integration of the intentional vision module into a complete cognitive architecture is proposed.
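As an illustration of the face-recognition component the abstract describes, the following is a minimal eigenfaces-style sketch in the spirit of Turk and Pentland (1991), reference 15 below. The data, function names, and parameters are illustrative assumptions, not taken from the paper; real use would replace the synthetic vectors with flattened grayscale face crops.

```python
import numpy as np

# Minimal eigenfaces-style sketch (assumption: PCA + nearest neighbour,
# as in Turk & Pentland 1991; not the paper's actual implementation).
# Faces are synthetic vectors here; real input would be flattened images.

rng = np.random.default_rng(0)

def train_eigenfaces(faces, n_components=4):
    """faces: (n_samples, n_pixels). Returns the mean face and the
    top n_components principal directions ("eigenfaces")."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data: rows of vt are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Weights of a face in eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, gallery_weights, mean, eigenfaces):
    """Nearest-neighbour match in eigenface space; returns gallery index."""
    w = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(dists))

# Toy gallery: 3 identities, 5 noisy samples each (purely illustrative).
identities = rng.normal(size=(3, 64))
faces = np.vstack([identities[i] + 0.05 * rng.normal(size=(5, 64))
                   for i in range(3)])
labels = np.repeat(np.arange(3), 5)

mean, eigenfaces = train_eigenfaces(faces)
gallery_weights = np.array([project(f, mean, eigenfaces) for f in faces])

# A new noisy view of identity 1 should match a gallery sample of identity 1.
probe = identities[1] + 0.05 * rng.normal(size=64)
match = labels[recognize(probe, gallery_weights, mean, eigenfaces)]
print(match)  # expected: 1
```

The nearest-neighbour step works here because the noise (0.05) is small relative to the separation between the random identity vectors, so class structure survives the projection onto a few components.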

References

  1. Bauckhage, C., Hanheide, M., et al., (2004), “A cognitive vision system for action recognition in office environments”, in proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2, pp. 827-833.
  2. Berg, T.L., Berg, A.C., et al., (2004), “Names and faces in the news”, in proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2, pp. 848-854.
  3. Chella, A., Dindo, H., and Infantino, I., (2006), “People Tracking and Posture Recognition for Human-Robot Interaction”, in proc. of International Workshop on Vision Based Human-Robot Interaction, EUROS-2006.
  4. Chella, A., and Infantino, I., (2004), “Emotions in a Cognitive Architecture for Human Robot Interactions”, the 2004 AAAI Spring Symposium Series, March 2004, Stanford, California.
  5. Ekman, P., (1992), “An argument for basic emotions”, in Cognition and Emotion, vol. 6, no. 3-4, pp.169-200.
  6. Ekman, P., and Friesen, W.V., (1978), Manual for the Facial Action Coding System, Consulting Psychologists Press, Inc.
  7. Fasel, B. and Luettin, J., (2003), “Automatic Facial Expression Analysis: A Survey”, Pattern Recognition, vol. 36, no 1, pp.259-275.
  8. Kuno, Y., Ishiyama, T., et al., (1999), “Combining observations of intentional and unintentional behaviors for human-computer interaction”, in proc. of the SIGCHI conference on Human factors in computing systems, Pittsburgh, Pennsylvania, USA, pp. 238-245.
  9. Moeslund, T.B., and Granum, E., (2001), “A survey of computer vision-based human motion capture”, Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231-268.
  10. Phillips, P.J., Flynn, P.J., et al., (2005), “Overview of the face recognition grand challenge”, in proc. of Computer Vision and Pattern Recognition 2005 (CVPR 2005), pp. 947-954.
  11. Rao, R.P.N., Shon, A.P., and Meltzoff, A.N., (2007), “Imitation and Social Learning in Robots, Humans and Animals”, in Imitation and Social Learning in Robots, Humans and Animals, Cambridge University Press, pp. 217-248.
  12. Starner, T., and Pentland, A., (1995), “Visual recognition of American Sign Language using Hidden Markov Models”, in proc. of International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, pp. 189-194.
  13. Tian, Y.L., Kanade, T., and Cohn, J.F., (2001), “Recognizing action units for facial expression analysis”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 97-115.
  14. Turk, M., (2004), “Computer Vision in the Interface”, Communications of the ACM, vol. 47, no. 1.
  15. Turk, M., and Pentland, A., (1991), “Face recognition using Eigenfaces”, in proc. of Computer Vision and Pattern Recognition 1991, pp. 586-591.
  16. Viola, P., and Jones, M.J., (2004), “Robust Real-Time Face Detection”, International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154.
  17. Wren, C., Azarbayejani, A., et al., (1997), “Pfinder: Real-time tracking of the human body”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785.
  18. Zhao, W., Chellappa, R., et al., (2003), “Face recognition: A literature survey”, ACM Computing Surveys (CSUR), vol. 35, no. 4, pp. 399-458.
  19. Zhou, S.K., Chellappa, R., and Moghaddam, B., (2004), “Visual Tracking and Recognition Using Appearance-Adaptive Models in Particle Filters”, IEEE Trans. on Image Processing, vol. 13, no. 11, pp. 1491-1506.
Paper Citation


in Harvard Style

Infantino I., Lodato C., Lopes S. and Vella F. (2008). Implementation of an Intentional Vision System to Support Cognitive Architectures. In VISAPP-Robotic Perception - Volume 1: VISAPP-RoboPerc, (VISIGRAPP 2008), ISBN 978-989-8111-23-4, pages 53-62. DOI: 10.5220/0002341100530062


in Bibtex Style

@conference{visapp-roboperc08,
author={Ignazio Infantino and Carmelo Lodato and Salvatore Lopes and Filippo Vella},
title={Implementation of an Intentional Vision System to Support Cognitive Architectures},
booktitle={VISAPP-Robotic Perception - Volume 1: VISAPP-RoboPerc, (VISIGRAPP 2008)},
year={2008},
pages={53-62},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002341100530062},
isbn={978-989-8111-23-4},
}


in EndNote Style

TY - CONF
JO - VISAPP-Robotic Perception - Volume 1: VISAPP-RoboPerc, (VISIGRAPP 2008)
TI - Implementation of an Intentional Vision System to Support Cognitive Architectures
SN - 978-989-8111-23-4
AU - Infantino I.
AU - Lodato C.
AU - Lopes S.
AU - Vella F.
PY - 2008
SP - 53
EP - 62
DO - 10.5220/0002341100530062