VIRTUAL REALITY AND AFFECTIVE COMPUTING TECHNIQUES FOR FACE-TO-FACE COMMUNICATION

Hamzi Hamza, Paul Richard, Aymeric Suteau, Mehdi Saleh

Abstract

We present a multi-modal affective virtual environment (VE) for job interview training. The proposed platform aims to support real-time emotion-based simulations between an embodied conversational agent (ECA) and a human. The first goal is to train candidates (students, job hunters, etc.) to better master their emotional states and behavioral skills. The users' emotional and behavioral states will be assessed using different human-machine interfaces and biofeedback sensors. Collected data will be processed in real time by a behavioral engine. A preliminary experiment was carried out to analyze the correspondence between the users' perceived emotional states and the collected data. Participants were instructed to look at a series of sixty IAPS pictures and rate each picture on the following dimensions: joy, anger, surprise, disgust, fear and sadness.



Paper Citation


in Harvard Style

Hamza H., Richard P., Suteau A. and Saleh M. (2011). VIRTUAL REALITY AND AFFECTIVE COMPUTING TECHNIQUES FOR FACE-TO-FACE COMMUNICATION. In Proceedings of the International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2011) ISBN 978-989-8425-45-4, pages 357-360. DOI: 10.5220/0003377203570360


in Bibtex Style

@conference{grapp11,
author={Hamzi Hamza and Paul Richard and Aymeric Suteau and Mehdi Saleh},
title={VIRTUAL REALITY AND AFFECTIVE COMPUTING TECHNIQUES FOR FACE-TO-FACE COMMUNICATION},
booktitle={Proceedings of the International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2011)},
year={2011},
pages={357-360},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003377203570360},
isbn={978-989-8425-45-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2011)
TI - VIRTUAL REALITY AND AFFECTIVE COMPUTING TECHNIQUES FOR FACE-TO-FACE COMMUNICATION
SN - 978-989-8425-45-4
AU - Hamza H.
AU - Richard P.
AU - Suteau A.
AU - Saleh M.
PY - 2011
SP - 357
EP - 360
DO - 10.5220/0003377203570360