Visual Attention in Edited Dynamical Images

Ulrich Ansorge, Shelley Buchinger, Christian Valuch, Aniello Raffaele Patrone, Otmar Scherzer

2014

Abstract

Edited (or cut) dynamical images, such as videos or graphical animations, are created by changing camera perspectives in imaging devices. They are abundant in everyday and working life. However, little is known about how attention is steered when viewing this material. Here we propose a simple two-step architecture of gaze control for this situation. This model relies on (1) a down-weighting of repeated information contained in optic flow within takes (between cuts), and (2) an up-weighting of repeated information between takes (across cuts). This architecture is both parsimonious and realistic. We outline the evidence supporting this architecture and identify the outstanding questions.
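The two-step weighting scheme can be illustrated with a minimal sketch. This is not the authors' implementation: the linear weighting form, the parameter names `alpha` and `beta`, and the input maps are all assumptions introduced for illustration. It takes a bottom-up saliency map, a map of how strongly each location repeats optic-flow information within the current take, and a map of how strongly each location matches features seen before the last cut.

```python
import numpy as np

def weight_saliency(saliency, flow_repetition, cross_cut_match,
                    alpha=0.5, beta=0.5):
    """Hedged sketch of the proposed two-step re-weighting.

    saliency        : 2-D array of bottom-up saliency values in [0, 1]
    flow_repetition : 2-D array in [0, 1]; strength with which each
                      location repeats optic-flow information within
                      the take (between cuts)
    cross_cut_match : 2-D array in [0, 1]; strength with which each
                      location repeats information from before the
                      last cut (between takes)
    alpha, beta     : assumed gain parameters, not from the paper
    """
    # Step 1: down-weight information repeated within the take.
    weighted = saliency * (1.0 - alpha * flow_repetition)
    # Step 2: up-weight information repeated across the cut.
    weighted = weighted * (1.0 + beta * cross_cut_match)
    # Keep the result in the saliency range.
    return np.clip(weighted, 0.0, 1.0)
```

Under this sketch, a location whose motion merely continues the take's optic flow loses priority, while a location that re-presents pre-cut content gains it, which is the qualitative behavior the abstract describes.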



Paper Citation


in Harvard Style

Ansorge U., Buchinger S., Valuch C., Patrone A. R. and Scherzer O. (2014). Visual Attention in Edited Dynamical Images. In Proceedings of the 11th International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2014) ISBN 978-989-758-046-8, pages 198-205. DOI: 10.5220/0005101901980205


in Bibtex Style

@conference{sigmap14,
author={Ulrich Ansorge and Shelley Buchinger and Christian Valuch and Aniello Raffaele Patrone and Otmar Scherzer},
title={Visual Attention in Edited Dynamical Images},
booktitle={Proceedings of the 11th International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2014)},
year={2014},
pages={198-205},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005101901980205},
isbn={978-989-758-046-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2014)
TI - Visual Attention in Edited Dynamical Images
SN - 978-989-758-046-8
AU - Ansorge U.
AU - Buchinger S.
AU - Valuch C.
AU - Patrone A.
AU - Scherzer O.
PY - 2014
SP - 198
EP - 205
DO - 10.5220/0005101901980205