Saliency Sandbox - Bottom-up Saliency Framework

David Geisler, Wolfgang Fuhl, Thiago Santini, Enkelejda Kasneci

2017

Abstract

Saliency maps are used to predict the visual stimulus arising from a certain region in a scene. Most approaches to calculating the saliency of a scene can be divided into three consecutive steps: extraction of feature maps, calculation of activation maps, and combination of the activation maps. In the past two decades, several new saliency estimation approaches have emerged. However, most of these approaches are not freely available as source code, requiring researchers and application developers to reimplement them. Others are freely available but were implemented on different platforms. As a result, employing, evaluating, and combining existing approaches is time-consuming, costly, and even error-prone (e.g., when reimplementation is required). In this paper, we introduce the Saliency Sandbox, a framework for the fast implementation and prototyping of saliency maps, whose flexible architecture allows new saliency maps to be designed by combining existing and new approaches such as Itti & Koch, GBVS, Boolean Maps, and many more. The Saliency Sandbox comes with a large set of implemented feature extractors as well as some of the most popular activation approaches. The framework core is written in C++; nonetheless, interfaces for Matlab and Simulink allow for fast prototyping and the integration of already existing implementations. Our source code is available at: www.ti.uni-tuebingen.de/perception.
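
To make the three-step architecture concrete, the following minimal C++ sketch composes feature extraction, activation, and combination into a single pipeline. All types and names here (Map, FeatureExtractor, Activation, combine, saliency) are hypothetical placeholders chosen for illustration; they are not the actual Saliency Sandbox API.

// A minimal sketch of the three-step pipeline: feature extraction,
// activation, and combination. All names are hypothetical, not the
// Saliency Sandbox API.
#include <cmath>
#include <cstddef>
#include <functional>
#include <iostream>
#include <utility>
#include <vector>

// Single-channel image stored as a flat float buffer.
struct Map {
    int width = 0, height = 0;
    std::vector<float> data;
    Map(int w, int h) : width(w), height(h), data(w * h, 0.0f) {}
};

using FeatureExtractor = std::function<Map(const Map&)>;  // step 1
using Activation       = std::function<Map(const Map&)>;  // step 2

// Step 3: combine activation maps by per-pixel averaging.
Map combine(const std::vector<Map>& maps) {
    Map out(maps.front().width, maps.front().height);
    for (const Map& m : maps)
        for (std::size_t i = 0; i < out.data.size(); ++i)
            out.data[i] += m.data[i] / maps.size();
    return out;
}

// Full pipeline: run each (extractor, activation) pair, then combine.
Map saliency(const Map& input,
             const std::vector<std::pair<FeatureExtractor, Activation>>& channels) {
    std::vector<Map> activations;
    for (const auto& [extract, activate] : channels)
        activations.push_back(activate(extract(input)));
    return combine(activations);
}

int main() {
    // Identity "feature" and a contrast-like activation as placeholders.
    FeatureExtractor intensity = [](const Map& m) { return m; };
    Activation contrast = [](const Map& m) {
        float mean = 0.0f;
        for (float v : m.data) mean += v / m.data.size();
        Map out(m.width, m.height);
        for (std::size_t i = 0; i < m.data.size(); ++i)
            out.data[i] = std::abs(m.data[i] - mean);  // deviation from mean
        return out;
    };

    Map frame(4, 4);
    frame.data[5] = 1.0f;  // a single bright pixel should pop out
    Map s = saliency(frame, {{intensity, contrast}});
    std::cout << "Saliency at bright pixel: " << s.data[5] << "\n";  // 0.9375
    return 0;
}

Modeling each step as a std::function is one way to obtain the kind of combinability the abstract describes: new saliency maps arise simply from pairing different feature extractors with different activations.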

References

  1. Achanta, R., Hemami, S., Estrada, F., and Süsstrunk, S. (2009). Frequency-tuned salient region detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), pages 1597-1604.
  2. Braunagel, C., Geisler, D., Stolzmann, W., Rosenstiel, W., and Kasneci, E. (2016). On the necessity of adaptive eye movement classification in conditionally automated driving scenarios. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, pages 19-26. ACM.
  3. Derrington, A. M., Krauskopf, J., and Lennie, P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. The Journal of Physiology, 357(1):241-265.
  4. Godbehere, A. B., Matsukawa, A., and Goldberg, K. (2012). Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. In 2012 American Control Conference (ACC), pages 4305-4312. IEEE.
  5. Harel, J., Koch, C., and Perona, P. (2006a). Graph-based visual saliency. In Advances in neural information processing systems, pages 545-552.
  6. Harel, J., Koch, C., and Perona, P. (2006b). A saliency implementation in Matlab. URL: http://www.klab.caltech.edu/~harel/share/gbvs.php.
  7. Hou, X. and Zhang, L. (2007). Saliency detection: A spectral residual approach. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE.
  8. Itti, L. (2004). The iLab Neuromorphic Vision C++ Toolkit: Free tools for the next generation of vision algorithms. The Neuromorphic Engineer, 1(1):10.
  9. Itti, L. and Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision research, 40(10):1489-1506.
  10. ITU (2011). Studio encoding parameters of digital television for standard 4:3 and wide screen 16:9 aspect ratios.
  11. KaewTraKulPong, P. and Bowden, R. (2002). An improved adaptive background mixture model for real-time tracking with shadow detection. In Video-Based Surveillance Systems, pages 135-144. Springer.
  12. Kasneci, E., Kasneci, G., Kübler, T. C., and Rosenstiel, W. (2015). Online recognition of fixations, saccades, and smooth pursuits for automated analysis of traffic hazard perception. In Artificial Neural Networks, pages 411-434. Springer.
  13. Kübler, T. C., Kasneci, E., and Rosenstiel, W. (2014). Subsmatch: Scanpath similarity in dynamic scenes based on subsequence frequencies. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 319-322. ACM.
  14. Patrone, A. R., Valuch, C., Ansorge, U., and Scherzer, O. (2016). Dynamical optical flow of saliency maps for predicting visual attention. arXiv preprint arXiv:1606.07324.
  15. Schauerte, B. and Stiefelhagen, R. (2012). Quaternion-based spectral saliency detection for eye fixation prediction. In Computer Vision - ECCV 2012, pages 116-129. Springer.
  16. Tafaj, E., Kasneci, G., Rosenstiel, W., and Bogdan, M. (2012). Bayesian online clustering of eye movement data. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '12, pages 285-288. ACM.
  17. Treisman, A. M. and Gelade, G. (1980). A feature-integration theory of attention. Cognitive psychology, 12(1):97-136.
  18. Von Goethe, J. W. (1840). Theory of colours, volume 3. MIT Press.
  19. Walther, D. and Koch, C. (2006). Modeling attention to salient proto-objects. Neural networks, 19(9):1395-1407.
  20. Zhang, J. and Sclaroff, S. (2013). Saliency detection: A boolean map approach. In Proceedings of the IEEE International Conference on Computer Vision, pages 153-160.
  21. Zivkovic, Z. (2004). Improved adaptive Gaussian mixture model for background subtraction. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), volume 2, pages 28-31.


Paper Citation


in Harvard Style

Geisler D., Fuhl W., Santini T. and Kasneci E. (2017). Saliency Sandbox - Bottom-up Saliency Framework. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP (VISIGRAPP 2017), ISBN 978-989-758-225-7, pages 657-664. DOI: 10.5220/0006272306570664


in Bibtex Style

@conference{visapp17,
author={David Geisler and Wolfgang Fuhl and Thiago Santini and Enkelejda Kasneci},
title={Saliency Sandbox - Bottom-up Saliency Framework},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={657-664},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006272306570664},
isbn={978-989-758-225-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)
TI - Saliency Sandbox - Bottom-up Saliency Framework
SN - 978-989-758-225-7
AU - Geisler D.
AU - Fuhl W.
AU - Santini T.
AU - Kasneci E.
PY - 2017
SP - 657
EP - 664
DO - 10.5220/0006272306570664