Change Detection in Crowded Underwater Scenes - Via an Extended Gaussian Switch Model Combined with a Flux Tensor Pre-segmentation

Martin Radolko, Fahimeh Farhadifard, Uwe von Lukas

Abstract

In this paper, a new approach for change detection in videos of crowded scenes is proposed: the extended Gaussian Switch Model in combination with a Flux Tensor pre-segmentation. The extended Gaussian Switch Model improves on the previous method by combining it with the idea of the Mixture of Gaussians approach and an intelligent update scheme, which makes it possible to create accurate background models even for difficult scenes. Furthermore, a foreground model is integrated and delivers valuable information to the segmentation process. To deal with very crowded areas of the scene – where the background is not visible most of the time – we use the Flux Tensor to create a first coarse segmentation of the current frame and only update areas that are almost motionless and can therefore be classified as background with high certainty. To ensure the spatial coherence of the final segmentations, the N2Cut approach is added as a spatial model after the background subtraction step. The evaluation was done on an underwater change detection dataset and shows significant improvements over previous methods, especially in crowded scenes.
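The selective update idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names are hypothetical, and the motion cue below uses only the temporal-derivative term (a drastic simplification of the full Flux Tensor trace). The intent is only to show the gating logic: pixels the coarse motion mask flags as moving are excluded from the background update, so crowded regions do not corrupt the model.

```python
import numpy as np

def flux_motion_mask(prev, curr, nxt, thresh=0.01):
    """Coarse motion mask: a squared central temporal derivative
    stands in for the Flux Tensor response (simplified)."""
    dt = (nxt - prev) / 2.0
    return dt ** 2 > thresh          # True where motion is detected

def update_background(bg, frame, motion, rate=0.05):
    """Selective update: blend the new frame into the background
    model only at pixels the motion mask marks as motionless."""
    out = bg.copy()
    still = ~motion
    out[still] = (1 - rate) * bg[still] + rate * frame[still]
    return out

def segment(bg, frame, tau=0.1):
    """Background subtraction: pixels far from the model are foreground."""
    return np.abs(frame - bg) > tau
```

A spatial model (such as the N2Cut step in the paper) would then smooth the per-pixel foreground mask returned by `segment`.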

References

  1. Barnich, O. and Van Droogenbroeck, M. (2011). ViBe: A universal background subtraction algorithm for video sequences. IEEE Transactions on Image Processing, 20(6):1709-1724.
  2. Benfold, B. and Reid, I. (2011). Stable multi-target tracking in real-time surveillance video. In CVPR, pages 3457-3464.
  3. Bianco, S., Ciocca, G., and Schettini, R. (2015). How far can you get by combining change detection algorithms? CoRR, abs/1505.02921.
  4. Bucak, S., Gunsel, B., and Guersoy, O. (2007). Incremental nonnegative matrix factorization for background modeling in surveillance video. In Signal Processing and Communications Applications, 2007. SIU 2007. IEEE 15th, pages 1-4.
  5. Bunyak, F., Palaniappan, K., Nath, S. K., and Seetharaman, G. (2007). Flux tensor constrained geodesic active contours with sensor fusion for persistent object tracking. J. Multimedia, 2(4):20-33.
  6. Gardos, T. and Monaco, J. (1999). Encoding video images using foreground/background segmentation. US Patent 5,915,044.
  7. Hu, Z., Wang, Y., Tian, Y., and Huang, T. (2011). Selective eigenbackgrounds method for background subtraction in crowed scenes. In Image Processing (ICIP), 2011 18th IEEE International Conference on, pages 3277-3280.
  8. KaewTraKulPong, P. and Bowden, R. (2002). An improved adaptive background mixture model for realtime tracking with shadow detection. In Video-Based Surveillance Systems, pages 135-144. Springer.
  9. Marghes, C., Bouwmans, T., and Vasiu, R. (2012). Background modeling and foreground detection via a reconstructive and discriminative subspace learning approach. In Image Processing, Computer Vision, and Pattern Recognition (IPCV'12), The 2012 International Conference on, volume 02, pages 106-112.
  10. Mignotte, M. (2010). A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation. IEEE Transactions on Image Processing, 19(6):1610-1624.
  11. Nath, S. and Palaniappan, K. (2005). Adaptive robust structure tensors for orientation estimation and image segmentation. Lecture Notes in Computer Science (ISVC), 3804:445-453.
  12. Radolko, M., Farhadifard, F., Gutzeit, E., and von Lukas, U. F. (2015). Real time video segmentation optimization with a modified normalized cut. In Image and Signal Processing and Analysis (ISPA), 2015 9th International Symposium on, pages 31-36.
  13. Radolko, M., Farhadifard, F., Gutzeit, E., and von Lukas, U. F. (2016). Dataset on underwater change detection. In OCEANS 2016 - MONTEREY, pages 1-8.
  14. Radolko, M. and Gutzeit, E. (2015). Video segmentation via a gaussian switch background-model and higher order markov random fields. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1, pages 537-544.
  15. Ridder, C., Munkelt, O., and Kirchner, H. (1995). Adaptive background estimation and foreground detection using kalman-filtering. In Proceedings of International Conference on recent Advances in Mechatronics, pages 193-199.
  16. Schindler, K. and Wang, H. (2006). Smooth foreground-background segmentation for video processing. In Proceedings of the 7th Asian Conference on Computer Vision - Volume Part II, ACCV'06, pages 581-590, Berlin, Heidelberg. Springer-Verlag.
  17. Shelley, A. J. and Seed, N. L. (1993). Approaches to static background identification and removal. In Image Processing for Transport Applications, IEE Colloquium on, pages 6/1-6/4.
  18. St-Charles, P. L., Bilodeau, G. A., and Bergevin, R. (2015). Subsense: A universal change detection method with local adaptive sensitivity. IEEE Transactions on Image Processing, 24(1):359-373.
  19. Stauffer, C. and Grimson, W. (1999). Adaptive background mixture models for real-time tracking. In Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Vol. Two, pages 246-252. IEEE Computer Society Press.
  20. Toyama, K., Krumm, J., Brumitt, B., and Meyers, B. (1999). Wallflower: Principles and practice of background maintenance. In Seventh International Conference on Computer Vision, pages 255-261. IEEE Computer Society Press.
  21. Wang, R., Bunyak, F., Seetharaman, G., and Palaniappan, K. (2014). Static and moving object detection using flux tensor with split gaussian models. In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 420-424.
  22. Warfield, S. K., Zou, K. H., and Wells, W. M. (2004). Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation. IEEE Transactions on Medical Imaging, 23:903-921.
  23. Wren, C., Azarbayejani, A., Darrell, T., and Pentland, A. (1997). Pfinder: Real-time tracking of the human body. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:780-785.
  24. Zivkovic, Z. (2004). Improved adaptive gaussian mixture model for background subtraction. In Proceedings of the Pattern Recognition, 17th International Conference on (ICPR'04) Volume 2 - Volume 02, ICPR '04, pages 28-31, Washington, DC, USA. IEEE Computer Society.
  25. Zivkovic, Z. and Heijden, F. (2006). Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recogn. Lett., 27(7):773-780.


Paper Citation


in Harvard Style

Radolko M., Farhadifard F. and Lukas U. (2017). Change Detection in Crowded Underwater Scenes - Via an Extended Gaussian Switch Model Combined with a Flux Tensor Pre-segmentation. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017) ISBN 978-989-758-225-7, pages 405-415. DOI: 10.5220/0006258504050415


in Bibtex Style

@conference{visapp17,
author={Martin Radolko and Fahimeh Farhadifard and Uwe von Lukas},
title={Change Detection in Crowded Underwater Scenes - Via an Extended Gaussian Switch Model Combined with a Flux Tensor Pre-segmentation},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={405-415},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006258504050415},
isbn={978-989-758-225-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)
TI - Change Detection in Crowded Underwater Scenes - Via an Extended Gaussian Switch Model Combined with a Flux Tensor Pre-segmentation
SN - 978-989-758-225-7
AU - Radolko M.
AU - Farhadifard F.
AU - Lukas U.
PY - 2017
SP - 405
EP - 415
DO - 10.5220/0006258504050415