Dual-mode Detection for Foreground Segmentation in Low-contrast Video Images

Du-Ming Tsai, Wei-Yao Chiu

2013

Abstract

In video surveillance, the detection of foreground objects in an image sequence from a still camera is critical for object tracking, activity recognition, and behavior understanding. In this paper, a dual-mode scheme for foreground segmentation is proposed. The background at each pixel is represented by the mode, i.e. the most frequently occurring gray level, of the observed consecutive image frames. In order to accommodate dynamic changes of the background, the proposed method uses a dual-mode model for background representation. The dual-mode model can represent the two main states of the background and detect a more complete silhouette of the foreground object in a dynamic background. The proposed method can promptly calculate the exact gray-level mode of individual pixels in image sequences by simply dropping the oldest image frame and adding the current image in the observed period. A comparative evaluation of foreground segmentation methods is performed on Microsoft's Wallflower dataset. The results show that the proposed method can quickly respond to illumination changes and reliably extract foreground objects in a low-contrast background.
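The mode-based background update described in the abstract can be illustrated with a minimal sketch: a per-pixel histogram over the last N frames is updated incrementally by dropping the oldest frame's counts and adding the current frame's, and the per-pixel mode serves as the background estimate. This is an interpretation of the abstract only (single-mode variant; the `window` and `threshold` values are illustrative assumptions, not the authors' parameters).

```python
import numpy as np

class SlidingModeBackground:
    """Illustrative per-pixel sliding-window mode background model.

    A per-pixel histogram of gray levels is kept over the last `window`
    frames; the background estimate at each pixel is the most frequently
    occurring gray level (the mode). The histogram is updated incrementally,
    as the abstract describes: drop the oldest frame, add the current one.
    `window` and `threshold` are hypothetical example values.
    """

    def __init__(self, shape, window=50, levels=256, threshold=30):
        self.window = window
        self.threshold = threshold
        # One histogram of `levels` bins per pixel.
        self.hist = np.zeros(shape + (levels,), dtype=np.int32)
        self.frames = []  # FIFO buffer of the last `window` frames
        self._rows, self._cols = np.indices(shape)

    def apply(self, frame):
        """Update the model with `frame` and return a boolean foreground mask."""
        if len(self.frames) == self.window:
            oldest = self.frames.pop(0)
            self.hist[self._rows, self._cols, oldest] -= 1  # drop oldest frame
        self.frames.append(frame)
        self.hist[self._rows, self._cols, frame] += 1        # add current frame
        mode = self.hist.argmax(axis=2)                      # per-pixel background mode
        # A pixel is foreground if it deviates from the background mode.
        return np.abs(frame.astype(int) - mode) > self.threshold
```

The dual-mode extension in the paper would keep the two dominant histogram bins per pixel rather than one, so that a background alternating between two states (e.g. a flickering light) is not flagged as foreground.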



Paper Citation


in Harvard Style

Tsai, D. and Chiu, W. (2013). Dual-mode Detection for Foreground Segmentation in Low-contrast Video Images. In Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2013), ISBN 978-989-8565-47-1, pages 431-435. DOI: 10.5220/0004216004310435


in Bibtex Style

@conference{visapp13,
author={Du-Ming Tsai and Wei-Yao Chiu},
title={Dual-mode Detection for Foreground Segmentation in Low-contrast Video Images},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2013)},
year={2013},
pages={431-435},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004216004310435},
isbn={978-989-8565-47-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2013)
TI - Dual-mode Detection for Foreground Segmentation in Low-contrast Video Images
SN - 978-989-8565-47-1
AU - Tsai D.
AU - Chiu W.
PY - 2013
SP - 431
EP - 435
DO - 10.5220/0004216004310435
ER -