A Diffusion Dimensionality Reduction Approach to Background Subtraction in Video Sequences

Dina Dushnik, Alon Schclar, Amir Averbuch, Raid Saabni


Identifying moving objects in a video sequence is a fundamental and critical task in many computer-vision applications. A common approach performs background subtraction, which identifies moving objects as the portion of a video frame that differs significantly from a background model. An effective background subtraction algorithm must be robust to changes in the background and must avoid detecting non-stationary background objects such as moving leaves, rain, snow, and shadows. In addition, its internal background model should respond quickly to changes in the background, such as objects that start or stop moving. We present a new algorithm for background subtraction in video sequences captured by a stationary camera. Our approach processes the video sequence as a 3D cube in which time forms the third axis. The background is identified by first applying the Diffusion Bases (DB) dimensionality reduction algorithm along the time axis and then applying an iterative method to extract the background.
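To make the cube-plus-reduction idea concrete, here is a minimal sketch on hypothetical synthetic data. A rank-1 truncated SVD stands in for the Diffusion Bases (DB) algorithm, and a fixed threshold stands in for the iterative extraction step, so this illustrates the structure of the approach rather than the paper's actual method; all names, sizes, and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic sequence: 40 frames of 32x32 pixels, a static
# horizontal-gradient background plus a small bright square that moves
# one pixel per frame across the scene.
T, H, W = 40, 32, 32
background = np.tile(np.linspace(0.2, 0.5, W), (H, 1))
video = np.repeat(background[None], T, axis=0)
video += rng.normal(0.0, 0.01, video.shape)       # sensor noise
for t in range(T):
    x = 4 + t % (W - 8)                           # object's left edge
    video[t, 12:18, x:x + 6] = 1.0                # moving 6x6 square

# View the cube as T points in R^(H*W) and reduce along the time axis.
# A rank-1 truncated SVD stands in for the DB step here: with a
# stationary camera the background dominates the leading component,
# so the low-rank reconstruction serves as the background model.
X = video.reshape(T, H * W)                       # one row per frame
U, s, Vt = np.linalg.svd(X, full_matrices=False)
bg_model = (U[:, :1] * s[:1]) @ Vt[:1]            # rank-1 background
residual = np.abs(X - bg_model).reshape(T, H, W)

# Pixels that deviate strongly from the background model are foreground.
mask = residual > 0.2
```

In each frame the resulting mask is concentrated on the moving square, while the static gradient and the noise fall below the threshold.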

