
IMPLICIT TRACKING OF MULTIPLE OBJECTS BASED ON 
BAYESIAN REGION LABEL ASSIGNMENT 
Masaya Ikeda, Kan Okubo and Norio Tagawa 
 Faculty of System Design, Tokyo Metropolitan University, Asahigaoka 6-6, Hino, Tokyo, Japan 
Keywords:  Object tracking, MAP assignment, Occlusion, Optical flow. 
Abstract:  Various template matching methods are commonly used for object tracking. However, they cannot fully cope with apparent changes of a target object in images. On the other hand, to discriminate multiple objects in still images, label assignment based on MAP estimation using object features is convenient. In this study, we propose a method that tracks multiple objects stably without explicit tracking by extending this MAP assignment in the temporal direction. We introduce two techniques: the position and size of the target detected in the previous frame are propagated to the current frame as a prior probability of the target region, and the distribution of the target's feature values in the feature space is adaptively updated based on the detection result at each frame. Since the proposed method is based on label assignment rather than on explicit tracking of the target's appearance in images, it is especially robust against occlusion.
1 INTRODUCTION 
Detection and tracking of moving objects have long been studied as fundamental technologies of image sequence processing. Various template matching methods are commonly used for object tracking. A template matching method that uses the intensity pattern of the object region detected in the previous frame as a template can locate the moving region directly in the next frame. Hence, such a method is effective as long as the target's shape does not change in the images. However, it is difficult to track the target stably if its shape changes drastically, as happens when the target's motion has a component along the viewing direction and/or occlusion arises. Some methods have been proposed to avoid these shortcomings (Harville et al., 1999; Dowson and Bowden, 2008), but they are not practical from the viewpoints of computational complexity and so on.
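For concreteness, the following minimal sketch (our illustration, not part of the original paper) shows the conventional template matching step discussed above: the intensity pattern cut from the previous frame is searched for in the current frame. The function name, the bounding-box convention, and the normalized cross-correlation criterion are illustrative assumptions.

```python
import cv2


def track_by_template(prev_frame, curr_frame, prev_bbox):
    """Locate in curr_frame the template cut from prev_frame.

    prev_bbox = (x, y, w, h) is the target region detected in the
    previous frame.  Normalized cross-correlation is used here only
    as one typical matching criterion.
    """
    x, y, w, h = prev_bbox
    template = prev_frame[y:y + h, x:x + w]

    # Correlation surface over the current frame.
    response = cv2.matchTemplate(curr_frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)

    # The best match gives the new bounding box; max_val drops when the
    # target's apparent shape changes or occlusion occurs, which is the
    # failure mode discussed above.
    return (max_loc[0], max_loc[1], w, h), max_val
```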
Moving regions can be detected using background subtraction and/or temporal subtraction (Stauffer and Grimson, 1999). However, a tracking procedure is still required to identify the same region among multiple moving regions. Therefore, methods based on region detection using object features, without explicit tracking, have drawn attention (Kamijo et al., 2001).
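As a minimal sketch of the temporal subtraction idea mentioned above (an illustration only; the threshold value and function name are assumptions), moving regions can be obtained as pixels whose intensity changes between consecutive frames. The resulting mask shows where motion occurred, but not which object each connected region belongs to, which is why a discrimination step is still needed.

```python
import numpy as np


def moving_region_mask(prev_frame, curr_frame, threshold=15):
    """Temporal (frame-to-frame) subtraction on gray-level images.

    Returns a boolean mask of pixels whose intensity change exceeds the
    threshold.  The mask indicates *where* motion occurred, but not
    *which* object each region belongs to, so a separate
    discrimination/tracking step is still required.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```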
These methods discriminate multiple objects using object features, and object motion is usually used as the feature. However, target objects having the same motion cannot be discriminated by motion alone. Even if other features are also used, this ambiguity cannot be completely eliminated.
In this study, we construct a method that stably tracks multiple objects implicitly by extending the above MAP assignment to image sequences. In this method, 2-D motion is used as the object feature. Additionally, to avoid the above-mentioned ambiguity caused by adopting a single feature, the position and size of the target detected in the previous frame are propagated to the current frame as a prior probability of the target region. In this framework, occlusion is handled adaptively at low cost, although the particle filter has recently been applied successfully to explicit tracking in order to treat occlusion exactly (Särkkä et al., 2007).
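The following sketch illustrates how such a per-region MAP decision can combine a motion likelihood with a prior propagated from the previous frame. It is our illustration rather than the authors' implementation; the Gaussian form of the likelihood and the per-label prior maps are assumptions made for concreteness.

```python
import numpy as np


def map_label(flow_vec, region_center, motion_models, prior_maps):
    """Assign one label to a local region by MAP estimation.

    flow_vec      : 2-D optical-flow feature of the region.
    region_center : (row, col) position of the region in the image.
    motion_models : per-label (mean, covariance) of the flow feature,
                    updated from the previous frame's detections.
    prior_maps    : per-label prior probability images propagated from
                    the position and size detected in the previous frame.
    """
    best_label, best_post = None, -np.inf
    for label, (mean, cov) in motion_models.items():
        d = flow_vec - mean
        # Gaussian log-likelihood of the motion feature (assumed form).
        log_lik = -0.5 * d @ np.linalg.inv(cov) @ d \
                  - 0.5 * np.log(np.linalg.det(2 * np.pi * cov))
        # Spatial prior propagated from the previous frame's detection.
        log_prior = np.log(prior_maps[label][region_center] + 1e-12)
        post = log_lik + log_prior
        if post > best_post:
            best_label, best_post = label, post
    return best_label
```

Because the prior map concentrates probability mass around the previously detected position and size, two objects with identical motion can still be separated, which is the role the propagated prior plays in the proposed framework.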
2  OUTLINE OF THE PROPOSED METHOD
In the proposed method, an image sequence is treated as a set of successive still images, and each image is divided into small local regions. Objects and the background are therefore assumed to be sets of these regions. The label assigned to each region indicates which object, or the background, the region belongs to.
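For concreteness, one possible way to divide each frame into such local regions is shown below; the block size of 8 pixels is an illustrative assumption, not a value specified by the paper.

```python
import numpy as np


def split_into_blocks(frame, block=8):
    """Divide a frame into non-overlapping block x block regions.

    Each region later receives one label (an object index or the
    background).  The block size is an illustrative choice.
    """
    h, w = frame.shape[:2]
    regions = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            regions.append(((r, c), frame[r:r + block, c:c + block]))
    return regions
```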