Visual Tracking with Similarity Matching Ratio

Aysegul Dundar, Jonghoon Jin, Eugenio Culurciello

2013

Abstract

This paper presents a novel approach to visual tracking: the Similarity Matching Ratio (SMR). The traditional approach to tracking minimizes some measure of the difference between the template and a patch from the frame. This approach is vulnerable to outliers and drastic appearance changes, and extensive research has focused on making it more tolerant to them. However, this often results in longer, corrective algorithms that do not solve the original problem. This paper proposes a novel formulation of the tracking problem, the SMR, which turns the differences into probability measures: only pixel differences below a threshold count toward deciding the match; the rest are ignored. This makes the SMR tracker robust to outliers and to points whose appearance changes dramatically. The SMR tracker is tested on challenging video sequences and achieves state-of-the-art performance.
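The core idea in the abstract can be sketched in a few lines: score each candidate patch by the fraction of pixels whose difference from the template falls below a threshold, so that outlier pixels contribute nothing rather than dominating the score. The sketch below is an illustration based only on the abstract's description, not the authors' implementation; the threshold value, the exhaustive search window, and the function names are assumptions.

```python
import numpy as np

def smr(template, patch, threshold=30):
    # Similarity Matching Ratio: only pixel differences below the
    # threshold count toward the match; larger differences (outliers,
    # occluded or drastically changed pixels) are ignored entirely.
    # Threshold value is an assumed example, not from the paper.
    diff = np.abs(template.astype(np.int32) - patch.astype(np.int32))
    return np.count_nonzero(diff < threshold) / diff.size

def track(frame, template, top_left, search=8, threshold=30):
    # Exhaustive search in a window around the previous location,
    # keeping the patch with the highest SMR score.
    h, w = template.shape
    y0, x0 = top_left
    best_score, best_pos = -1.0, top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # candidate patch falls outside the frame
            score = smr(template, frame[y:y + h, x:x + w], threshold)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

Because the ratio saturates per pixel (a pixel either matches or it does not), a handful of wildly different pixels cannot drag the score down the way they would in a sum-of-squared-differences cost.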



Paper Citation


in Harvard Style

Dundar A., Jin J. and Culurciello E. (2013). Visual Tracking with Similarity Matching Ratio. In Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2013) ISBN 978-989-8565-48-8, pages 280-285. DOI: 10.5220/0004288602800285


in Bibtex Style

@conference{visapp13,
author={Aysegul Dundar and Jonghoon Jin and Eugenio Culurciello},
title={Visual Tracking with Similarity Matching Ratio},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2013)},
year={2013},
pages={280-285},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004288602800285},
isbn={978-989-8565-48-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2013)
TI - Visual Tracking with Similarity Matching Ratio
SN - 978-989-8565-48-8
AU - Dundar A.
AU - Jin J.
AU - Culurciello E.
PY - 2013
SP - 280
EP - 285
DO - 10.5220/0004288602800285