FAST LEARNABLE OBJECT TRACKING AND DETECTION IN HIGH-RESOLUTION OMNIDIRECTIONAL IMAGES

David Hurych, Karel Zimmermann, Tomáš Svoboda

2011

Abstract

This paper addresses object detection and tracking in high-resolution omnidirectional images. The foreseen application is the visual subsystem of a rescue robot equipped with an omnidirectional camera, which demands real-time efficiency and robustness against a changing viewpoint. Object detectors typically do not guarantee a specific frame rate; their running time may depend heavily on scene complexity and image resolution. An adaptive tracker can often cope with situations where the appearance of the object is far from the training set, but once the tracker is lost it almost never finds the object again. We propose a combined solution in which a very efficient tracker (based on sequential linear predictors) incrementally accommodates the varying appearance and speeds up the whole process. We show experimentally that the combined algorithm, evaluated by the ratio between false positives and false negatives, outperforms both individual algorithms. The tracker allows the expensive detector to be run only sparsely, which lets the combined solution run in real time on 12 MPx images from a high-resolution omnidirectional camera (Ladybug3).
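
To make the combination concrete, the sketch below is a minimal illustration, not the authors' implementation: it reduces the paper's sequential linear predictors to a single translation-only predictor learned by least squares, and shows how such a tracker and an expensive detector can be interleaved so that the detector runs only after a tracking failure. The detector stub detect_object, the NCC validation threshold and the exponential template blending are illustrative assumptions.

import numpy as np


class LinearPredictor:
    """Single translation-only linear predictor learned by least squares.
    The paper uses a *sequence* of such predictors and more motion
    parameters; this class keeps only the core idea."""

    def __init__(self, patch_size=24, n_train=300, max_shift=8):
        self.patch_size = patch_size
        self.n_train = n_train
        self.max_shift = max_shift
        self.H = None          # learned regression matrix (2 x n_pixels)
        self.template = None   # reference appearance (flattened patch)

    def _crop(self, image, x, y):
        # No border handling for brevity; assumes the patch fits the image.
        s = self.patch_size
        return image[y:y + s, x:x + s].astype(np.float64).ravel()

    def learn(self, image, x, y, rng=np.random.default_rng(0)):
        """Learn H from synthetically shifted crops around (x, y)."""
        self.template = self._crop(image, x, y)
        shifts = rng.integers(-self.max_shift, self.max_shift + 1,
                              size=(self.n_train, 2))
        # Column i: intensity difference caused by the i-th synthetic shift.
        D = np.stack([self._crop(image, x + dx, y + dy) - self.template
                      for dx, dy in shifts], axis=1)
        T = shifts.T.astype(np.float64)
        # H maps an observed difference to the correction that undoes the shift.
        self.H = -T @ np.linalg.pinv(D)

    def track(self, image, x, y):
        """One prediction step: corrected position plus an NCC-like score."""
        d = self._crop(image, x, y) - self.template
        dx, dy = self.H @ d
        x_new, y_new = int(round(x + dx)), int(round(y + dy))
        score = np.corrcoef(self._crop(image, x_new, y_new), self.template)[0, 1]
        return x_new, y_new, score


def detect_object(image):
    """Placeholder for the expensive detector (not specified here);
    should return an (x, y) position or None."""
    raise NotImplementedError


def run_combined(frames, score_threshold=0.7, alpha=0.1):
    """Detector (re)initialises the tracker; the tracker handles every frame
    and blends new appearance into its template to accommodate change."""
    predictor, pos = LinearPredictor(), None
    for frame in frames:
        if pos is None:                      # tracker lost -> run detector
            pos = detect_object(frame)
            if pos is None:
                continue                     # nothing found in this frame
            predictor.learn(frame, *pos)
        x, y, score = predictor.track(frame, *pos)
        if score < score_threshold:          # validation failed -> declare loss
            pos = None
            continue
        pos = (x, y)
        # Simple incremental appearance update (exponential blending).
        predictor.template = ((1 - alpha) * predictor.template
                              + alpha * predictor._crop(frame, x, y))
        yield pos

Used as "for position in run_combined(video_frames): ...", the per-frame cost is dominated by a single matrix-vector product in track(), which is what allows the expensive detector to be invoked only after a validation failure, the property the abstract relies on for real-time operation on 12 MPx images.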

Paper Citation


in Harvard Style

Hurych, D., Zimmermann, K. and Svoboda, T. (2011). FAST LEARNABLE OBJECT TRACKING AND DETECTION IN HIGH-RESOLUTION OMNIDIRECTIONAL IMAGES. In Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP (VISIGRAPP 2011), ISBN 978-989-8425-47-8, pages 521-530. DOI: 10.5220/0003369705210530


in Bibtex Style

@conference{visapp11,
author={David Hurych and Karel Zimmermann and Tomáš Svoboda},
title={FAST LEARNABLE OBJECT TRACKING AND DETECTION IN HIGH-RESOLUTION OMNIDIRECTIONAL IMAGES},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2011)},
year={2011},
pages={521-530},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003369705210530},
isbn={978-989-8425-47-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2011)
TI - FAST LEARNABLE OBJECT TRACKING AND DETECTION IN HIGH-RESOLUTION OMNIDIRECTIONAL IMAGES
SN - 978-989-8425-47-8
AU - Hurych D.
AU - Zimmermann K.
AU - Svoboda T.
PY - 2011
SP - 521
EP - 530
DO - 10.5220/0003369705210530
ER -