PERFORMANCE EVALUATION OF FEATURE DETECTION FOR LOCAL OPTICAL FLOW TRACKING

Tobias Senst, Brigitte Unger, Ivo Keller, Thomas Sikora

2012

Abstract

Due to its high computational efficiency, the Kanade-Lucas-Tomasi feature tracker is still a widely accepted and utilized method for computing sparse motion fields or trajectories in video sequences. The method combines Good Features To Track feature detection with a pyramidal Lucas-Kanade feature tracking algorithm. It is well known that Good Features To Track takes the Aperture Problem into account, but it does not consider the Generalized Aperture Problem. In this paper we provide an evaluation of a set of alternative feature detection methods, drawn from feature matching techniques such as FAST, SIFT and MSER. The evaluation is based on the Middlebury dataset and performed with an improved pyramidal Lucas-Kanade method, the RLOF feature tracker. To compare the results of each feature detector and RLOF pair, we propose a methodology based on accuracy, efficiency and covering measurements.
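As a rough illustration of the pipeline the abstract describes, the sketch below detects sparse features with interchangeable detectors (Good Features To Track, FAST, SIFT, MSER), tracks them with OpenCV's pyramidal Lucas-Kanade tracker, and scores the resulting displacements against dense ground-truth flow such as the Middlebury data. This is a minimal sketch, not the paper's evaluation code: the pyramidal Lucas-Kanade call stands in for the RLOF tracker, and the function names (detect_points, track_points, endpoint_error) and all parameter values are illustrative assumptions.

import cv2
import numpy as np

def detect_points(gray, method="gftt", max_points=1000):
    """Return an (N, 1, 2) float32 array of feature locations."""
    if method == "gftt":                      # Good Features To Track (Shi-Tomasi)
        return cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                       qualityLevel=0.01, minDistance=7)
    if method == "fast":
        kps = cv2.FastFeatureDetector_create(threshold=20).detect(gray)
    elif method == "sift":
        kps = cv2.SIFT_create(nfeatures=max_points).detect(gray)
    elif method == "mser":
        kps = cv2.MSER_create().detect(gray)   # region centers as keypoints
    else:
        raise ValueError(method)
    kps = sorted(kps, key=lambda k: k.response, reverse=True)[:max_points]
    return np.float32([k.pt for k in kps]).reshape(-1, 1, 2)

def track_points(prev_gray, next_gray, pts):
    """Pyramidal LK tracking (stand-in for RLOF); returns points and a validity mask."""
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return nxt, status.reshape(-1).astype(bool)

def endpoint_error(pts, tracked, gt_flow):
    """Mean endpoint error of tracked displacements vs. dense ground-truth flow (H x W x 2)."""
    p = pts.reshape(-1, 2)
    d = tracked.reshape(-1, 2) - p                       # estimated displacement
    xi = np.clip(p[:, 0].round().astype(int), 0, gt_flow.shape[1] - 1)
    yi = np.clip(p[:, 1].round().astype(int), 0, gt_flow.shape[0] - 1)
    gt = gt_flow[yi, xi]                                 # ground truth at feature positions
    return float(np.linalg.norm(d - gt, axis=1).mean())

Swapping the detector while keeping the tracker fixed mirrors the comparison the abstract describes; a covering measure could additionally be derived from how evenly the surviving tracks are distributed over the frame.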

References

  1. Agrawal, M., Konolige, K., and Blas, M. (2008). CenSurE: Center surround extremas for realtime feature detection and matching. In European Conference on Computer Vision (ECCV 2008), pages 102-115.
  2. Ali, S. and Shah, M. (2007). A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis. In Computer Vision and Pattern Recognition (CVPR 2007), pages 1-6.
  3. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M. J., and Szeliski, R. (2009). A database and evaluation methodology for optical flow. Technical report MSR-TR-2009-179, Microsoft Research.
  4. Bay, H., Ess, A., Tuytelaars, T., and Gool, L. V. (2008). SURF: Speeded up robust features. Computer Vision and Image Understanding, 110(3):346-359.
  5. Bouguet, J.-Y. (2000). Pyramidal implementation of the Lucas Kanade feature tracker. Technical report, Intel Corporation Microprocessor Research Lab.
  6. Brostow, G. and Cipolla, R. (2006). Unsupervised Bayesian detection of independent motion in crowds. In Computer Vision and Pattern Recognition (CVPR 2006), pages 594-601.
  7. Fradet, M., Robert, P., and Pérez, P. (2009). Clustering point trajectories with various life-spans. In Conference for Visual Media Production (CVMP 2009), pages 7-14.
  8. Hu, M., Ali, S., and Shah, M. (2008). Detecting global motion patterns in complex videos. In International Conference on Pattern Recognition (ICPR 2008), pages 1-5.
  9. Klippenstein, J. and Zhang, H. (2007). Quantitative evaluation of feature extractors for visual SLAM. In Canadian Conference on Computer and Robot Vision (CRV 2007), pages 157-164.
  10. Lowe, D. (1999). Object recognition from local scale-invariant features. In International Conference on Computer Vision (ICCV 1999), pages 1150-1157.
  11. Lucas, B. D. and Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In International Joint Conference on Artificial Intelligence (IJCAI 1981), pages 674-679.
  12. Matas, J., Chum, O., Urban, M., and Pajdla, T. (2002). Robust wide baseline stereo from maximally stable extremal regions. In British Machine Vision Conference (BMVC 2002), pages 384-393.
  13. Otte, M. and Nagel, H.-H. (1994). Optical flow estimation: Advances and comparisons. In European Conference on Computer Vision (ECCV 1994), pages 49-60.
  14. Rehrmann, V. and Priese, L. (1997). Fast and robust segmentation of natural color scenes. In Asian Conference on Computer Vision (ACCV 1997), pages 598-606.
  15. Rosten, E. and Drummond, T. (2006). Machine learning for high-speed corner detection. In European Conference on Computer Vision (ECCV 2006), pages 430-443.
  16. Senst, T., Eiselein, V., Heras Evangelio, R., and Sikora, T. (2011). Robust modified L2 local optical flow estimation and feature tracking. In IEEE Workshop on Motion and Video Computing (WMVC 2011), pages 685-690.
  17. Shi, J. and Tomasi, C. (1994). Good features to track. In Computer Vision and Pattern Recognition (CVPR 1994), pages 593-600.
  18. Sinha, S. N., Frahm, J.-M., Pollefeys, M., and Genc, Y. (2006). GPU-based video feature tracking and matching. Technical report 06-012, UNC Chapel Hill.
  19. Tok, M., Glantz, A., Krutz, A., and Sikora, T. (2011). Feature-based global motion estimation using the Helmholtz principle. In International Conference on Acoustics, Speech and Signal Processing (ICASSP 2011), pages 1561-1564.
  20. Tomasi, C. and Kanade, T. (1991). Detection and tracking of point features. Technical report CMU-CS-91-132, CMU.
  21. Tuytelaars, T. and Mikolajczyk, K. (2008). Local invariant feature detectors: A survey. Foundations and Trends in Computer Graphics and Vision, 3:177-280.
  22. Zach, C., Gallup, D., and Frahm, J. (2008). Fast gain-adaptive KLT tracking on the GPU. In Visual Computer Vision on GPUs workshop (CVGPU 08), pages 1-7.


Paper Citation


in Harvard Style

Senst T., Unger B., Keller I. and Sikora T. (2012). PERFORMANCE EVALUATION OF FEATURE DETECTION FOR LOCAL OPTICAL FLOW TRACKING. In Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods - Volume 2: ICPRAM, ISBN 978-989-8425-99-7, pages 303-309. DOI: 10.5220/0003731103030309


in Bibtex Style

@conference{icpram12,
author={Tobias Senst and Brigitte Unger and Ivo Keller and Thomas Sikora},
title={PERFORMANCE EVALUATION OF FEATURE DETECTION FOR LOCAL OPTICAL FLOW TRACKING},
booktitle={Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods - Volume 2: ICPRAM},
year={2012},
pages={303-309},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003731103030309},
isbn={978-989-8425-99-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods - Volume 2: ICPRAM
TI - PERFORMANCE EVALUATION OF FEATURE DETECTION FOR LOCAL OPTICAL FLOW TRACKING
SN - 978-989-8425-99-7
AU - Senst T.
AU - Unger B.
AU - Keller I.
AU - Sikora T.
PY - 2012
SP - 303
EP - 309
DO - 10.5220/0003731103030309