Quantitative Comparison of Affine Invariant Feature Matching

Zoltán Pusztai, Levente Hajder

Abstract

Accurate feature matching between images is a key problem in computer vision, so comparing the available matchers is essential. Several survey papers exist in the field; this study extends one of them. The aim of this paper is to compare competing techniques on ground truth (GT) data generated by our structured-light 3D scanner equipped with a rotating table. The quantitative comparison is based on real images of six rotating 3D objects. The rival detectors in the comparison are Harris-Laplace, Hessian-Laplace, Harris-Affine, Hessian-Affine, IBR, EBR, SURF, and MSER.
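The evaluation protocol the abstract describes (matches scored against GT correspondences derived from known turntable rotations) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the point coordinates, pinhole camera parameters, and the 2-pixel tolerance are assumed values chosen for the example.

```python
import math

def rotate_y(p, angle_deg):
    """Rotate a 3D point around the vertical (turntable) axis."""
    a = math.radians(angle_deg)
    x, y, z = p
    return (math.cos(a) * x + math.sin(a) * z, y, -math.sin(a) * x + math.cos(a) * z)

def project(p, focal=800.0, cx=320.0, cy=240.0, cam_z=5.0):
    """Pinhole projection of a point seen by a camera on the z-axis (assumed intrinsics)."""
    x, y, z = p
    z_cam = cam_z - z  # depth of the point in front of the camera
    return (focal * x / z_cam + cx, focal * y / z_cam + cy)

def match_accuracy(matches, gt_pairs, tol=2.0):
    """Fraction of matches landing within `tol` pixels of the GT correspondence."""
    correct = 0
    for (p0, p1) in matches:
        # find the GT pair whose first-view point is closest to the match's first point
        gt = min(gt_pairs, key=lambda g: (g[0][0] - p0[0])**2 + (g[0][1] - p0[1])**2)
        err = math.hypot(p1[0] - gt[1][0], p1[1] - gt[1][1])
        if err <= tol:
            correct += 1
    return correct / len(matches)

# three GT correspondences between the 0-degree and 15-degree views
points = [(0.5, 0.2, 0.1), (-0.3, 0.4, 0.0), (0.1, -0.2, 0.3)]
gt_pairs = [(project(p), project(rotate_y(p, 15.0))) for p in points]

# a hypothetical matcher output: two correct matches, one 5 px off
matches = [gt_pairs[0],
           (gt_pairs[1][0], (gt_pairs[1][1][0] + 5.0, gt_pairs[1][1][1])),
           gt_pairs[2]]

accuracy = match_accuracy(matches, gt_pairs)  # 2 of 3 within tolerance
```

In the paper itself the GT correspondences come from the structured-light scan and the calibrated turntable rotation rather than from synthetic projections; the scoring step, however, has this general shape.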

References

  1. Agrawal, M. and Konolige, K. (2008). CenSurE: Center surround extremas for realtime feature detection and matching. In ECCV.
  2. Alcantarilla, P. F., Bartoli, A., and Davison, A. J. (2012). KAZE features. In Proceedings of the 12th European Conference on Computer Vision, pages 214-227.
  3. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M., and Szeliski, R. (2011). A database and evaluation methodology for optical flow. International Journal of Computer Vision, 92(1):1-31.
  4. Bay, H., Ess, A., Tuytelaars, T., and Gool, L. J. V. (2008). Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3):346-359.
  5. Beaudet, P. (1978). Rotational invariant image operators. Proceedings of the 4th International Conference on Pattern Recognition, pages 579-583.
  6. Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  7. Förstner, W. and Gülch, E. (1987). A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Centres of Circular Features.
  8. Grauman, K. and Leibe, B. (2011). Visual Object Recognition. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
  9. Harris, C. and Stephens, M. (1988). A combined corner and edge detector. In Proc. of the Fourth Alvey Vision Conference, pages 147-151.
  10. Leutenegger, S., Chli, M., and Siegwart, R. Y. (2011). BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, ICCV '11, pages 2548-2555.
  11. Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, ICCV '99, pages 1150-1157.
  12. Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110.
  13. Mair, E., Hager, G. D., Burschka, D., Suppa, M., and Hirzinger, G. (2010). Adaptive and generic corner detection based on the accelerated segment test. In Proceedings of the 11th European Conference on Computer Vision: Part II, pages 183-196.
  14. Matas, J., Chum, O., Urban, M., and Pajdla, T. (2002). Robust wide baseline stereo from maximally stable extremal regions. In Proc. BMVC, pages 36.1-36.10. doi:10.5244/C.16.36.
  15. Mikolajczyk, K. and Schmid, C. (2002). An affine invariant interest point detector. In Proceedings of the 7th European Conference on Computer Vision-Part I, ECCV '02, pages 128-142, London, UK. Springer-Verlag.
  16. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T., and Gool, L. V. (2005). A comparison of affine region detectors. International Journal of Computer Vision, 65(1):43-72.
  17. Morel, J.-M. and Yu, G. (2009). ASIFT: A new framework for fully affine invariant image comparison. SIAM Journal on Imaging Sciences, 2(2):438-469.
  18. Alcantarilla, P. F., Nuevo, J., and Bartoli, A. (2013). Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference. BMVA Press.
  19. Pal, C. J., Weinman, J. J., Tran, L. C., and Scharstein, D. (2012). On learning conditional random fields for stereo - exploring model structures and approximate inference. International Journal of Computer Vision, 99(3):319-337.
  20. Pusztai, Z. and Hajder, L. (2016a). Quantitative Comparison of Feature Matchers Implemented in OpenCV3. In Computer Vision Winter Workshop. Available online at http://vision.fe.unilj.si/cvww2016/proceedings/papers/04.pdf.
  21. Pusztai, Z. and Hajder, L. (2016b). A turntable-based approach for ground truth tracking data generation. VISAPP, pages 498-509.
  22. Rosten, E. and Drummond, T. (2005). Fusing points and lines for high performance tracking. In International Conference on Computer Vision, pages 1508-1515.
  23. Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. In International Conference on Computer Vision.
  24. Scharstein, D., Hirschmüller, H., Kitajima, Y., Krathwohl, G., Nesic, N., Wang, X., and Westling, P. (2014). High-resolution stereo datasets with subpixel-accurate ground truth. In Pattern Recognition - 36th German Conference, GCPR 2014, Münster, Germany, September 2-5, 2014, Proceedings, pages 31-42.
  25. Scharstein, D. and Szeliski, R. (2002). A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47:7-42.
  26. Scharstein, D. and Szeliski, R. (2003). High-accuracy stereo depth maps using structured light. In CVPR (1), pages 195-202.
  27. Tomasi, C. and Shi, J. (1994). Good Features to Track. In IEEE Conf. Computer Vision and Pattern Recognition, pages 593-600.
  28. Tuytelaars, T. and Gool, L. V. (2000). Wide baseline stereo matching based on local, affinely invariant regions. In Proc. BMVC, pages 412-425.
  29. Tuytelaars, T. and Van Gool, L. (2004). Matching widely separated views based on affine invariant regions. Int. J. Comput. Vision, 59(1):61-85.
  30. Wu, J., Cui, Z., Sheng, V., Zhao, P., Su, D., and Gong, S. (2013). A comparative study of sift and its variants. Measurement Science Review, 13(3):122-131.
  31. Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell., 22(11):1330-1334.


Paper Citation


in Harvard Style

Pusztai Z. and Hajder L. (2017). Quantitative Comparison of Affine Invariant Feature Matching. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017) ISBN 978-989-758-227-1, pages 515-522. DOI: 10.5220/0006263005150522


in Bibtex Style

@conference{visapp17,
author={Zoltán Pusztai and Levente Hajder},
title={Quantitative Comparison of Affine Invariant Feature Matching},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={515-522},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006263005150522},
isbn={978-989-758-227-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017)
TI - Quantitative Comparison of Affine Invariant Feature Matching
SN - 978-989-758-227-1
AU - Pusztai Z.
AU - Hajder L.
PY - 2017
SP - 515
EP - 522
DO - 10.5220/0006263005150522