CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation

Safa Ouerghi, Remi Boutteau, Xavier Savatier, Fethi Tlili

Abstract

Egomotion estimation is a fundamental problem in structure from motion and in autonomous navigation for mobile robots. Several methods have been proposed to estimate camera motion from sets of image correspondences of varying size. Five-point methods, which use the minimal number of correspondences required to estimate the essential matrix, have raised special interest for their applicability in a hypothesize-and-test framework. Such a framework allows relative pose recovery, but at the expense of a much higher computational cost when the ratio of outliers is high. To address this cost, we propose in this work a CUDA-based solution in which the essential matrix is estimated with the Gröbner-basis version of the five-point algorithm, complemented by robust estimation. The hardware-specific implementation considerations and the parallelization methods employed are described in detail. A performance analysis against an existing CPU implementation is also given, showing a speedup of 4x over the CPU for an outlier ratio e = 0.5, which is common when the essential matrix is estimated from automatically computed point correspondences. The speedup increases further at higher outlier ratios.
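The abstract's observation that higher outlier ratios sharply increase the cost of a hypothesize-and-test scheme follows from the standard RANSAC trial-count formula (Fischler and Bolles, 1981): with sample size s and outlier ratio e, roughly log(1 - p) / log(1 - (1 - e)^s) samples are needed to draw at least one all-inlier sample with confidence p. The sketch below illustrates this relationship for the five-point sampler; it is not part of the authors' CUDA implementation.

```python
import math

def ransac_trials(outlier_ratio: float, sample_size: int = 5,
                  confidence: float = 0.99) -> int:
    """Number of RANSAC hypotheses needed so that, with the given
    confidence, at least one minimal sample is outlier-free."""
    # Probability that a single random minimal sample is all inliers.
    inlier_sample_prob = (1.0 - outlier_ratio) ** sample_size
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - inlier_sample_prob))

print(ransac_trials(0.5))  # e = 0.5, 5-point samples -> 146 hypotheses
print(ransac_trials(0.7))  # e = 0.7 -> 1893 hypotheses, ~13x more work
```

Because each hypothesis (a five-point essential-matrix solve plus inlier scoring) is independent, this rapidly growing trial count is exactly the kind of workload that parallelizes well across CUDA threads.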



Paper Citation


in Harvard Style

Ouerghi S., Boutteau R., Savatier X. and Tlili F. (2017). CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017) ISBN 978-989-758-225-7, pages 107-114. DOI: 10.5220/0006171501070114


in Bibtex Style

@conference{visapp17,
author={Safa Ouerghi and Remi Boutteau and Xavier Savatier and Fethi Tlili},
title={CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={107-114},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006171501070114},
isbn={978-989-758-225-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)
TI - CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation
SN - 978-989-758-225-7
AU - Ouerghi S.
AU - Boutteau R.
AU - Savatier X.
AU - Tlili F.
PY - 2017
SP - 107
EP - 114
DO - 10.5220/0006171501070114