CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation

Authors: Safa Ouerghi 1 ; Remi Boutteau 2 ; Xavier Savatier 2 and Fethi Tlili 1

Affiliations: 1 Sup’Com and GRESCOM, Tunisia ; 2 ESIGELEC and IRSEEM, France

ISBN: 978-989-758-225-7

ISSN: 2184-4321

Keyword(s): Egomotion, Structure from Motion, Robotics, CUDA, GPU.

Related Ontology Subjects/Areas/Topics: Active and Robot Vision ; Computer Vision, Visualization and Computer Graphics ; Image Formation and Preprocessing ; Image Generation Pipeline: Algorithms and Techniques ; Motion, Tracking and Stereo Vision ; Stereo Vision and Structure from Motion

Abstract: Egomotion estimation is a fundamental issue in structure from motion and autonomous navigation for mobile robots. Several methods have been proposed to estimate camera motion from sets of image correspondences of varying size. Five-point methods, which use the minimal number of correspondences required to estimate the essential matrix, have raised special interest for their application in a hypothesize-and-test framework. This approach allows relative pose recovery, but at the expense of a much higher computational time when dealing with higher ratios of outliers. To alleviate this cost, we propose in this work a CUDA-based solution for essential matrix estimation, performed using the Gröbner basis version of the five-point algorithm and complemented with robust estimation. The hardware-specific implementation considerations as well as the parallelization methods employed are described in detail. A performance analysis against an existing CPU implementation is also given, showing a speedup of 4 over the CPU for an outlier ratio ε = 0.5, which is common for essential matrix estimation from automatically computed point correspondences. Even higher speedups are obtained for higher outlier ratios.
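
As an illustration of the kind of GPU parallelism involved, the following CUDA kernel sketches the robust-estimation (hypothesize-and-test) stage only: each thread block scores one candidate essential matrix against all point correspondences using the Sampson error, and a block-wide reduction produces the inlier count. This is a minimal sketch under assumed data layouts, not the authors' implementation; the kernel name scoreHypotheses, the threshold parameter, and the memory layout are illustrative assumptions.

#include <cuda_runtime.h>

#define THREADS_PER_BLOCK 256

// Scores one candidate essential matrix per block against all correspondences.
// x1, x2: normalized image coordinates stored as [x0, y0, x1, y1, ...].
// E: row-major 3x3 candidate essential matrices, one per hypothesis.
// inlierCounts: output, one inlier count per hypothesis.
__global__ void scoreHypotheses(const float* __restrict__ x1,
                                const float* __restrict__ x2,
                                int numPts,
                                const float* __restrict__ E,
                                int* __restrict__ inlierCounts,
                                float sampsonThresh)
{
    __shared__ float Es[9];
    __shared__ int partial[THREADS_PER_BLOCK];

    // Stage this block's candidate E in shared memory.
    if (threadIdx.x < 9)
        Es[threadIdx.x] = E[blockIdx.x * 9 + threadIdx.x];
    __syncthreads();

    int count = 0;
    for (int i = threadIdx.x; i < numPts; i += blockDim.x) {
        float u1 = x1[2 * i], v1 = x1[2 * i + 1];
        float u2 = x2[2 * i], v2 = x2[2 * i + 1];

        // Epipolar lines l2 = E * p1 and l1 = E^T * p2, with p = (u, v, 1).
        float l2x = Es[0] * u1 + Es[1] * v1 + Es[2];
        float l2y = Es[3] * u1 + Es[4] * v1 + Es[5];
        float l2z = Es[6] * u1 + Es[7] * v1 + Es[8];
        float l1x = Es[0] * u2 + Es[3] * v2 + Es[6];
        float l1y = Es[1] * u2 + Es[4] * v2 + Es[7];

        // Sampson distance of the epipolar constraint p2^T E p1 = 0.
        float num = u2 * l2x + v2 * l2y + l2z;
        float den = l2x * l2x + l2y * l2y + l1x * l1x + l1y * l1y;
        if (num * num / den < sampsonThresh)
            ++count;
    }

    // Block-wide reduction of the per-thread inlier counts.
    partial[threadIdx.x] = count;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            partial[threadIdx.x] += partial[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        inlierCounts[blockIdx.x] = partial[0];
}

On the host, each of the up to ten essential matrices returned by the five-point solver for a minimal sample would be scored with scoreHypotheses<<<numHypotheses, THREADS_PER_BLOCK>>>(...), and the hypothesis with the largest inlier count kept.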

CC BY-NC-ND 4.0

Paper citation in several formats:
Ouerghi, S.; Boutteau, R.; Savatier, X. and Tlili, F. (2017). CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017) ISBN 978-989-758-225-7 ISSN 2184-4321, pages 107-114. DOI: 10.5220/0006171501070114

@conference{visapp17,
author={Safa Ouerghi and Remi Boutteau and Xavier Savatier and Fethi Tlili},
title={CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={107-114},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006171501070114},
isbn={978-989-758-225-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)
TI - CUDA Accelerated Visual Egomotion Estimation for Robotic Navigation
SN - 978-989-758-225-7
IS - 2184-4321
AU - Ouerghi, S.
AU - Boutteau, R.
AU - Savatier, X.
AU - Tlili, F.
PY - 2017
SP - 107
EP - 114
DO - 10.5220/0006171501070114
