Authors:
Sergio Cebollada 1; Luis Payá 1; David Valiente 1; Xiaoyi Jiang 2 and Oscar Reinoso 1
Affiliations:
1 Department of Systems Engineering and Automation, Miguel Hernández University, 03202 Elche, Spain
2 Department of Computer Science, University of Münster, 48149 Münster, Germany
Keyword(s):
Mobile Robots, Omnidirectional Images, Global Appearance Descriptors, Localization, Deep Learning.
Related Ontology Subjects/Areas/Topics:
Autonomous Agents; Image Processing; Informatics in Control, Automation and Robotics; Mobile Robots and Autonomous Systems; Robotics and Automation
Abstract:
In this work, different global appearance descriptors are evaluated to carry out the localization task, which is a crucial skill for autonomous mobile robots. The only source of information used to solve this problem is an omnidirectional camera. The captured images are processed to obtain global appearance descriptors, and the position of the robot is estimated by comparing the descriptor calculated for the test image with the descriptors contained in the visual model. The descriptors evaluated are based on (1) analytic methods (HOG and gist) and (2) deep learning techniques (auto-encoders and Convolutional Neural Networks). The localization is tested with a panoramic dataset captured in indoor environments under real operating conditions. The results show that descriptors based on deep learning can also be an interesting solution to carry out visual localization tasks.
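The localization step described in the abstract, comparing a test-image descriptor against the descriptors in the visual model, can be sketched as a nearest-neighbor search in descriptor space. The snippet below is a minimal illustration with NumPy, assuming the global appearance descriptors (e.g. HOG, gist, or CNN feature vectors) have already been computed; the function name `localize` and the toy two-dimensional descriptors are illustrative, not part of the paper.

```python
import numpy as np

def localize(test_descriptor, model_descriptors):
    """Return the index of the visual-model descriptor closest to the
    test descriptor (Euclidean distance), plus that distance.

    model_descriptors: one global appearance descriptor per capture
    position, stacked as rows of a 2-D array.
    """
    dists = np.linalg.norm(model_descriptors - test_descriptor, axis=1)
    best = int(np.argmin(dists))
    return best, float(dists[best])

# Hypothetical visual model: three capture positions, each represented
# by a (tiny, illustrative) global appearance descriptor.
model = np.array([[0.0, 0.0],
                  [1.0, 1.0],
                  [2.0, 2.0]])

idx, dist = localize(np.array([0.9, 1.1]), model)
# idx is 1: the test image is matched to the second model position.
```

In practice the retrieved index identifies the capture point of the most similar training image, so the estimated robot position is the coordinate associated with that image in the visual model.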