
RGB-D-based Human Detection and Segmentation for Mobile Robot Navigation in Industrial Environments

Authors: Oguz Kedilioglu¹; Markus Lieret¹; Julia Schottenhamml²; Tobias Würfl²; Andreas Blank¹; Andreas Maier² and Jörg Franke¹

Affiliations: ¹ Institute for Factory Automation and Production Systems, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91058 Erlangen, Germany; ² Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91058 Erlangen, Germany

ISBN: 978-989-758-488-6

ISSN: 2184-4321

Keyword(s): Image Segmentation, Object Recognition, Neural Networks, Deep Learning, Robotics, Autonomous Mobile Robots, Flexible Automation, Warehouse Automation.

Abstract: Automated guided vehicles (AGVs) are nowadays a common option for the efficient and automated in-house transportation of various cargo and materials. With the additional application of unmanned aerial vehicles (UAVs) in the delivery and intralogistics sector, this flow of materials is expected to be extended into the third dimension within the next decade. To ensure collision-free movement for those vehicles, optical, ultrasonic or capacitive distance sensors are commonly employed. While such systems allow collision-free navigation, they are not able to distinguish humans from static objects and therefore require the robot to move at a human-safe speed at all times. To overcome these limitations and allow environment-sensitive collision avoidance for UAVs and AGVs, we provide a solution for the depth-camera-based real-time semantic segmentation of workers in industrial environments. The semantic segmentation is based on an adapted version of the deep convolutional neural network (CNN) architecture FuseNet. After explaining the underlying methodology, we present an automated approach for the generation of weakly annotated training data and evaluate the performance of the trained model compared to other well-known approaches.
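The abstract's core building block, FuseNet, fuses a depth encoder branch into an RGB encoder branch by element-wise summation at each stage. The following is a minimal NumPy sketch of that fusion principle only; the placeholder "convolution", shapes, and names are illustrative assumptions, not the paper's adapted architecture.

```python
import numpy as np

def conv_block(x, w):
    # Placeholder "convolution": a 1x1 channel-mixing matmul followed by
    # ReLU stands in for a Conv-BN-ReLU block (illustration only).
    return np.maximum(x @ w, 0.0)

def fusenet_encoder_stage(rgb, depth, w_rgb, w_depth):
    """One FuseNet-style encoder stage: both modalities are processed
    separately, then the depth features are added element-wise into the
    RGB branch (the core FuseNet fusion idea). The depth branch itself
    continues unfused."""
    rgb = conv_block(rgb, w_rgb)
    depth = conv_block(depth, w_depth)
    fused = rgb + depth          # element-wise fusion into the RGB branch
    return fused, depth

# Toy feature maps: height x width x channels (3-channel RGB, 1-channel depth)
rng = np.random.default_rng(0)
rgb = rng.standard_normal((8, 8, 3))
depth = rng.standard_normal((8, 8, 1))

w_rgb = rng.standard_normal((3, 16))
w_depth = rng.standard_normal((1, 16))

fused, depth_out = fusenet_encoder_stage(rgb, depth, w_rgb, w_depth)
print(fused.shape)  # (8, 8, 16)
```

In the full network, several such stages are stacked with pooling in between, and a decoder upsamples the fused features back to a per-pixel class map (here: worker vs. background).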

CC BY-NC-ND 4.0


Paper citation in several formats:
Kedilioglu, O.; Lieret, M.; Schottenhamml, J.; Würfl, T.; Blank, A.; Maier, A. and Franke, J. (2021). RGB-D-based Human Detection and Segmentation for Mobile Robot Navigation in Industrial Environments. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, ISBN 978-989-758-488-6; ISSN 2184-4321, pages 219-226. DOI: 10.5220/0010187702190226

@conference{visapp21,
author={Oguz Kedilioglu and Markus Lieret and Julia Schottenhamml and Tobias Würfl and Andreas Blank and Andreas Maier and Jörg Franke},
title={RGB-D-based Human Detection and Segmentation for Mobile Robot Navigation in Industrial Environments},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2021},
pages={219-226},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010187702190226},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP
TI - RGB-D-based Human Detection and Segmentation for Mobile Robot Navigation in Industrial Environments
SN - 978-989-758-488-6
IS - 2184-4321
AU - Kedilioglu, O.
AU - Lieret, M.
AU - Schottenhamml, J.
AU - Würfl, T.
AU - Blank, A.
AU - Maier, A.
AU - Franke, J.
PY - 2021
SP - 219
EP - 226
DO - 10.5220/0010187702190226
