
Paper

Authors: Yifei Zhang, Olivier Morel, Marc Blanchon, Ralph Seulin, Mojdeh Rastgoo and Désiré Sidibé

Affiliation: ImViA Laboratory EA 7535, ERL VIBOT CNRS 6000, Université de Bourgogne Franche-Comté, France

ISBN: 978-989-758-354-4

Keyword(s): Semantic Segmentation, Multimodal Fusion, Deep Learning, Road Scenes.

Related Ontology Subjects/Areas/Topics: Applications ; Computer Vision, Visualization and Computer Graphics ; Image and Video Analysis ; Image Formation and Preprocessing ; Multimodal and Multi-Sensor Models of Image Formation ; Pattern Recognition ; Robotics ; Segmentation and Grouping ; Software Engineering

Abstract: Deep neural networks have been frequently used for semantic scene understanding in recent years. Effective and robust segmentation of outdoor scenes is a prerequisite for the safe navigation of autonomous vehicles. In this paper, our aim is to find the best exploitation of different imaging modalities for road scene segmentation, as opposed to using a single RGB modality. We explore deep learning-based early and late fusion patterns for semantic segmentation, and propose a new multi-level feature fusion network. Given a pair of aligned multimodal images, the network achieves faster convergence and incorporates more contextual information. In particular, we introduce a first-of-its-kind dataset containing aligned raw RGB images and polarimetric images, together with manually labeled ground truth. The use of polarization cameras is a sensory augmentation that can significantly enhance image understanding capabilities, notably for detecting highly reflective areas such as glass and water. Experimental results suggest that our proposed multimodal fusion network outperforms unimodal networks and two typical fusion architectures.
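The early and late fusion patterns the abstract contrasts can be sketched in plain NumPy (a minimal illustration only; all shapes, channel counts, and the stand-in logits are hypothetical, not the paper's actual network): early fusion concatenates the aligned modalities channel-wise before a single network, while late fusion runs each modality through its own branch and merges the per-class score maps.

```python
import numpy as np

# Hypothetical aligned inputs: an RGB image (3 channels) and a
# polarimetric image (4 channels), both H x W. Values are random
# stand-ins for real sensor data.
H, W = 4, 4
rng = np.random.default_rng(0)
rgb = rng.random((3, H, W))
pol = rng.random((4, H, W))

# Early fusion: stack modalities channel-wise, then feed one network.
early_input = np.concatenate([rgb, pol], axis=0)  # shape (7, H, W)

# Late fusion: each modality gets its own branch; here the per-class
# logits each branch would produce are simulated with random arrays.
num_classes = 5
logits_rgb = rng.random((num_classes, H, W))
logits_pol = rng.random((num_classes, H, W))

# Merge branch outputs (a simple average) and take the per-pixel argmax.
late_logits = (logits_rgb + logits_pol) / 2
segmentation = late_logits.argmax(axis=0)  # (H, W) map of class indices
```

The paper's proposed network fuses features at multiple intermediate levels rather than only at the input or the output, which this sketch does not attempt to reproduce.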

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Zhang, Y.; Morel, O.; Blanchon, M.; Seulin, R.; Rastgoo, M. and Sidibé, D. (2019). Exploration of Deep Learning-based Multimodal Fusion for Semantic Road Scene Segmentation. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, ISBN 978-989-758-354-4, pages 336-343. DOI: 10.5220/0007360403360343

@conference{visapp19,
author={Zhang, Y. and Morel, O. and Blanchon, M. and Seulin, R. and Rastgoo, M. and Sidibé, D.},
title={Exploration of Deep Learning-based Multimodal Fusion for Semantic Road Scene Segmentation},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2019},
pages={336-343},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007360403360343},
isbn={978-989-758-354-4},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP
TI - Exploration of Deep Learning-based Multimodal Fusion for Semantic Road Scene Segmentation
SN - 978-989-758-354-4
AU - Zhang, Y.
AU - Morel, O.
AU - Blanchon, M.
AU - Seulin, R.
AU - Rastgoo, M.
AU - Sidibé, D.
PY - 2019
SP - 336
EP - 343
DO - 10.5220/0007360403360343
