Achieving RGB-D Level Segmentation Performance from a Single ToF Camera

Pranav Sharma, Jigyasa Katrolia, Jason Rambach, Bruno Mirbach, Didier Stricker

2024

Abstract

Depth is an important modality in computer vision, typically used as information complementary to RGB and provided by RGB-D cameras. In this work, we show that it is possible to obtain the same level of accuracy as RGB-D cameras on a semantic segmentation task using only the infrared (IR) and depth images from a single Time-of-Flight (ToF) camera. To fuse the IR and depth modalities of the ToF camera, we introduce a method utilizing depth-specific convolutions in a multi-task learning framework. In our evaluation on an in-car segmentation dataset, we demonstrate the competitiveness of our method against the more costly RGB-D approaches.
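The paper's exact architecture is not reproduced on this page, but the "depth-specific convolutions" it refers to are commonly realized as depth-aware convolutions, in which each neighbour's contribution is down-weighted by its depth difference from the centre pixel. Below is a minimal NumPy sketch of that idea; the function name, the similarity weighting exp(-alpha * |d_centre - d_neighbour|), and the `alpha` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_aware_conv(feat, depth, kernel, alpha=8.0):
    """Sketch of a depth-aware convolution (assumed formulation):
    each neighbour's feature is scaled by exp(-alpha * |depth difference|)
    relative to the centre pixel before the usual weighted sum.
    `feat` and `depth` are 2-D arrays of equal shape; `kernel` is k x k, k odd.
    """
    k = kernel.shape[0]
    r = k // 2
    h, w = feat.shape
    # Edge-replicate padding so the output keeps the input resolution.
    fp = np.pad(feat, r, mode="edge")
    dp = np.pad(depth, r, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            centre_d = dp[i + r, j + r]
            patch_f = fp[i:i + k, j:j + k]
            patch_d = dp[i:i + k, j:j + k]
            # Depth-similarity term: 1 at equal depth, -> 0 across depth edges.
            sim = np.exp(-alpha * np.abs(patch_d - centre_d))
            out[i, j] = np.sum(kernel * sim * patch_f)
    return out
```

On a constant depth map the similarity term is 1 everywhere, so the operation reduces to an ordinary convolution; across a depth discontinuity, neighbours on the far side are suppressed, which is the property that lets depth guide the aggregation of IR features.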

Paper Citation


in Harvard Style

Sharma P., Katrolia J., Rambach J., Mirbach B. and Stricker D. (2024). Achieving RGB-D Level Segmentation Performance from a Single ToF Camera. In Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM; ISBN 978-989-758-684-2, SciTePress, pages 171-178. DOI: 10.5220/0012265100003654


in Bibtex Style

@conference{icpram24,
author={Pranav Sharma and Jigyasa Katrolia and Jason Rambach and Bruno Mirbach and Didier Stricker},
title={Achieving RGB-D Level Segmentation Performance from a Single ToF Camera},
booktitle={Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2024},
pages={171--178},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012265100003654},
isbn={978-989-758-684-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - Achieving RGB-D Level Segmentation Performance from a Single ToF Camera
SN - 978-989-758-684-2
AU - Sharma P.
AU - Katrolia J.
AU - Rambach J.
AU - Mirbach B.
AU - Stricker D.
PY - 2024
SP - 171
EP - 178
DO - 10.5220/0012265100003654
PB - SciTePress
ER -