Paper: Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks

Authors: Melani Sanchez-Garcia¹; Ruben Martinez-Cantin²; Jose J. Guerrero¹

Affiliations: ¹ I3A, Universidad de Zaragoza, Spain; ² I3A, Universidad de Zaragoza, Spain and Centro Universitario de la Defensa, Zaragoza, Spain

ISBN: 978-989-758-354-4

Keyword(s): Image Understanding, Fully Convolutional Network, Visual Prosthesis, Simulated Prosthetic Vision.

Abstract: One of the biggest problems for blind people is recognizing environments. Prosthetic vision is a promising new technology for providing visual perception to people with certain kinds of blindness by transforming an image into a phosphene pattern that is sent to the implant. However, current prosthetic implants have limited ability to generate images with the detail required for understanding an environment. Computer vision plays a key role in prosthetic vision by alleviating key restrictions of blindness. In this work, we propose a new approach to building a schematic representation of indoor environments for phosphene images. We combine computer vision and deep learning techniques to extract structural features in a scene and to recognize different indoor environments, designed for prosthetic vision. Our method extracts structurally informative edges, which can underpin many computer vision tasks such as recognition and scene understanding and are key for conveying the scene structure. We also apply an object detection algorithm, using an accurate machine learning model capable of localizing and identifying multiple objects in a single image. Finally, we represent the extracted information as a phosphene pattern. The effectiveness of this approach is tested with real data from indoor environments with eleven volunteers.
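The phosphene rendering the abstract describes (mapping an image to a coarse grid of bright dots, as in simulated prosthetic vision) can be sketched as follows. This is a minimal illustration only, not the authors' pipeline: the grid resolution, the Gaussian dot profile, and the `phosphene_pattern` function are assumptions made here for demonstration.

```python
import numpy as np

def phosphene_pattern(image, grid=(10, 10), size=32):
    """Render a grayscale image as a simulated phosphene pattern.

    Each cell of a coarse grid becomes one phosphene: a Gaussian dot
    whose brightness is the mean intensity of the corresponding image
    region. Grid and dot parameters are illustrative choices.
    """
    h, w = image.shape
    rows, cols = grid
    out = np.zeros((rows * size, cols * size))
    # Gaussian brightness profile for a single phosphene dot
    y, x = np.mgrid[0:size, 0:size]
    c = (size - 1) / 2.0
    dot = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * (size / 6.0) ** 2))
    for i in range(rows):
        for j in range(cols):
            # Mean intensity of the image region mapped to this phosphene
            region = image[i * h // rows:(i + 1) * h // rows,
                           j * w // cols:(j + 1) * w // cols]
            out[i * size:(i + 1) * size,
                j * size:(j + 1) * size] = region.mean() * dot
    return out

# Example: a bright square on a dark background becomes a cluster of dots
img = np.zeros((200, 200))
img[50:150, 50:150] = 1.0
pattern = phosphene_pattern(img, grid=(10, 10))
```

In a full system the input to this stage would be the schematic representation (structural edges plus detected-object silhouettes) rather than the raw image, so that the limited phosphene budget is spent on the most informative content.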

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Sanchez-Garcia, M.; Martinez-Cantin, R. and Guerrero, J. (2019). Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, ISBN 978-989-758-354-4, pages 218-225. DOI: 10.5220/0007257602180225

@conference{visapp19,
author={Melani Sanchez{-}Garcia and Ruben Martinez{-}Cantin and Jose J. Guerrero},
title={Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2019},
pages={218-225},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007257602180225},
isbn={978-989-758-354-4},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP
TI - Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks
SN - 978-989-758-354-4
AU - Sanchez-Garcia, M.
AU - Martinez-Cantin, R.
AU - Guerrero, J.
PY - 2019
SP - 218
EP - 225
DO - 10.5220/0007257602180225
ER -
