Simultaneous Visual Context-aware Path Prediction
Authors: Haruka Iesaki 1 ; Tsubasa Hirakawa 1 ; Takayoshi Yamashita 1 ; Hironobu Fujiyoshi 1 ; Yasunori Ishii 2 ; Kazuki Kozuka 2 and Ryota Fujimura 2

Affiliations: 1 Computer Science, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi, Japan ; 2 Panasonic Corporation, Japan

Keyword(s): Path Prediction, Visual Forecasting, Predictions by Dashcams, Convolutional LSTM.

Abstract: Autonomous cars need to understand their surrounding environment to avoid accidents. Moving objects such as pedestrians and cyclists affect decisions about driving direction and behavior, and pedestrians rarely appear alone, so we must simultaneously know how many people are in the surrounding environment. Path prediction therefore requires understanding the current state. To solve this problem, we propose a path prediction method that considers the moving context obtained from dashcams. Conventional methods take the surrounding environment and positions as input and output probability values. In contrast, our approach predicts probabilistic paths using visual information. Our method is an encoder-predictor model based on convolutional long short-term memory (ConvLSTM), which extracts visual information from object coordinates and images. We examine two types of input images and two types of models. The images relate to people context, constructed from people's trimmed positions and the uncaptured background. The two model types differ in whether the decoder inputs are fed back recursively, because future images cannot be obtained. Our results show that visual context contains useful information and yields better prediction results than using coordinates alone. Moreover, we show that our method easily extends to predicting multiple people simultaneously.
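The abstract's core building block, the ConvLSTM cell (Shi et al., 2015), replaces the matrix multiplications in the LSTM gates with convolutions so the hidden state keeps its spatial layout. This is not the authors' implementation; it is a minimal numpy sketch of a generic ConvLSTM cell, with hypothetical channel counts and kernel size, and without the encoder-predictor wrapping or the bias terms a full model would use.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, w):
    """2-D cross-correlation with 'same' zero padding, as in deep-learning
    conv layers. x: (C_in, H, W), w: (C_out, C_in, k, k) -> (C_out, H, W)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for oc in range(c_out):
        for i in range(H):
            for j in range(W):
                out[oc, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[oc])
    return out

class ConvLSTMCell:
    """Minimal ConvLSTM cell: the four LSTM gates (input, forget, output,
    candidate) are computed from convolutions over the input frame and the
    previous hidden state, so spatial structure is preserved over time."""

    def __init__(self, c_in, c_hid, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked kernel per source for all four gates (i, f, o, g).
        self.wx = rng.normal(0.0, 0.1, (4 * c_hid, c_in, k, k))
        self.wh = rng.normal(0.0, 0.1, (4 * c_hid, c_hid, k, k))

    def step(self, x, h, c):
        """One time step. x: (C_in,H,W); h, c: (C_hid,H,W)."""
        z = conv2d_same(x, self.wx) + conv2d_same(h, self.wh)
        zi, zf, zo, zg = np.split(z, 4, axis=0)
        i, f, o, g = sigmoid(zi), sigmoid(zf), sigmoid(zo), np.tanh(zg)
        c_new = f * c + i * g          # gated cell-state update
        h_new = o * np.tanh(c_new)     # spatial hidden state
        return h_new, c_new
```

In an encoder-predictor arrangement like the one the abstract describes, one stack of such cells would consume the observed frames and coordinates, and a second stack would roll the state forward to produce the predicted path distribution; the "recursive" model variant would feed its own previous output back in at each future step, since future images are unavailable.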

CC BY-NC-ND 4.0


Paper citation in several formats:
Iesaki, H.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H.; Ishii, Y.; Kozuka, K. and Fujimura, R. (2020). Simultaneous Visual Context-aware Path Prediction. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, ISBN 978-989-758-402-2; ISSN 2184-4321, pages 741-748. DOI: 10.5220/0008921307410748

@conference{visapp20,
author={Haruka Iesaki and Tsubasa Hirakawa and Takayoshi Yamashita and Hironobu Fujiyoshi and Yasunori Ishii and Kazuki Kozuka and Ryota Fujimura},
title={Simultaneous Visual Context-aware Path Prediction},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2020},
pages={741-748},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008921307410748},
isbn={978-989-758-402-2},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP
TI - Simultaneous Visual Context-aware Path Prediction
SN - 978-989-758-402-2
IS - 2184-4321
AU - Iesaki, H.
AU - Hirakawa, T.
AU - Yamashita, T.
AU - Fujiyoshi, H.
AU - Ishii, Y.
AU - Kozuka, K.
AU - Fujimura, R.
PY - 2020
SP - 741
EP - 748
DO - 10.5220/0008921307410748
