Action Anticipation from Multimodal Data

Authors: Tiziana Rotondo 1; Giovanni Maria Farinella 2; Valeria Tomaselli 3 and Sebastiano Battiato 2

Affiliations: 1 Department of Mathematics and Computer Science, University of Catania, Italy; 2 Department of Mathematics and Computer Science, University of Catania, Italy and ICAR-CNR, Palermo, Italy; 3 STMicroelectronics, Catania, Italy

Keyword(s): Action Anticipation, Multimodal Learning, Siamese Network.

Related Ontology Subjects/Areas/Topics: Computer Vision, Visualization and Computer Graphics; Image Formation and Preprocessing; Image Formation, Acquisition Devices and Sensors; Multimodal and Multi-Sensor Models of Image Formation

Abstract: The idea of multi-sensor data fusion is to combine data coming from different sensors to provide more accurate and complementary information for solving a specific task. Our goal is to build a shared representation of data coming from different domains, such as images, audio signals, heart rate, acceleration, etc., in order to anticipate the daily activities of a user wearing multimodal sensors. To this aim, we consider the Stanford-ECM Dataset, which contains synchronized data acquired with different sensors: video, acceleration and heart rate signals. The dataset is adapted to our action prediction task by identifying the transitions from the generic “Unknown” class to a specific “Activity”. We discuss and compare a Siamese Network with a Multi-Layer Perceptron and a 1D CNN, where the input is an unknown observation and the output is the next activity to be observed. The feature representations obtained with the considered deep architectures are classified with SVM or KNN classifiers. Experimental results point out that prediction from multimodal data seems a feasible task, suggesting that multimodality improves both classification and prediction. Nevertheless, the task of reliably predicting next actions is still open and requires further investigation, as well as the availability of multimodal datasets specifically built for prediction purposes.
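The pipeline the abstract describes (a shared embedding learned by a Siamese-style network over multimodal features, with the resulting representations classified by SVM or KNN) can be illustrated with a minimal sketch. The code below is an illustration only, not the authors' implementation: the branch architecture, the feature dimensions (1024-d video, 64-d sensor features), the embedding size, and the contrastive margin are all assumptions.

# Minimal sketch of a Siamese-style shared embedding for two modalities.
# All dimensions and hyperparameters are hypothetical, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingBranch(nn.Module):
    """Maps one modality's feature vector into the shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        # L2-normalize so distances in the shared space are comparable.
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same, margin: float = 1.0):
    """Pull matching pairs together; push non-matching pairs beyond the margin."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Hypothetical dimensions: 1024-d video features, 64-d acceleration/heart-rate features.
video_branch = EmbeddingBranch(in_dim=1024)
sensor_branch = EmbeddingBranch(in_dim=64)

video_feats = torch.randn(8, 1024)        # batch of "unknown" video observations
sensor_feats = torch.randn(8, 64)         # synchronized sensor observations
same = torch.randint(0, 2, (8,)).float()  # 1 if the pair shares the upcoming activity

loss = contrastive_loss(video_branch(video_feats), sensor_branch(sensor_feats), same)
loss.backward()

In the same spirit as the paper's pipeline, the learned embeddings would then be fed to an off-the-shelf classifier such as scikit-learn's SVC or KNeighborsClassifier to predict the next activity.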

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Rotondo, T.; Farinella, G.; Tomaselli, V. and Battiato, S. (2019). Action Anticipation from Multimodal Data. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 4: VISAPP; ISBN 978-989-758-354-4; ISSN 2184-4321, SciTePress, pages 154-161. DOI: 10.5220/0007379001540161

@conference{visapp19,
author={Tiziana Rotondo and Giovanni Maria Farinella and Valeria Tomaselli and Sebastiano Battiato},
title={Action Anticipation from Multimodal Data},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 4: VISAPP},
year={2019},
pages={154-161},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007379001540161},
isbn={978-989-758-354-4},
issn={2184-4321},
}

TY  - CONF
JO  - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 4: VISAPP
TI  - Action Anticipation from Multimodal Data
SN  - 978-989-758-354-4
IS  - 2184-4321
AU  - Rotondo, T.
AU  - Farinella, G.
AU  - Tomaselli, V.
AU  - Battiato, S.
PY  - 2019
SP  - 154
EP  - 161
DO  - 10.5220/0007379001540161
PB  - SciTePress
ER  -