Multi-sensor Data Fusion for Wearable Devices

Tiziana Rotondo

2018

Abstract

Real-time information comes from multiple sources such as wearable sensors, audio signals, GPS, etc. The idea of multi-sensor data fusion is to combine the data coming from different sensors to provide more accurate information than any single sensor could alone. To contribute to ongoing research in this area, the goal of my research is to build a shared representation between data coming from different domains, such as images, audio signals, heart rate, acceleration, etc., in order to predict daily activities. In the state of the art, these problems are treated individually: many papers, such as (Lan et al., 2014; Ma et al., 2016), predict daily activity from video or static images; others, such as (Ngiam et al., 2011; Srivastava and Salakhutdinov, 2014), build a shared representation and then reconstruct the inputs or a missing modality; and (Nakamura et al., 2017) classifies from multimodal data.
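
To make the shared-representation idea concrete, the following is a minimal sketch (in PyTorch) of one plausible architecture: a separate encoder per modality projects each input into a common embedding space, and a classifier predicts the daily activity from the fused embedding. All layer sizes, modality dimensions, class counts, and names are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a shared representation across modalities.
# All dimensions and names below are assumptions for illustration.
import torch
import torch.nn as nn

EMBED_DIM = 64        # size of the shared representation (assumed)
NUM_ACTIVITIES = 10   # number of daily-activity classes (assumed)

# Flattened per-modality feature sizes (assumed).
MODALITY_DIMS = {"image": 2048, "audio": 128,
                 "heart_rate": 16, "acceleration": 96}

class SharedRepresentation(nn.Module):
    def __init__(self):
        super().__init__()
        # One small encoder per modality, all projecting into EMBED_DIM.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                nn.Linear(256, EMBED_DIM))
            for name, dim in MODALITY_DIMS.items()
        })
        # Classifier over the fused (averaged) shared embedding.
        self.classifier = nn.Linear(EMBED_DIM, NUM_ACTIVITIES)

    def forward(self, inputs):
        # Encode whichever modalities are present and average the
        # embeddings, so a missing modality can simply be omitted.
        embeddings = [self.encoders[name](x) for name, x in inputs.items()]
        shared = torch.stack(embeddings).mean(dim=0)
        return self.classifier(shared)

if __name__ == "__main__":
    model = SharedRepresentation()
    batch = {name: torch.randn(4, dim) for name, dim in MODALITY_DIMS.items()}
    print(model(batch).shape)  # torch.Size([4, 10])

Averaging the per-modality embeddings is only one simple fusion choice, but it degrades gracefully when a modality is missing at inference time, which fits the missing-modality theme in the works cited above.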

Paper Citation


in Harvard Style

Rotondo T. (2018). Multi-sensor Data Fusion for Wearable Devices. In Doctoral Consortium - DCETE, pages 22-28.


in BibTeX Style

@conference{dcete18,
author={Tiziana Rotondo},
title={Multi-sensor Data Fusion for Wearable Devices},
booktitle={Doctoral Consortium - DCETE},
year={2018},
pages={22-28},
publisher={SciTePress},
organization={INSTICC},
}


in EndNote Style

TY - CONF
JO - Doctoral Consortium - DCETE
TI - Multi-sensor Data Fusion for Wearable Devices
AU - Rotondo T.
PY - 2018
SP - 22
EP - 28