Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study

Silvestre Malta, Pedro Pinto, Manuel Veiga

2021

Abstract

The process of building and deploying Machine Learning (ML) models includes several phases, and the training phase is one of the most time-consuming. ML models with time series datasets can be used to predict users' positions, behaviours, or mobility patterns. Such mobility implies paths crossing well-defined positions, and in these cases syntactic similarity can be used to reduce model training time. This paper uses the case study of a Mobile Network Operator (MNO) where user mobility is predicted through ML, and the use of syntactic similarity with the Word2Vec (W2V) framework is tested with Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN) models. Experimental results show that by using the W2V framework in these architectures, training time is reduced on average by between 22% and 43%. An improvement of about 3 percentage points, on average, in the validation accuracy of mobility prediction is also obtained.
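The idea described in the abstract — treating well-defined positions along user paths as "words" and learning embeddings from their co-occurrence — can be sketched with a minimal skip-gram trainer. The paper itself uses the Word2Vec framework; the toy cell-tower IDs, embedding dimension, window size, and learning rate below are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

# Toy mobility traces: each path is a sequence of cell-tower IDs (assumed names).
paths = [["A", "B", "C", "D"], ["A", "B", "C", "E"], ["D", "C", "B", "A"]]

vocab = sorted({t for p in paths for t in p})
idx = {t: i for i, t in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension (illustrative)

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # centre-position embeddings
W_out = rng.normal(scale=0.1, size=(V, D))  # context-position embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Skip-gram training: predict neighbouring positions from the centre position.
lr, window = 0.05, 1
for epoch in range(200):
    for path in paths:
        for i, centre in enumerate(path):
            c = idx[centre]
            for j in range(max(0, i - window), min(len(path), i + window + 1)):
                if j == i:
                    continue
                o = idx[path[j]]                  # observed context position
                p = softmax(W_out @ W_in[c])      # predicted context distribution
                grad = p.copy()
                grad[o] -= 1.0                    # cross-entropy gradient on scores
                g_in = W_out.T @ grad             # gradient w.r.t. centre embedding
                W_out -= lr * np.outer(grad, W_in[c])
                W_in[c] -= lr * g_in

# Dense position embeddings, one D-dimensional vector per position token.
embeddings = {t: W_in[idx[t]] for t in vocab}
```

In the setting the abstract describes, such dense vectors would replace sparse position encodings as inputs to the RNN, GRU, LSTM, and CNN models, shrinking the input dimensionality and, per the reported results, shortening training.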

Paper Citation


in Harvard Style

Malta S., Pinto P. and Veiga M. (2021). Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study. In Proceedings of the 2nd International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA, ISBN 978-989-758-526-5, pages 93-100. DOI: 10.5220/0010515700930100


in Bibtex Style

@conference{delta21,
author={Silvestre Malta and Pedro Pinto and Manuel Veiga},
title={Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study},
booktitle={Proceedings of the 2nd International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA},
year={2021},
pages={93-100},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010515700930100},
isbn={978-989-758-526-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 2nd International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA
TI - Using Syntactic Similarity to Shorten the Training Time of Deep Learning Models using Time Series Datasets: A Case Study
SN - 978-989-758-526-5
AU - Malta S.
AU - Pinto P.
AU - Veiga M.
PY - 2021
SP - 93
EP - 100
DO - 10.5220/0010515700930100