An Enhanced Adversarial Network with Combined Latent Features for Spatio-temporal Facial Affect Estimation in the Wild

Authors: Decky Aspandi 1,2; Federico Sukno 2; Björn Schuller 1,3; Xavier Binefa 2

Affiliations: 1 Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany ; 2 Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain ; 3 GLAM – Group on Language, Audio, & Music, Imperial College London, U.K.

Keyword(s): Affective Computing, Temporal Modelling, Adversarial Learning.

Abstract: Affective Computing has recently attracted the attention of the research community, due to its numerous applications in diverse areas. In this context, the emergence of video-based data makes it possible to enrich the widely used spatial features with temporal information. However, such spatio-temporal modelling often results in very high-dimensional feature spaces and large volumes of data, making training difficult and time consuming. This paper addresses these shortcomings by proposing a novel model that efficiently extracts both spatial and temporal features of the data by means of its enhanced temporal modelling based on latent features. Our proposed model consists of three major networks, coined Generator, Discriminator, and Combiner, which are trained in an adversarial setting combined with curriculum learning to enable our adaptive attention modules. In our experiments, we show the effectiveness of our approach by reporting competitive results on both the AFEW-VA and SEWA datasets, suggesting that temporal modelling improves the affect estimates in both qualitative and quantitative terms. Furthermore, we find that the inclusion of attention mechanisms leads to the largest accuracy improvements, as the attention weights correlate well with the appearance of facial movements, in terms of both temporal localisation and intensity. Finally, we observe a sequence length of around 160 ms to be optimal for temporal modelling, which is consistent with other relevant findings utilising similar lengths.
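The temporal attention the abstract describes can be illustrated with a minimal sketch: per-frame latent features are weighted by a softmax over relevance scores and combined into one pooled representation. This is a hypothetical toy example, not the authors' implementation; the dot-product scoring, the `query` vector, and the 4-frame window (roughly 160 ms at an assumed 25 fps) are illustrative choices.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(latents, query):
    # latents: (T, D) per-frame latent features; query: (D,) learned vector
    scores = latents @ query      # (T,) relevance score per frame
    weights = softmax(scores)     # temporal attention weights, sum to 1
    pooled = weights @ latents    # (D,) attention-weighted combination
    return pooled, weights

rng = np.random.default_rng(0)
T, D = 4, 8                       # 4 frames ~ 160 ms at 25 fps (illustrative)
latents = rng.standard_normal((T, D))
query = rng.standard_normal(D)
pooled, weights = attention_pool(latents, query)
```

In the paper's setting, such weights would be inspected per frame; peaks in `weights` would then be expected to coincide with visible facial movements.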

CC BY-NC-ND 4.0


Paper citation in several formats:
Aspandi, D.; Sukno, F.; Schuller, B. and Binefa, X. (2021). An Enhanced Adversarial Network with Combined Latent Features for Spatio-temporal Facial Affect Estimation in the Wild. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, ISBN 978-989-758-488-6; ISSN 2184-4321, pages 172-181. DOI: 10.5220/0010332001720181

@conference{visapp21,
author={Decky Aspandi and Federico Sukno and Björn Schuller and Xavier Binefa},
title={An Enhanced Adversarial Network with Combined Latent Features for Spatio-temporal Facial Affect Estimation in the Wild},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2021},
pages={172-181},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010332001720181},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP
TI - An Enhanced Adversarial Network with Combined Latent Features for Spatio-temporal Facial Affect Estimation in the Wild
SN - 978-989-758-488-6
IS - 2184-4321
AU - Aspandi, D.
AU - Sukno, F.
AU - Schuller, B.
AU - Binefa, X.
PY - 2021
SP - 172
EP - 181
DO - 10.5220/0010332001720181
ER - 
