Facial Emotion Expression Corpora for Training Game Character Neural Network Models

Sheldon Schiffer, Samantha Zhang, Max Levine

2022

Abstract

The emergence of photorealistic and cinematic non-player character (NPC) animation presents new challenges for video game developers. Game players' expectations of cinematic acting styles bring a more sophisticated aesthetic to the representation of social interaction. New methods can streamline workflow by integrating actor-driven character design into the development of game character AI and animation. A workflow that tracks actor performance through to final neural network (NN) design depends on a rigorous method for producing single-actor video corpora from which to train emotion AI NN models. While numerous video corpora have been developed to study emotion elicitation of the face, to test theoretical models, and to train neural networks to recognize emotion, developing single-actor corpora to train NNs for NPCs in video games is uncommon. A class of facial emotion recognition (FER) products has enabled the production of single-actor video corpora that use emotion analysis data. This paper introduces a single-actor game character corpus workflow for game character developers. The proposed method uses a single-actor video corpus and dataset with the intent to train and implement an NN in an off-the-shelf video game engine for facial animation of an NPC. The efficacy of using an NN-driven animation controller has already been demonstrated (Schiffer, 2021; Kozasa et al., 2006). This paper focuses on using a single-actor video corpus for the purpose of training an NN-driven animation controller.


Paper Citation


in Harvard Style

Schiffer S., Zhang S. and Levine M. (2022). Facial Emotion Expression Corpora for Training Game Character Neural Network Models. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: HUCAPP, ISBN 978-989-758-555-5, pages 197-208. DOI: 10.5220/0010874700003124


in Bibtex Style

@conference{hucapp22,
author={Sheldon Schiffer and Samantha Zhang and Max Levine},
title={Facial Emotion Expression Corpora for Training Game Character Neural Network Models},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: HUCAPP},
year={2022},
pages={197-208},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010874700003124},
isbn={978-989-758-555-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: HUCAPP
TI - Facial Emotion Expression Corpora for Training Game Character Neural Network Models
SN - 978-989-758-555-5
AU - Schiffer S.
AU - Zhang S.
AU - Levine M.
PY - 2022
SP - 197
EP - 208
DO - 10.5220/0010874700003124
ER -