
Authors: Sheldon Schiffer 1, Samantha Zhang 2 and Max Levine 3

Affiliations: 1 Department of Computer Science, Occidental College, Los Angeles, California, U.S.A.; 2 Department of Computer Science, Cornell University, Ithaca, NY, U.S.A.; 3 Department of Computer Science, University of North Carolina Asheville, Asheville, NC, U.S.A.

Keyword(s): Facial Emotion Corpora, Video Game Workflow, Non-player Characters, Video Games, Affective Computing, Emotion AI, NPCs, Neural Networks.

Abstract: The emergence of photorealistic and cinematic non-player character (NPC) animation presents new challenges for video game developers. Game player expectations of cinematic acting styles bring a more sophisticated aesthetic to the representation of social interaction. New methods can streamline workflow by integrating actor-driven character design into the development of game character AI and animation. A workflow that tracks actor performance through to final neural network (NN) design depends on a rigorous method of producing single-actor video corpora from which to train emotion AI NN models. While numerous video corpora have been developed to study emotion elicitation of the face, from which to test theoretical models and train neural networks to recognize emotion, developing single-actor corpora to train NNs of NPCs in video games is uncommon. A class of facial emotion recognition (FER) products has enabled production of single-actor video corpora that use emotion analysis data. This paper introduces a single-actor game character corpus workflow for game character developers. The proposed method uses a single-actor video corpus and dataset with the intent to train and implement a NN in an off-the-shelf video game engine for facial animation of an NPC. The efficacy of using a NN-driven animation controller has already been demonstrated (Schiffer, 2021; Kozasa et al., 2006). This paper focuses on using a single-actor video corpus for the purpose of training a NN-driven animation controller.

CC BY-NC-ND 4.0


Paper citation in several formats:
Schiffer, S.; Zhang, S. and Levine, M. (2022). Facial Emotion Expression Corpora for Training Game Character Neural Network Models. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - HUCAPP; ISBN 978-989-758-555-5; ISSN 2184-4321, SciTePress, pages 197-208. DOI: 10.5220/0010874700003124

@conference{hucapp22,
author={Sheldon Schiffer and Samantha Zhang and Max Levine},
title={Facial Emotion Expression Corpora for Training Game Character Neural Network Models},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - HUCAPP},
year={2022},
pages={197-208},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010874700003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - HUCAPP
TI - Facial Emotion Expression Corpora for Training Game Character Neural Network Models
SN - 978-989-758-555-5
IS - 2184-4321
AU - Schiffer, S.
AU - Zhang, S.
AU - Levine, M.
PY - 2022
SP - 197
EP - 208
DO - 10.5220/0010874700003124
PB - SciTePress