
Paper: Rig-space Neural Rendering: Compressing the Rendering of Characters for Previs, Real-time Animation and High-quality Asset Re-use

Authors: Dominik Borer 1,2; Lu Yuhang 1; Laura Wülfroth 1; Jakob Buhmann 2 and Martin Guay 2

Affiliations: 1 Computer Graphics Laboratory, ETH Zürich, Switzerland; 2 Disney Research Studios, Zürich, Switzerland

Keyword(s): Asset Re-use, Neural Rendering, Real-time Rendering.

Abstract: Movie productions use high-resolution 3d characters with complex proprietary rigs to create the highest quality images possible for large displays. Unfortunately, these 3d assets are typically not compatible with the real-time graphics engines used for games, mixed reality and real-time pre-visualization. Consequently, the 3d characters need to be re-modeled and re-rigged for these new applications, requiring weeks of work and artistic approval. Our solution to this problem is to learn a compact image-based rendering of the original 3d character, conditioned directly on the rig parameters. Our idea is to render the character in many different poses and views, and to train a deep neural network to render high-resolution images directly from the rig parameters. Many neural rendering techniques have been proposed that render from 2d skeletons, or from geometry and UV maps. However, these require additional steps to create the input structure (e.g. a low-res mesh), often suffer from ambiguities between front and back (e.g. 2d skeletons) and, most importantly, do not preserve the animator's workflow of manipulating specific types of rigs, nor the real-time game engine pipeline of interpolating rig parameters. In contrast, our model learns to render an image directly from the rig parameters at high resolution. We extend our architecture to support dynamic re-lighting and composition with other objects in the scene. By generating normals, depth, albedo and a mask, we can produce occlusion depth tests and lighting effects through the normals.
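The last point of the abstract — using predicted normals, depth, albedo and a mask for occlusion depth tests and relighting — can be sketched as a standard deferred-compositing step. The sketch below is illustrative only: the function name, buffer shapes and Lambertian shading model are assumptions, and in the paper these buffers would come from the trained network rather than be hand-built.

```python
import numpy as np

def composite_character(albedo, normals, depth, mask,
                        scene_rgb, scene_depth, light_dir):
    """Composite a neural-rendered character into a scene (illustrative sketch).

    albedo:      (H, W, 3) predicted base color in [0, 1]
    normals:     (H, W, 3) predicted unit normals
    depth:       (H, W)    predicted character depth
    mask:        (H, W)    predicted coverage in [0, 1]
    scene_rgb:   (H, W, 3) background image
    scene_depth: (H, W)    background depth buffer
    light_dir:   (3,)      unit direction toward the light
    """
    # Relighting: simple Lambertian shading from the predicted normals.
    ndotl = np.clip(normals @ light_dir, 0.0, 1.0)           # (H, W)
    shaded = albedo * ndotl[..., None]                        # (H, W, 3)

    # Occlusion depth test: character wins only where it is closer.
    visible = (depth < scene_depth).astype(np.float64) * mask

    # Blend character over the background using the visibility weight.
    return scene_rgb * (1.0 - visible[..., None]) + shaded * visible[..., None]
```

Because shading happens after the network pass, the light direction can change every frame without re-running the model — which is what makes the generated normal buffer useful for dynamic re-lighting.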

CC BY-NC-ND 4.0


Paper citation in several formats:
Borer, D.; Yuhang, L.; Wülfroth, L.; Buhmann, J. and Guay, M. (2021). Rig-space Neural Rendering: Compressing the Rendering of Characters for Previs, Real-time Animation and High-quality Asset Re-use. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP, ISBN 978-989-758-488-6; ISSN 2184-4321, pages 300-307. DOI: 10.5220/0010334503000307

@conference{grapp21,
author={Dominik Borer and Lu Yuhang and Laura Wülfroth and Jakob Buhmann and Martin Guay},
title={Rig-space Neural Rendering: Compressing the Rendering of Characters for Previs, Real-time Animation and High-quality Asset Re-use},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP},
year={2021},
pages={300-307},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010334503000307},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP
TI - Rig-space Neural Rendering: Compressing the Rendering of Characters for Previs, Real-time Animation and High-quality Asset Re-use
SN - 978-989-758-488-6
IS - 2184-4321
AU - Borer, D.
AU - Yuhang, L.
AU - Wülfroth, L.
AU - Buhmann, J.
AU - Guay, M.
PY - 2021
SP - 300
EP - 307
DO - 10.5220/0010334503000307
ER -
