
Paper: Single-view 3D Body and Cloth Reconstruction under Complex Poses

Authors: Nicolas Ugrinovic, Albert Pumarola, Alberto Sanfeliu and Francesc Moreno-Noguer

Affiliation: Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain

Keyword(s): 3D Human Reconstruction, Augmented/Virtual Reality, Deep Networks.

Abstract: Recent advances in 3D human shape reconstruction from single images have shown impressive results, leveraging deep networks that model the so-called implicit function to learn the occupancy status of arbitrarily dense 3D points in space. However, while current algorithms based on this paradigm, like PIFuHD (Saito et al., 2020), are able to estimate accurate geometry of the human shape and clothes, they require high-resolution input images and are not able to capture complex body poses. Most training and evaluation is performed on 1k-resolution images of humans standing in front of the camera under neutral body poses. In this paper, we leverage publicly available data to extend existing implicit function-based models to deal with images of humans that can have arbitrary poses and self-occluded limbs. We argue that the representation power of the implicit function is not sufficient to simultaneously model details of the geometry and of the body pose. We therefore propose a coarse-to-fine approach in which we first learn an implicit function that maps the input image to a 3D body shape with a low level of detail, but which correctly fits the underlying human pose, despite its complexity. We then learn a displacement map, conditioned on the smoothed surface and on the input image, which encodes the high-frequency details of the clothes and body. In the experimental section, we show that this coarse-to-fine strategy represents a very good trade-off between shape detail and pose correctness, comparing favorably to the most recent state-of-the-art approaches. Our code will be made publicly available.
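
The coarse-to-fine pipeline described in the abstract can be summarized at a high level in the sketch below. It is illustrative only and is not the authors' released implementation; all names (CoarseImplicitFunction, DisplacementNet, feat_dim, the MLP widths) are assumptions. A pixel-aligned implicit function predicts occupancy for 3D query points given image features, and a second network predicts per-vertex displacements that add the high-frequency cloth and body detail back onto the smooth coarse surface.

# Minimal sketch (assumed names, not the authors' released code) of the
# coarse-to-fine idea: a coarse implicit function predicts occupancy of 3D
# query points from pixel-aligned image features, and a second network
# predicts per-vertex displacements encoding high-frequency detail.
import torch
import torch.nn as nn


class CoarseImplicitFunction(nn.Module):
    """Maps (pixel-aligned image feature, 3D point) -> occupancy in [0, 1]."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, N, feat_dim) features sampled at each point's 2D projection
        # points:   (B, N, 3) query points in camera space
        return self.mlp(torch.cat([img_feat, points], dim=-1)).squeeze(-1)


class DisplacementNet(nn.Module):
    """Predicts per-vertex offsets conditioned on the smooth surface and the image."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3 + 3, 256), nn.ReLU(),  # image feature + vertex + normal
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),                            # 3D displacement per vertex
        )

    def forward(self, img_feat, vertices, normals):
        return self.mlp(torch.cat([img_feat, vertices, normals], dim=-1))


if __name__ == "__main__":
    B, N, F = 2, 1024, 256
    coarse = CoarseImplicitFunction(F)
    refine = DisplacementNet(F)

    # Coarse stage: occupancy for N query points per image.
    occ = coarse(torch.randn(B, N, F), torch.randn(B, N, 3))  # (B, N)

    # Fine stage: after extracting a smooth mesh from the occupancy field
    # (e.g. with marching cubes), displace its vertices to recover detail.
    verts, norms = torch.randn(B, N, 3), torch.randn(B, N, 3)
    detailed_verts = verts + refine(torch.randn(B, N, F), verts, norms)
    print(occ.shape, detailed_verts.shape)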

CC BY-NC-ND 4.0


Paper citation in several formats:
Ugrinovic, N.; Pumarola, A.; Sanfeliu, A. and Moreno-Noguer, F. (2022). Single-view 3D Body and Cloth Reconstruction under Complex Poses. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP; ISBN 978-989-758-555-5; ISSN 2184-4321, SciTePress, pages 192-203. DOI: 10.5220/0010896100003124

@conference{visapp22,
author={Nicolas Ugrinovic and Albert Pumarola and Alberto Sanfeliu and Francesc Moreno{-}Noguer},
title={Single-view 3D Body and Cloth Reconstruction under Complex Poses},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP},
year={2022},
pages={192-203},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010896100003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP
TI - Single-view 3D Body and Cloth Reconstruction under Complex Poses
SN - 978-989-758-555-5
IS - 2184-4321
AU - Ugrinovic, N.
AU - Pumarola, A.
AU - Sanfeliu, A.
AU - Moreno-Noguer, F.
PY - 2022
SP - 192
EP - 203
DO - 10.5220/0010896100003124
PB - SciTePress
ER -