UMVpose++: Unsupervised Multi-View Multi-Person 3D Pose Estimation Using Ground Point Matching

Authors: Diógenes Silva 1; João Lima 1,2; Diego Thomas 3; Hideaki Uchiyama 4 and Veronica Teichrieb 1

Affiliations: 1 Voxar Labs, Centro de Informática, Universidade Federal de Pernambuco, Recife, PE, Brazil; 2 Visual Computing Lab, Departamento de Computação, Universidade Federal Rural de Pernambuco, Recife, PE, Brazil; 3 Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan; 4 Graduate School of Science and Technology, Nara Institute of Science and Technology, Nara, Japan

Keyword(s): 3D Human Pose Estimation, Unsupervised Learning, Deep Learning, Reprojection Error.

Abstract: We present UMVpose++ to address the problem of 3D pose estimation of multiple persons in a multi-view scenario. Unlike the most recent state-of-the-art methods, which are supervised, our approach does not need labeled data to perform 3D pose estimation; generating 3D annotations is costly and error-prone. Our approach uses a plane sweep method to generate the 3D pose estimates: we define one view as the target and the remainder as reference views, and we estimate the depth of each 2D skeleton in the target view to obtain our 3D poses. Instead of comparing them with ground-truth poses, we project the estimated 3D poses onto the reference views and compare the 2D projections with the 2D poses obtained using an off-the-shelf method. 2D poses of the same pedestrian obtained from the target and reference views must be matched to allow this comparison. By performing a matching process based on ground points, we identify the corresponding 2D poses and compare them with the respective projections. Furthermore, we propose a new reprojection loss based on the smooth L1 norm. We evaluated the proposed method on the publicly available Campus dataset. We obtained better accuracy than state-of-the-art unsupervised methods, achieving 0.5 percentage points above the best geometric method. We also outperform some state-of-the-art supervised methods, and our results are comparable with the best supervised method, at only 0.2 percentage points below.
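
To make the reprojection objective concrete, below is a minimal PyTorch sketch of a smooth-L1 reprojection loss of the kind the abstract describes: estimated 3D poses are projected into a reference view and compared against the matched off-the-shelf 2D detections. The function names, tensor shapes, and the visibility mask are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def project_points(points_3d, K, R, t):
    """Project 3D joints (N, J, 3) into a reference view using a pinhole
    camera: intrinsics K (3, 3), rotation R (3, 3), translation t (3,).
    Returns pixel coordinates (N, J, 2)."""
    cam = points_3d @ R.T + t                   # world -> camera frame
    img = cam @ K.T                             # camera -> homogeneous pixels
    return img[..., :2] / img[..., 2:].clamp(min=1e-6)

def reprojection_loss(points_3d, K, R, t, poses_2d, visibility):
    """Smooth-L1 reprojection loss between the projections of the
    estimated 3D poses and the matched 2D detections poses_2d (N, J, 2).
    visibility (N, J) masks joints the 2D detector did not find."""
    proj = project_points(points_3d, K, R, t)   # (N, J, 2)
    per_joint = F.smooth_l1_loss(proj, poses_2d, reduction="none").sum(-1)
    return (per_joint * visibility).sum() / visibility.sum().clamp(min=1)

Because this loss compares projections against 2D detections in the reference views rather than against ground-truth 3D annotations, it can supervise the estimated depths without any labeled 3D data, which is the unsupervised training signal the abstract describes.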

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Silva, D.; Lima, J.; Thomas, D.; Uchiyama, H. and Teichrieb, V. (2023). UMVpose++: Unsupervised Multi-View Multi-Person 3D Pose Estimation Using Ground Point Matching. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 607-614. DOI: 10.5220/0011668800003417

@conference{visapp23,
author={Diógenes Silva and João Lima and Diego Thomas and Hideaki Uchiyama and Veronica Teichrieb},
title={UMVpose++: Unsupervised Multi-View Multi-Person 3D Pose Estimation Using Ground Point Matching},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={607-614},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011668800003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - UMVpose++: Unsupervised Multi-View Multi-Person 3D Pose Estimation Using Ground Point Matching
SN - 978-989-758-634-7
SN - 2184-4321
AU - Silva, D.
AU - Lima, J.
AU - Thomas, D.
AU - Uchiyama, H.
AU - Teichrieb, V.
PY - 2023
SP - 607
EP - 614
DO - 10.5220/0011668800003417
PB - SciTePress
ER -