

Attention-Based Shape and Gait Representations Learning for Video-Based Cloth-Changing Person Re-Identification

Authors: Vuong Nguyen; Samiha Mirza; Pranav Mantini and Shishir Shah

Affiliation: Quantitative Imaging Lab, Dept. of Computer Science, University of Houston, Houston, Texas, U.S.A.

Keyword(s): Video-Based Person Re-Identification, Cloth-Changing Person Re-Identification, Gait Recognition, Graph Attention Networks, Spatial-Temporal Graph Learning.

Abstract: Current state-of-the-art Video-based Person Re-Identification (Re-ID) primarily relies on appearance features extracted by deep learning models. These methods are unsuitable for long-term analysis in real-world scenarios where persons have changed clothes, making appearance information unreliable. In this work, we address the practical problem of Video-based Cloth-Changing Person Re-ID (VCCRe-ID) by proposing “Attention-based Shape and Gait Representations Learning” (ASGL) for VCCRe-ID. Our ASGL framework improves Re-ID performance under clothing variations by learning clothing-invariant gait cues using a Spatial-Temporal Graph Attention Network (ST-GAT). Given the 3D-skeleton-based spatial-temporal graph, our proposed ST-GAT comprises multi-head attention modules, which are able to enhance the robustness of gait embeddings under viewpoint changes and occlusions. The ST-GAT amplifies the important motion ranges and reduces the influence of noisy poses. Then, the multi-head learning module effectively preserves beneficial local temporal dynamics of movement. We also boost the discriminative power of person representations by learning body shape cues using a GAT. Experiments on two large-scale VCCRe-ID datasets demonstrate that our proposed framework outperforms state-of-the-art methods by 12.2% in rank-1 accuracy and 7.0% in mAP.
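The abstract's core mechanism — multi-head graph attention over skeleton joints, where attention is restricted to skeleton edges so that noisy poses are down-weighted — can be sketched in a few lines of numpy. This is an illustrative sketch only (function names, feature sizes, and the toy 5-joint chain are our assumptions), not the authors' ST-GAT implementation, and it omits the temporal dimension:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gat_head(X, A, W, a_src, a_dst):
    """One GAT-style attention head over a joint graph (hypothetical sketch).

    X: (N, F) per-joint features; A: (N, N) adjacency with self-loops;
    W: (F, F') projection; a_src, a_dst: (F',) attention vectors.
    """
    H = X @ W                                             # project joint features
    logits = H @ a_src[:, None] + (H @ a_dst[:, None]).T  # pairwise scores (N, N)
    logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU, as in standard GAT
    logits = np.where(A > 0, logits, -1e9)                # attend only along skeleton edges
    alpha = softmax(logits, axis=1)                       # attention weights per joint
    return np.maximum(alpha @ H, 0.0)                     # aggregate neighbours + ReLU

def multi_head_gat(X, A, heads):
    """Concatenate several attention heads (multi-head attention)."""
    return np.concatenate([gat_head(X, A, *h) for h in heads], axis=1)

# Toy example: a 5-joint chain (a simplified limb), 2 heads.
rng = np.random.default_rng(0)
N, F, F_out = 5, 8, 4
A = np.eye(N)
for i in range(N - 1):                                    # chain edges + self-loops
    A[i, i + 1] = A[i + 1, i] = 1
X = rng.standard_normal((N, F))
heads = [
    (rng.standard_normal((F, F_out)),
     rng.standard_normal(F_out),
     rng.standard_normal(F_out))
    for _ in range(2)
]
out = multi_head_gat(X, A, heads)                         # (5, 8) per-joint embeddings
```

Because non-edges are masked before the softmax, each joint attends only to its skeletal neighbours, and each head can specialise on a different motion range before the heads are concatenated.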

CC BY-NC-ND 4.0


Paper citation in several formats:
Nguyen, V.; Mirza, S.; Mantini, P. and Shah, S. (2024). Attention-Based Shape and Gait Representations Learning for Video-Based Cloth-Changing Person Re-Identification. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 80-89. DOI: 10.5220/0012315900003660

@conference{visapp24,
author={Nguyen, Vuong and Mirza, Samiha and Mantini, Pranav and Shah, Shishir},
title={Attention-Based Shape and Gait Representations Learning for Video-Based Cloth-Changing Person Re-Identification},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2024},
pages={80-89},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012315900003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Attention-Based Shape and Gait Representations Learning for Video-Based Cloth-Changing Person Re-Identification
SN - 978-989-758-679-8
IS - 2184-4321
AU - Nguyen, V.
AU - Mirza, S.
AU - Mantini, P.
AU - Shah, S.
PY - 2024
SP - 80
EP - 89
DO - 10.5220/0012315900003660
PB - SciTePress
ER -