Sumo Action Classification Using Mawashi Keypoints
Authors: Yuki Utsuro¹; Hidehiko Shishido² and Yoshinari Kameda²

Affiliations: ¹ Master’s Program in Intelligent and Mechanical Interaction Systems, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki, Japan; ² Center for Computational Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki, Japan

Keyword(s): Action Classification, Action Recognition, Extended Skeleton Model, Pose Estimation, Pose Sequences, Computer Vision in Sports, Sumo, Japanese Wrestling.

Abstract: We propose a new method for classifying kimarites in sumo videos based on kinematic pose estimation. Sumo is a Japanese combat sport in which two wrestlers grapple with each other, each wearing a mawashi, a loincloth fastened around the waist; many sumo actions are performed by grabbing the opponent’s mawashi. A kimarite is the winning action that decides the outcome of a match, and every kimarite is defined by its motion. In an official match, the referee classifies the kimarite immediately after the bout. Classifying kimarites from video with computer vision is challenging for two reasons: first, the definition of kimarites requires examining the relationship between the mawashi and the pose; second, the close contact between wrestlers causes heavy occlusion. For precise pose examination, we introduce a wrestler-specific skeleton model extended with mawashi keypoints, so that the relationship between the mawashi and body parts is uniformly represented in the pose sequence. To cope with heavy occlusion, we represent sumo actions as pose sequences and classify them with an LSTM, achieving an accuracy of 0.77. The experimental results confirm that extending the skeleton model with mawashi keypoints improves the accuracy of action classification in sumo.
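The abstract describes two ideas: a skeleton model extended with mawashi keypoints, and representing each match as a per-frame pose sequence fed to an LSTM classifier. The sketch below illustrates one plausible shape of that representation. The keypoint names, the use of the COCO 17-keypoint layout as the base skeleton, and the number and placement of mawashi keypoints are all assumptions for illustration; the paper defines its own wrestler-specific model.

```python
# Hypothetical sketch of an extended skeleton model: a standard
# 17-keypoint human skeleton (COCO layout, assumed here) augmented with
# extra keypoints on the mawashi. The mawashi keypoint names and count
# are illustrative assumptions, not the paper's exact definition.

COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Assumed mawashi keypoints: locations on the belt that a hand can grab.
MAWASHI_KEYPOINTS = [
    "mawashi_front", "mawashi_back", "mawashi_left", "mawashi_right",
]

EXTENDED_KEYPOINTS = COCO_KEYPOINTS + MAWASHI_KEYPOINTS


def pose_sequence_to_features(frames):
    """Flatten a pose sequence into one feature vector per frame.

    `frames` is a list of frames; each frame holds the two wrestlers,
    and each wrestler maps keypoint names to (x, y) image coordinates.
    The output is the (time, features) layout an LSTM-style sequence
    classifier would consume.
    """
    features = []
    for frame in frames:
        vec = []
        for wrestler in frame:  # two wrestlers per frame
            for name in EXTENDED_KEYPOINTS:
                x, y = wrestler[name]
                vec.extend([x, y])
        features.append(vec)
    return features
```

Because the mawashi keypoints sit in the same coordinate list as the body joints, the hand-to-mawashi relationship the abstract emphasizes is represented uniformly in every frame, with no separate detection stream.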

CC BY-NC-ND 4.0


Paper citation in several formats:
Utsuro, Y.; Shishido, H. and Kameda, Y. (2024). Sumo Action Classification Using Mawashi Keypoints. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 401-408. DOI: 10.5220/0012337000003660

@conference{visapp24,
author={Yuki Utsuro and Hidehiko Shishido and Yoshinari Kameda},
title={Sumo Action Classification Using Mawashi Keypoints},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2024},
pages={401-408},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012337000003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Sumo Action Classification Using Mawashi Keypoints
SN - 978-989-758-679-8
IS - 2184-4321
AU - Utsuro, Y.
AU - Shishido, H.
AU - Kameda, Y.
PY - 2024
SP - 401
EP - 408
DO - 10.5220/0012337000003660
PB - SciTePress
ER -