
LAMV: Learning to Predict Where Spectators Look in Live Music Performances

Authors: Arturo Fuentes 1,2; F. Sánchez 1; Thomas Voncina 2 and Jorge Bernal 1

Affiliations: 1 Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Bellaterra (Cerdanyola del Vallès), 08193, Barcelona, Spain; 2 Lang Iberia, Carrer Can Pobla, 3, 08202, Sabadell, Spain

ISBN: 978-989-758-488-6

ISSN: 2184-4321

Keyword(s): Object Detection, Saliency Map, Broadcast Automation, Spatio-temporal Texture Analysis.

Abstract: The advent of artificial intelligence has changed how many daily work tasks are performed. The analysis of cultural content has seen a huge boost from the development of computer-assisted methods that allow easy and transparent data access. In our case, we deal with the automation of the production of live shows, such as music concerts, aiming to develop a system that can indicate to the producer which camera to broadcast based on what each of them is showing. In this context, we consider it essential to understand where spectators look and what they are interested in, so that the computational method can learn from this information. The work presented here shows the results of a first preliminary study in which we compare areas of interest defined by human beings with those indicated by an automatic system. Our system is based on extracting motion textures from dynamic Spatio-Temporal Volumes (STV) and then analyzing the resulting patterns by means of texture analysis techniques. We validate our approach on several video sequences that have been labeled by 16 different experts. Our method is able to match the relevant areas identified by the experts, achieving recall scores higher than 80% when a distance of 80 pixels between method and ground truth is considered. Current performance shows promise in detecting abnormal peaks and movement trends.
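The abstract reports recall above 80% when predictions fall within 80 pixels of the expert-labeled ground truth. The paper page includes no code; the sketch below illustrates one plausible recall-at-distance metric consistent with that description. The function name, the point representation, and the matching rule (a ground-truth point counts as recalled if any prediction lies within the threshold) are assumptions, not the authors' implementation.

```python
import numpy as np

def recall_at_distance(pred_points, gt_points, max_dist=80.0):
    """Fraction of ground-truth points that have at least one
    predicted point within max_dist pixels (Euclidean distance)."""
    if len(gt_points) == 0:
        return 1.0  # nothing to recall
    if len(pred_points) == 0:
        return 0.0  # no predictions can match anything
    pred = np.asarray(pred_points, dtype=float)  # (n_pred, 2)
    gt = np.asarray(gt_points, dtype=float)      # (n_gt, 2)
    # Pairwise distance matrix of shape (n_gt, n_pred).
    dists = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
    # A ground-truth point is matched if its nearest prediction
    # is within the threshold.
    matched = dists.min(axis=1) <= max_dist
    return float(matched.mean())
```

For example, with one prediction at (10, 10) and ground-truth points at (10, 12) and (200, 200), only the first ground-truth point is within 80 pixels, giving a recall of 0.5 at that threshold.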

CC BY-NC-ND 4.0


Paper citation in several formats:
Fuentes, A.; Sánchez, F.; Voncina, T. and Bernal, J. (2021). LAMV: Learning to Predict Where Spectators Look in Live Music Performances. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, ISBN 978-989-758-488-6; ISSN 2184-4321, pages 500-507. DOI: 10.5220/0010254005000507

@conference{visapp21,
author={Fuentes, Arturo and Sánchez, F. and Voncina, Thomas and Bernal, Jorge},
title={LAMV: Learning to Predict Where Spectators Look in Live Music Performances},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2021},
pages={500-507},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010254005000507},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP
TI - LAMV: Learning to Predict Where Spectators Look in Live Music Performances
SN - 978-989-758-488-6
IS - 2184-4321
AU - Fuentes, A.
AU - Sánchez, F.
AU - Voncina, T.
AU - Bernal, J.
PY - 2021
SP - 500
EP - 507
DO - 10.5220/0010254005000507
ER -
