Paper

Authors: Djadja Djadja, Ludovic Hamon and Sébastien George

Affiliation: LIUM, Le Mans University, Le Mans, France

Keyword(s): Virtual Reality, Pedagogical Feedback, Design, Motion Learning.

Abstract: This paper addresses the problem of creating and reusing pedagogical feedback in Virtual Learning Environments (VLE), adapted to the needs of teachers for gesture learning. One of the main strengths of VLEs is their ability to provide multimodal (i.e. visual, haptic, audio, etc.) feedback to help learners evaluate their skills, the progress of the task or its correct execution. The feedback design strongly depends on the VLE and the pedagogical strategy. In addition, past studies mainly focus on the impact of the feedback modality on the learning situation, without considering other design elements (e.g. triggering rules, features of the motion to learn, etc.). However, most existing gesture-based VLEs are not editable without IT knowledge and therefore fail to accommodate the evolution of pedagogical strategies. Consequently, this paper presents the GEstural FEedback EDitor (GEFEED), which allows non-IT teachers to create their multimodal and pedagogical feedback in any VLE developed under Unity3D. This editor operationalises a three-dimensional descriptive model (i.e. the feedback's virtual representation, its triggering rules, and the 3D objects involved) of a pedagogical feedback dedicated to gesture learning. Five types of feedback are proposed (i.e. visual color or text, audio from a file or a text, and haptic vibration) and can be associated with four kinds of triggers (i.e. time, contact between objects, static spatial configuration, motion metric). In the context of a dilution task in biology, an experimental study is conducted in which teachers generate their feedback according to pre-defined or chosen pedagogical objectives. The results mainly show: (a) the acceptance of GEFEED and the underlying model, (b) the most used types of modalities (i.e. visual color, vibration, audio from text) and triggering rules (i.e. motion metric, spatial configuration and contact), and (c) the teachers' satisfaction in reaching their pedagogical objectives.

CC BY-NC-ND 4.0


Paper citation in several formats:
Djadja, D.; Hamon, L. and George, S. (2023). A 3D Descriptive Model for Designing Multimodal Feedbacks in any Virtual Environment for Gesture Learning. In Proceedings of the 18th International Conference on Software Technologies - ICSOFT; ISBN 978-989-758-665-1; ISSN 2184-2833, SciTePress, pages 84-95. DOI: 10.5220/0012081000003538

@conference{icsoft23,
author={Djadja, Djadja and Hamon, Ludovic and George, Sébastien},
title={A 3D Descriptive Model for Designing Multimodal Feedbacks in any Virtual Environment for Gesture Learning},
booktitle={Proceedings of the 18th International Conference on Software Technologies - ICSOFT},
year={2023},
pages={84-95},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012081000003538},
isbn={978-989-758-665-1},
issn={2184-2833},
}

TY - CONF

JO - Proceedings of the 18th International Conference on Software Technologies - ICSOFT
TI - A 3D Descriptive Model for Designing Multimodal Feedbacks in any Virtual Environment for Gesture Learning
SN - 978-989-758-665-1
SN - 2184-2833
AU - Djadja, D.
AU - Hamon, L.
AU - George, S.
PY - 2023
SP - 84
EP - 95
DO - 10.5220/0012081000003538
PB - SciTePress