Visual Feedback System for Intuitive Comprehension of Self-movement and Sensor Data for Effective Motor Learning

Dan Mikami, Ayumi Matsumoto, Toshitaka Kimura, Shiro Ozawa, Akira Kojima

2014

Abstract

Information feedback systems for motor learning have been widely studied. Means of providing feedback can be divided into two approaches: auditory and visual. Audio information can provide feedback without interrupting the motions a trainee makes when moving (Effenberg et al., 2011). However, because sound is intrinsically one-dimensional temporal data, the information it can express is quite limited. Visual feedback has also been widely studied (Guadagnoli et al., 2002; Wieringen et al., 1989). Feedback of this type can convey a great deal of information. For example, Chua et al. developed a training system in a VR environment (Chua et al., 2003). The system uses motion capture to record a trainee's movements and shows the corresponding trainer's movements. Choi et al. proposed a system that estimates motion proficiency on the basis of motion capture data (Choi et al., 2008).
However, although visual information may enhance motor learning efficacy, two problems make it difficult for most existing visual feedback systems to be used in practice. The first problem is setup: the aforementioned systems employ motion capture techniques to obtain human movement, and the overhead of setting up mocap systems, together with the restrictions they place on training sites, reduces their practical value. The second problem is the timing of visual feedback. The simplest visual feedback system is training in front of a mirror, but in this case the trainee has to process visual feedback while he or she is moving, which disrupts practice. Another simple approach is capturing and later watching a video, but as the temporal gap between capturing and watching grows longer, feedback efficacy degrades.
In recent years, small sensors have been developed that enable information of various types, such as surface electromyography (EMG), cardiac rate, and respiration rate, to be captured with only minimal intervention on the part of trainees. These can be used as additional information for motor learning feedback. Here, we should note that a large amount of information does not always result in effective motor learning; in fact, too much information may well disturb it. We aim to provide visual feedback of a trainee's movements for effective motor learning. This paper describes a new visual feedback method we propose with this aim in mind. It has three main features: (1) automatic temporal synchronization of trainer and trainee motions, (2) intuitive presentation of sensor data, e.g., EMG and cardiac rate, based on the position of the attached sensor, and (3) an absence of restrictions on clothing and illumination conditions.
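The abstract does not specify how the automatic temporal synchronization of trainer and trainee motions is performed. A common technique for aligning two motion sequences performed at different speeds is dynamic time warping (DTW); the sketch below is a minimal, illustrative implementation on one-dimensional motion signals (e.g., a single joint angle over time), not the authors' actual method. The function name `dtw_align` and the toy signals are assumptions for illustration.

```python
import numpy as np

def dtw_align(trainer, trainee):
    """Align two 1-D motion signals with dynamic time warping.

    Returns the list of (trainer_index, trainee_index) pairs on the
    optimal warping path, mapping each trainee frame to the temporally
    corresponding trainer frame.
    """
    n, m = len(trainer), len(trainee)
    # cost[i, j] = minimal accumulated distance aligning the first
    # i trainer frames with the first j trainee frames.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(trainer[i - 1] - trainee[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match
                                 cost[i - 1, j],      # trainer advances
                                 cost[i, j - 1])      # trainee advances
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Example: the trainee performs the same motion at half speed,
# so each trainer frame should map to two trainee frames.
trainer = [0.0, 1.0, 2.0, 1.0, 0.0]
trainee = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0]
path = dtw_align(trainer, trainee)
```

Once a warping path is available, each trainee video frame can be paired with its corresponding trainer frame for side-by-side display, which is the effect feature (1) describes. In practice multi-dimensional pose features and a windowed or online DTW variant would be used instead of this quadratic-time sketch.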

References

  1. Bobick, A. and Davis, J. (2001). The representation and recognition of action using temporal templates. IEEE Trans. PAMI, 23(3).
  2. Choi, W., Mukaida, S., Sekiguchi, H., and Hachimura, K. (2008). Quantitative analysis of iaido proficiency by using motion data. In ICPR.
  3. Chua, P., Crivella, R., Daly, B., Hu, N., Schaaf, R., Ventura, D., Camil, T., Hodgins, J., and Pausch, R. (2003). Training for physical tasks in virtual environments: Tai chi. In IEEE VR.
  4. Effenberg, A., Fehse, U., and Weber, A. (2011). Movement sonification: Audiovisual benefits on motor learning. In The International Conference SKILLS.
  5. Guadagnoli, M., Holcomb, W., and Davis, M. (2002). The efficacy of video feedback for learning the golf swing. Journal of Sports Science, 20:615-622.
  6. Wieringen, P. V., Emmen, H., Bootsma, R., Hoogesteger, M., and Whiting, H. (1989). The effect of video feedback on the learning of the tennis service by intermediate players. Journal of Sports Science, 7:156-162.


Paper Citation


in Harvard Style

Mikami, D., Matsumoto, A., Kimura, T., Ozawa, S. and Kojima, A. (2014). Visual Feedback System for Intuitive Comprehension of Self-movement and Sensor Data for Effective Motor Learning. In Proceedings of icSPORTS 2014. SciTePress.

