SILENT BILINGUAL VOWEL RECOGNITION - Using fSEMG for HCI based Speech Commands

Authors: Sridhar Poosapadi Arjunan 1 ; Hans Weghorn 2 ; Dinesh Kant Kumar 1 and Wai Chee Yau 1

Affiliations: 1 School of Electrical and Computer Engineering, RMIT University, Australia ; 2 Information Technology, BA-University of Cooperative Education, Germany

Keyword(s): HCI, Speech Command, Facial Surface Electromyogram, Artificial Neural Network, Bilingual variation.

Related Ontology Subjects/Areas/Topics: Accessibility to Disabled Users ; Computer-Supported Education ; Enterprise Information Systems ; Human Factors ; Human-Computer Interaction ; Machine Perception: Vision, Speech, Other ; Physiological Computing Systems ; Ubiquitous Learning ; User Needs

Abstract: This research examines the use of fSEMG (facial surface electromyogram) to recognise speech commands in English and German without evaluating any voice signals. The system is designed for speech-command applications in Human Computer Interaction (HCI). An effective technique is presented that uses the activity of the facial articulatory muscles, together with human factors, for silent vowel recognition. The speed and style of speaking vary between experiments, and this variation appears to be more pronounced when people speak a language other than their native one. This investigation reports measuring the relative activity of the articulatory muscles for recognition of silently spoken vowels of German (native) and English (foreign). In this analysis, three English vowels and three German vowels were used as recognition variables. The moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles is used to segment the signal and to identify the start and end of a silently spoken utterance. The relative muscle activity is computed by integrating and normalising the RMS values of the signals between the detected start and end markers. The resulting feature vector is classified using a back-propagation neural network to identify the voiceless speech. Cross-validation was performed to test the reliability of the classification. The data were also tested using the K-means clustering technique to determine the linear separability of the data. The experimental results show that this technique yields a high recognition rate for all participants in both languages. The results also show that the system is easy to train for a new user, and they suggest that such a system, once trained for a user, works reliably for simple vowel-based commands in a human computer interface, both for users who speak one or more languages and for people who have a speech disability.
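The feature-extraction pipeline the abstract describes (a moving RMS to detect the start and end of a silent utterance, then integrating and normalising the per-channel RMS between those markers to obtain the relative muscle activity) can be sketched as follows. This is a minimal illustration only: the window length, activity threshold, and all function names are assumptions, not values or identifiers from the paper.

```python
import numpy as np

def moving_rms(signal, window=64):
    """Moving root mean square of a 1-D SEMG channel.

    The window length is an illustrative assumption.
    """
    sq = np.convolve(signal ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(sq)

def segment_utterance(rms, threshold):
    """Find the (start, end) sample indices where the RMS envelope
    first and last exceeds the threshold; None if no activity."""
    active = np.nonzero(rms > threshold)[0]
    if active.size == 0:
        return None
    return active[0], active[-1]

def relative_activity(channels, window=64, threshold=0.05):
    """Feature vector sketch: integrate each channel's RMS between the
    detected markers and normalise so the vector sums to 1 (the
    'relative muscle activity' fed to the classifier)."""
    rms = np.array([moving_rms(ch, window) for ch in channels])
    span = segment_utterance(rms.mean(axis=0), threshold)
    if span is None:
        return None
    start, end = span
    integrated = rms[:, start:end + 1].sum(axis=1)
    return integrated / integrated.sum()

# Usage with synthetic data standing in for four facial SEMG channels:
rng = np.random.default_rng(0)
channels = 0.01 * rng.standard_normal((4, 1000))           # baseline noise
burst = 0.5 * np.sin(np.linspace(0, 50, 200))              # simulated utterance
channels[:, 400:600] += burst * np.array([1, 2, 3, 4])[:, None]
features = relative_activity(channels)
```

The normalised four-element vector is what would then be passed to the back-propagation neural network (or K-means) for classification; normalisation removes inter-trial differences in overall contraction strength.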

CC BY-NC-ND 4.0


Paper citation in several formats:
Poosapadi Arjunan, S. ; Weghorn, H. ; Kant Kumar, D. and Chee Yau, W. (2007). SILENT BILINGUAL VOWEL RECOGNITION - Using fSEMG for HCI based Speech Commands. In Proceedings of the Ninth International Conference on Enterprise Information Systems - Volume 4: ICEIS; ISBN 978-972-8865-92-4; ISSN 2184-4992, SciTePress, pages 68-75. DOI: 10.5220/0002365400680075

@conference{iceis07,
author={Sridhar {Poosapadi Arjunan} and Hans Weghorn and Dinesh {Kant Kumar} and Wai {Chee Yau}},
title={SILENT BILINGUAL VOWEL RECOGNITION - Using fSEMG for HCI based Speech Commands},
booktitle={Proceedings of the Ninth International Conference on Enterprise Information Systems - Volume 4: ICEIS},
year={2007},
pages={68-75},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002365400680075},
isbn={978-972-8865-92-4},
issn={2184-4992},
}

TY - CONF

JO - Proceedings of the Ninth International Conference on Enterprise Information Systems - Volume 4: ICEIS
TI - SILENT BILINGUAL VOWEL RECOGNITION - Using fSEMG for HCI based Speech Commands
SN - 978-972-8865-92-4
IS - 2184-4992
AU - Poosapadi Arjunan, S.
AU - Weghorn, H.
AU - Kant Kumar, D.
AU - Chee Yau, W.
PY - 2007
SP - 68
EP - 75
DO - 10.5220/0002365400680075
PB - SciTePress
ER -