Authors: Salah Werda, Walid Mahdi and Abdelmajid Ben Hamadou
Affiliation:
MIRACL: Multimedia Information systems and Advanced Computing Laboratory, Higher Institute of Computer Science and Multimedia, Tunisia
Keyword(s):
Human-Machine interaction, Visual information, Lip-reading system, Spatial-temporal tracking.
Subjects/Areas/Topics:
Accessibility to Disabled Users; Computer-Supported Education; Enterprise Information Systems; HCI on Enterprise Information Systems; Human-Computer Interaction; Multimedia Systems; Ubiquitous Learning; User Needs
Abstract:
Today, Human-Machine interaction offers real potential for autonomy, especially for dependent people. An automatic lip-reading system is one of several assistive technologies for hearing-impaired or elderly people, and the need for such systems is ever increasing. The extraction and reliable analysis of facial movements are an important part of many multimedia systems, such as videoconferencing, low-bit-rate communication and lip-reading systems. We can imagine, for example, a dependent person commanding a machine with a simple lip movement or by pronouncing a simple viseme (visual phoneme). We present in this paper a new approach for lip localization and feature extraction in a speaker's face. The extracted visual information is then classified in order to recognize the uttered viseme. We have developed our Automatic Lip Feature Extraction prototype (ALiFE). The ALiFE prototype is evaluated with multiple speakers under natural conditions. Experiments cover a group of French visemes uttered by different speakers. Results revealed that our system recognizes 92.50% of French visemes.
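The pipeline the abstract describes — extract visual lip features per frame, then classify them into visemes — can be illustrated with a minimal sketch. This is not the paper's actual method: the geometric features (mouth width, height, aspect ratio) and the nearest-centroid classifier below are illustrative assumptions, and all function names are hypothetical.

```python
import math

def lip_features(points):
    """Geometric features from 2D lip-contour points: (width, height, aspect ratio)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return (width, height, height / width)

def train_centroids(samples):
    """samples maps a viseme label to a list of feature tuples; returns per-label means."""
    centroids = {}
    for label, feats in samples.items():
        n = len(feats)
        centroids[label] = tuple(sum(f[i] for f in feats) / n for i in range(3))
    return centroids

def classify(features, centroids):
    """Assign the viseme whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Toy contours: a tall mouth opening (as for /a/) vs. wide spread lips (as for /i/).
open_mouth = [(0, 0), (4, 0), (4, 6), (0, 6)]
spread_lips = [(0, 0), (8, 0), (8, 2), (0, 2)]
centroids = train_centroids({"a": [lip_features(open_mouth)],
                             "i": [lip_features(spread_lips)]})
classify(lip_features([(0, 0), (4, 0), (4, 5), (0, 5)]), centroids)  # → "a"
```

A real system would obtain the contour points from lip tracking in video rather than hand-coded coordinates, and would typically use a richer classifier, but the feature-then-classify structure is the same.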