Paper: Silent Speech for Human-Computer Interaction

Authors: João Freitas 1; António Teixeira 2 and Miguel Sales Dias 3

Affiliations: 1 Microsoft Language Development Center and Universidade de Aveiro, Portugal; 2 Universidade de Aveiro, Portugal; 3 Microsoft Language Development Center and ISCTE-University Institute of Lisbon, Portugal

ISBN: Not Available

Keyword(s): Silent Speech, Human-Computer Interface, European Portuguese, Multimodal, Visual Speech Recognition, Surface Electromyography, Acoustic Doppler Sensing.

Abstract: A Silent Speech Interface (SSI) performs Automatic Speech Recognition (ASR) in the absence of an intelligible acoustic signal and can be used as a human-computer interface modality in high-background-noise environments such as living rooms, or as an aid for speech-impaired users such as elderly persons. By acquiring data from elements of the human speech production process – glottal and articulator activity, their neural pathways or the central nervous system – an SSI produces an alternative digital representation of speech, which can be recognized and interpreted as data, synthesized directly or routed into a communications network. Conventional ASR systems rely only on acoustic information, making them susceptible to problems such as environmental noise, privacy and information-disclosure concerns, and the exclusion of users with speech impairments. To tackle this problem in the context of ASR for Human-Computer Interaction, we propose a novel SSI based on multiple modalities in European Portuguese (EP), a language for which no SSI has yet been developed. After a state-of-the-art assessment, we selected less-invasive modalities – Vision, Surface Electromyography and Ultrasound – in order to obtain a more complete representation of the human speech production model. Our aim is now to develop a multimodal SSI prototype adapted to EP and evaluate its usability in real-world scenarios.
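For illustration only, the Python sketch below shows one generic way a multimodal SSI could combine feature streams from the three modalities: feature-level fusion (concatenation) followed by a conventional classifier. The feature dimensions, the synthetic data and the SVM classifier are assumptions made for the example, not the pipeline described in the paper.

# Minimal sketch (not the authors' pipeline): feature-level fusion of three
# silent-speech modalities -- video (lip region), surface EMG and ultrasound --
# followed by a simple isolated-word classifier. All dimensions and the data
# are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_utterances = 200   # synthetic corpus size (placeholder)
n_classes = 10       # e.g. a small isolated-word vocabulary

# Per-utterance feature vectors for each modality (dimensions are assumptions).
video_feats = rng.normal(size=(n_utterances, 40))  # e.g. lip-shape descriptors
emg_feats   = rng.normal(size=(n_utterances, 30))  # e.g. per-channel sEMG statistics
us_feats    = rng.normal(size=(n_utterances, 20))  # e.g. ultrasound/Doppler spectral bins
labels      = rng.integers(0, n_classes, size=n_utterances)

# Early (feature-level) fusion: concatenate the per-modality features.
fused = np.concatenate([video_feats, emg_feats, us_feats], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

With random data the accuracy is at chance level; the point is only to show where the modality streams are merged. A real system could instead use late fusion, combining per-modality classifier scores.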


Paper citation in several formats:
Freitas, J.; Teixeira, A. and Dias, M. (2014). Silent Speech for Human-Computer Interaction. In Doctoral Consortium - DCBIOSTEC (BIOSTEC 2014), pages 18-27.

@conference{dcbiostec14,
author={Freitas, João and Teixeira, António and Dias, Miguel Sales},
title={Silent Speech for Human-Computer Interaction},
booktitle={Doctoral Consortium - DCBIOSTEC (BIOSTEC 2014)},
year={2014},
pages={18-27},
publisher={SciTePress},
organization={INSTICC},
}

TY - CONF
JO - Doctoral Consortium - DCBIOSTEC (BIOSTEC 2014)
TI - Silent Speech for Human-Computer Interaction
AU - Freitas, J.
AU - Teixeira, A.
AU - Dias, M.
PY - 2014
SP - 18
EP - 27
ER -
