BERT Semantic Context Model for Efficient Speech Recognition

Irina Illina, Dominique Fohr

2022

Abstract

In this work, we propose to better represent the scores of the recognition system and to go beyond a simple combination of scores. We propose a DNN-based rescoring model that re-evaluates pairs of hypotheses. Each pair is represented by a feature vector that includes acoustic, linguistic and semantic information. In our approach, the semantic information is introduced using BERT representations. The proposed rescoring approach can be particularly useful for noisy speech recognition.
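
To make the pairwise rescoring idea concrete, below is a minimal sketch in Python (PyTorch and Hugging Face Transformers). The choice of bert-base-uncased, the use of the [CLS] embedding as the semantic feature, the acoustic and language-model scores as the only other features, and the network sizes are illustrative assumptions; the paper's actual feature set and architecture may differ.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def semantic_embedding(hypothesis: str) -> torch.Tensor:
    # [CLS] embedding of a recognition hypothesis (768-dim for BERT base).
    inputs = tokenizer(hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return bert(**inputs).last_hidden_state[:, 0, :].squeeze(0)

class PairwiseRescorer(nn.Module):
    # DNN that compares two hypotheses; an output close to 1 means the
    # first hypothesis of the pair is preferred.
    def __init__(self, semantic_dim: int = 768, score_dim: int = 2):
        super().__init__()
        pair_dim = 2 * (score_dim + semantic_dim)  # two hypotheses per pair
        self.net = nn.Sequential(
            nn.Linear(pair_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([feats_a, feats_b], dim=-1))

def hypothesis_features(text: str, acoustic_score: float, lm_score: float) -> torch.Tensor:
    # Feature vector = acoustic score + language-model score + BERT embedding.
    scores = torch.tensor([acoustic_score, lm_score])
    return torch.cat([scores, semantic_embedding(text)])

# Example: score one pair of hypotheses from an N-best list for the same utterance.
rescorer = PairwiseRescorer()
feats_a = hypothesis_features("clear the runway for landing", -1250.3, -35.2)
feats_b = hypothesis_features("hear the run way four landing", -1248.9, -41.7)
preference = rescorer(feats_a, feats_b)  # untrained here; learned from labelled pairs in practice

In practice such a model would be trained on labelled hypothesis pairs (for example, which hypothesis has the lower word error rate) and then applied over the N-best list to select the best candidate.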

Paper Citation


in Harvard Style

Illina I. and Fohr D. (2022). BERT Semantic Context Model for Efficient Speech Recognition. In Proceedings of the 1st International Conference on Cognitive Aircraft Systems - Volume 1: ICCAS; ISBN 978-989-758-657-6, SciTePress, pages 20-23. DOI: 10.5220/0011948200003622


in Bibtex Style

@conference{iccas22,
author={Irina Illina and Dominique Fohr},
title={BERT Semantic Context Model for Efficient Speech Recognition},
booktitle={Proceedings of the 1st International Conference on Cognitive Aircraft Systems - Volume 1: ICCAS},
year={2022},
pages={20-23},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011948200003622},
isbn={978-989-758-657-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 1st International Conference on Cognitive Aircraft Systems - Volume 1: ICCAS
TI - BERT Semantic Context Model for Efficient Speech Recognition
SN - 978-989-758-657-6
AU - Illina I.
AU - Fohr D.
PY - 2022
SP - 20
EP - 23
DO - 10.5220/0011948200003622
PB - SciTePress