
Fast Many-to-One Voice Conversion using Autoencoders

Authors: Yusuke Sekii¹; Ryohei Orihara¹; Keisuke Kojima²; Yuichi Sei¹; Yasuyuki Tahara¹ and Akihiko Ohsuga¹

Affiliations: ¹ University of Electro-Communications, Japan; ² Solid Sphere and inc., Japan

Keyword(s): Voice Conversion, Autoencoder, Deep Learning, Deep Neural Network, Spectral Envelope.

Related Ontology Subjects/Areas/Topics: AI and Creativity ; Artificial Intelligence ; Biomedical Engineering ; Biomedical Signal Processing ; Computational Intelligence ; Evolutionary Computing ; Health Engineering and Technology Applications ; Human-Computer Interaction ; Knowledge Discovery and Information Retrieval ; Knowledge-Based Systems ; Machine Learning ; Methodologies and Methods ; Neural Networks ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Signal Processing ; Soft Computing ; Symbolic Systems ; Theory and Methods

Abstract: Most voice conversion (VC) methods deal with the one-to-one VC problem, and few studies have tackled the many-to-one or many-to-many cases. Preparing training data for a practical application is difficult with these methods because they require a large amount of parallel data. Furthermore, converting speech with a Deep Neural Network (DNN) takes longer than with pre-DNN methods because DNN-based methods use complicated networks. In this study, we propose a VC method using autoencoders in order to reduce the amount of training data and to shorten the conversion time. In the method, higher-order features are extracted from the acoustic features of source speakers by an autoencoder trained on the source speakers' data. They are then converted to the higher-order features of a target speaker by a DNN, and the converted higher-order features are restored to acoustic features by an autoencoder trained on data from the target speaker. In the evaluation experiment, the proposed method outperforms conventional VC methods that use Gaussian Mixture Models (GMMs) and DNNs in both one-to-one and many-to-one conversion with a small training set, in terms of both conversion accuracy and conversion time.
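The abstract describes a three-stage pipeline: a source-side autoencoder compresses acoustic features into higher-order (bottleneck) features, a small DNN maps those codes into the target speaker's code space, and the target-side autoencoder's decoder restores acoustic features. The sketch below illustrates that pipeline in PyTorch; the feature dimension, layer sizes, and activations are illustrative assumptions rather than the paper's reported configuration, and the training loops are omitted.

import torch
import torch.nn as nn

FEAT_DIM = 24   # assumed spectral-envelope feature size (e.g. mel-cepstra)
CODE_DIM = 8    # assumed size of the higher-order (bottleneck) features

class Autoencoder(nn.Module):
    """Maps acoustic features to higher-order features and back."""
    def __init__(self, feat_dim=FEAT_DIM, code_dim=CODE_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 16), nn.Tanh(),
            nn.Linear(16, code_dim), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 16), nn.Tanh(),
            nn.Linear(16, feat_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

src_ae = Autoencoder()   # would be trained on the source speakers' data
tgt_ae = Autoencoder()   # would be trained on the target speaker's data

# DNN that converts source-side codes into target-side codes.
mapper = nn.Sequential(
    nn.Linear(CODE_DIM, 16), nn.Tanh(),
    nn.Linear(16, CODE_DIM),
)

def convert(src_frames: torch.Tensor) -> torch.Tensor:
    """Convert source acoustic frames (N, FEAT_DIM) to target-like frames."""
    with torch.no_grad():
        codes = src_ae.encoder(src_frames)    # extract higher-order features
        tgt_codes = mapper(codes)             # map to the target's code space
        return tgt_ae.decoder(tgt_codes)      # restore acoustic features

frames = torch.randn(100, FEAT_DIM)           # 100 dummy feature frames
print(convert(frames).shape)                  # torch.Size([100, 24])

Because conversion amounts to three small feed-forward passes, it is cheap at run time, which is consistent with the speed advantage the abstract claims over more complicated DNN pipelines.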

CC BY-NC-ND 4.0


Paper citation in several formats:
Sekii, Y.; Orihara, R.; Kojima, K.; Sei, Y.; Tahara, Y. and Ohsuga, A. (2017). Fast Many-to-One Voice Conversion using Autoencoders. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART; ISBN 978-989-758-220-2; ISSN 2184-433X, SciTePress, pages 164-174. DOI: 10.5220/0006193301640174

@conference{icaart17,
author={Yusuke Sekii and Ryohei Orihara and Keisuke Kojima and Yuichi Sei and Yasuyuki Tahara and Akihiko Ohsuga},
title={Fast Many-to-One Voice Conversion using Autoencoders},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2017},
pages={164--174},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006193301640174},
isbn={978-989-758-220-2},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Fast Many-to-One Voice Conversion using Autoencoders
SN - 978-989-758-220-2
IS - 2184-433X
AU - Sekii, Y.
AU - Orihara, R.
AU - Kojima, K.
AU - Sei, Y.
AU - Tahara, Y.
AU - Ohsuga, A.
PY - 2017
SP - 164
EP - 174
DO - 10.5220/0006193301640174
PB - SciTePress
ER -