This repository is an implementation of Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real-time. This was my master's thesis.
SV2TTS is a deep learning framework in three stages. In the first stage, one creates a digital representation of a voice from a few seconds of audio. In the second and third stages, this representation is used as reference to generate speech given arbitrary text.
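To make the three stages concrete, here is a minimal inference sketch modeled on this repo's demo_cli.py. The model paths assume the default automatic download location, and `reference.wav` and the output filename are placeholders:

```python
from pathlib import Path

import soundfile as sf

from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

# Load the three pretrained models (paths assume the automatic download location).
encoder.load_model(Path("saved_models/default/encoder.pt"))
synthesizer = Synthesizer(Path("saved_models/default/synthesizer.pt"))
vocoder.load_model(Path("saved_models/default/vocoder.pt"))

# Stage 1: embed the voice from a few seconds of reference audio.
wav = encoder.preprocess_wav(Path("reference.wav"))  # placeholder input clip
embed = encoder.embed_utterance(wav)

# Stage 2: generate a mel spectrogram for arbitrary text, conditioned on the embedding.
specs = synthesizer.synthesize_spectrograms(["Hello, this is a cloned voice."], [embed])

# Stage 3: invert the spectrogram into a waveform with the neural vocoder.
generated_wav = vocoder.infer_waveform(specs[0])
sf.write("output.wav", generated_wav, synthesizer.sample_rate)
```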
Video demonstration:
Papers implemented:

| arXiv | Designation | Title | Implementation source |
| --- | --- | --- | --- |
| 1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo |
| 1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN |
| 1703.10135 | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis | fatchord/WaveRNN |
| 1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo |
10/01/22: I recommend checking out CoquiTTS. It's a good and up-to-date TTS repository aimed at the ML community. It can also do voice cloning and more, such as cross-language cloning or voice conversion.
28/12/21: I've done a major maintenance update. Mostly, I've worked on making setup easier. Find new instructions in the section below.
14/02/21: This repo now runs on PyTorch instead of Tensorflow, thanks to the help of @bluefish.
13/11/19: I'm now working full time and will only rarely maintain this repo from now on.
20/08/19: I'm working on resemblyzer, an independent package for the voice encoder (inference only). You can use your trained encoder models from this repo with it.
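For reference, deriving an embedding with resemblyzer looks like this (the audio path is a placeholder):

```python
from resemblyzer import VoiceEncoder, preprocess_wav

# Load and normalize a short speech clip, then compute a speaker embedding.
wav = preprocess_wav("speaker.wav")  # placeholder: a few seconds of speech
encoder = VoiceEncoder()
embed = encoder.embed_utterance(wav)  # L2-normalized numpy array of shape (256,)
```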
I recommend setting up a virtual environment using venv, but this is optional. Then install the requirements:
pip install -r requirements.txt
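If you do use a virtual environment, the setup could look like the following sketch (the environment name `venv` is just an example):

```sh
# Create and activate a virtual environment, then install the dependencies.
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
pip install -r requirements.txt
```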
Pretrained models are now downloaded automatically. If this doesn't work for you, you can manually download them here.
Before you download any dataset, you can begin by testing your configuration with:
python demo_cli.py
If all tests pass, you're good to go.
For playing with the toolbox alone, I only recommend downloading LibriSpeech/train-clean-100. Extract the contents as <datasets_root>/LibriSpeech/train-clean-100, where <datasets_root> is a directory of your choosing. Other datasets are supported in the toolbox; see here. You're free not to download any dataset, but then you will need your own data as audio files, or you will have to record it with the toolbox.
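For reference, the extracted layout should look roughly like this (the speaker and chapter IDs shown are examples):

```
<datasets_root>
└── LibriSpeech
    └── train-clean-100
        ├── 19
        │   └── 198
        │       ├── 19-198-0000.flac
        │       └── ...
        └── ...
```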
You can then try the toolbox:
python demo_toolbox.py -d <datasets_root>
or
python demo_toolbox.py
depending on whether you downloaded any datasets. If you are running an X-server or if you have the error Aborted (core dumped), see this issue.