TTS-project-synthesis

Project name: Text To Speech

Project members: Dong Wang, Zhiyong Zhang

=Introduction=
We are interested in flexible speech synthesis based on neural models. The basic idea is that, since a neural model can be trained under multiple conditions, we can treat speaker and emotion as conditioning factors. We use a speaker vector and an emotion vector as additional inputs to the model, and then train a single model that can produce the voices of different speakers with different emotions.
  
In the following experiments, we use a simple DNN architecture to implement the training. The vocoder is WORLD.
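To make the conditioning concrete, here is a minimal sketch of such a condition-augmented acoustic model. It is an illustration, not the exact network used in these experiments: the dimensions are assumptions (40 for the speaker vector, following the dvec40 naming of the samples below; 66 output dimensions for static lf0+bap+mgc, the dims listed in the MLPG section), and the hidden sizes are arbitrary.

<pre>
# Sketch of a condition-augmented DNN acoustic model (PyTorch).
# Illustrative only: dimensions and hidden sizes are assumptions.
import torch
import torch.nn as nn

class ConditionalAcousticModel(nn.Module):
    def __init__(self, ling_dim=300, spk_dim=40, emo_dim=4, out_dim=66):
        super().__init__()
        in_dim = ling_dim + spk_dim + emo_dim  # conditions are appended to the input
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.Tanh(),  # Tanh, as in the sample names (acTanh)
            nn.Linear(512, 512), nn.Tanh(),
            nn.Linear(512, out_dim),            # e.g. static lf0(1) + bap(5) + mgc(60)
        )

    def forward(self, ling, spk_vec, emo_vec):
        # A single shared network: speaker and emotion are just extra inputs,
        # so one model covers all speakers and all emotions.
        x = torch.cat([ling, spk_vec, emo_vec], dim=-1)
        return self.net(x)
</pre>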
=Experiments=

==Mono-speaker==

The first step is mono-speaker systems. We trained three systems: a female, a male, and a child, each with its own single network. The performance is illustrated by the following samples.

Synthesis text: 好雨知时节,当春乃发声,随风潜入夜,润物细无声

*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/female01/female01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/male01/male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/child01.neutral/child01-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
==Multi-speaker==

Now we combine all the data from the male, female, and child speakers to train a single model.

===Without Speaker-vector===

In the first experiment, the data are blindly combined, without any indicator of the speakers.

*Female & Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-male01/female01-male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Female & Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-child01.neutral/female01-child.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Male & Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/male01-child01.neutral/male01_child01.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
===With Speaker-vector===

Now we use the speaker vector as an indicator of the speaker trait.

*Specific person

First, use the speaker vector to specify a particular person:

:*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/female01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

:*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/male01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Interpolation of different persons

Now let's produce an interpolated voice by interpolating two speakers, female and male (a minimal sketch of the mixing follows the list below).

:* Female & Male with different ratios
::*(1) 0.0:1.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_0_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(2) 0.1:0.9[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_1_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(3) 0.2:0.8[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_2_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(4) 0.3:0.7[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_3_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(5) 0.4:0.6[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_4_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(6) 0.5:0.5[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_5_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(7) 0.6:0.4[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_6_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(8) 0.7:0.3[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_7_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(9) 0.8:0.2[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_8_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(10) 0.9:0.1[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_9_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(11) 1.0:0.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_10_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
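The mixing behind these samples is, conceptually, just a convex combination of the two speaker vectors before they enter the network. A minimal sketch, with random placeholder vectors (real d-vectors would come from the speaker model):

<pre>
# Sketch: interpolate two 40-dim speaker vectors (cf. dvec40 above).
# The placeholder vectors are random; real ones come from the speaker model.
import numpy as np

rng = np.random.default_rng(0)
v_female = rng.standard_normal(40)
v_male = rng.standard_normal(40)

def mix_speakers(v_a, v_b, a):
    """Convex combination a*v_a + (1-a)*v_b, with a in [0, 1]."""
    return a * v_a + (1.0 - a) * v_b

# The same 11 ratios as the samples above: 0.0:1.0, 0.1:0.9, ..., 1.0:0.0
mixed = [mix_speakers(v_female, v_male, i / 10.0) for i in range(11)]
</pre>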
==Mono-speaker Multi-Emotion==

Emotion vectors can be used to specify which emotion to produce, and emotions can also be interpolated (a sketch of the emotion mixing follows the ratio list below).

*Specific emotion
:* Neutral emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* Happy emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-happy_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* Sorrow emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-sorrow_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* Angry emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Interpolated emotion
:* Angry & neutral with different ratios
::*(1) 0.0:1.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_0_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(2) 0.1:0.9 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(3) 0.2:0.8 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_2_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(4) 0.3:0.7 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_3_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(5) 0.4:0.6 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_4_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(6) 0.5:0.5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(7) 0.6:0.4 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_6_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(8) 0.7:0.3 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_7_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(9) 0.8:0.2 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_8_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(10) 0.9:0.1 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_9_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
::*(11) 1.0:0.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
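The emotion mixing works the same way as the speaker mixing above. Assuming, purely as an illustration, a one-hot emotion encoding over the four emotions used here (the actual encoding is not specified on this page):

<pre>
# Illustrative one-hot emotion encoding; the real encoding used in these
# experiments is not specified here, so treat this as an assumption.
import numpy as np

EMOTIONS = ["neutral", "happy", "sorrow", "angry"]
emo_vec = {e: np.eye(len(EMOTIONS))[i] for i, e in enumerate(EMOTIONS)}

# Angry & neutral with ratios 0.0:1.0 ... 1.0:0.0, matching the list above.
sweep = [i / 10.0 * emo_vec["angry"] + (1 - i / 10.0) * emo_vec["neutral"]
         for i in range(11)]
</pre>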
 
==Multi-speaker Multi-emotion==

Finally, all the data (different speakers and different emotions) are combined together. Note that only the child voice has training data with different emotions. We hope that the emotion patterns can be learned and transferred, so that we can generate emotional voices for the other speakers, although they do not have any emotional training data (a small sketch of this cross-pairing follows the sample list).

*Female
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]

*Male
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]
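Conceptually, the transfer amounts to pairing, at synthesis time, a speaker vector with an emotion vector that never co-occurred in training. A minimal sketch with placeholder vectors:

<pre>
# Sketch: cross-pair conditions that were never seen together in training.
# Placeholder vectors; real ones come from the trained system.
import numpy as np

rng = np.random.default_rng(0)
spk_female = rng.standard_normal(40)        # female01 has only neutral data
emo_angry = np.array([0.0, 0.0, 0.0, 1.0])  # angry data exists only for the child

# The single shared model receives a (speaker, emotion) pair absent from the
# training set; if the emotion factor generalizes, we get an angry female voice.
condition = np.concatenate([spk_female, emo_angry])
</pre>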
=MLPG Comparison=

We compare different implementations of MLPG (maximum likelihood parameter generation), following Merlin's mlpg.py and fast_mlpg.py. There are three implementations (a simplified reference sketch follows the list):

:*mlpg: as mlpg.py, computing all dimensions of the delta features (lf0/bap/mgc, with dims 1/5/60 respectively)
:*mlpg-lossy: an incorrect variant of mlpg.py that considers only the first dimension of the global covariance
:*fast-mlpg: as fast_mlpg.py in Merlin
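For reference, MLPG solves (W^T D^-1 W) c = W^T D^-1 mu, where mu and D are the predicted means and (diagonal) variances of the static+delta+delta-delta features, and W stacks the window coefficients. A simplified dense-matrix sketch for a single 1-D stream is below; it is an illustration, not a drop-in replacement for Merlin's code, which exploits the banded structure of W for speed (the point of fast-mlpg):

<pre>
# Minimal MLPG sketch for one 1-D stream (e.g. lf0) with static+delta+delta-delta.
# Windows follow the common HTS/Merlin convention; dense solve for clarity only.
import numpy as np

def mlpg(means, variances,
         windows=((0, (1.0,)),
                  (1, (-0.5, 0.0, 0.5)),
                  (1, (1.0, -2.0, 1.0)))):
    """means, variances: (T, 3) arrays of [static, delta, delta-delta] stats."""
    T = means.shape[0]
    W = np.zeros((len(windows) * T, T))
    for k, (width, coeffs) in enumerate(windows):
        for t in range(T):
            for j, w in enumerate(coeffs):
                tau = t + j - width
                if 0 <= tau < T:
                    W[k * T + t, tau] = w
    mu = means.T.reshape(-1)              # stacked stream means, (3T,)
    prec = 1.0 / variances.T.reshape(-1)  # diagonal precision D^-1
    A = (W.T * prec) @ W                  # W^T D^-1 W
    b = (W.T * prec) @ mu                 # W^T D^-1 mu
    return np.linalg.solve(A, b)          # smoothed static trajectory, (T,)

# Toy usage: smooth a noisy ramp with near-zero target deltas.
T = 10
means = np.stack([np.linspace(0.0, 1.0, T), np.zeros(T), np.zeros(T)], axis=1)
c = mlpg(means, np.ones((T, 3)))
</pre>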
 
*Computation Time (estimation)

{| class="wikitable"
! alg. !! lf0 (dim=1) !! bap (dim=5) !! mgc (dim=60)
|-
| mlpg-lossy || 100000 || 130000 || 160000
|-
| mlpg || 130000 || 500000 || 6200000
|-
| fast-mlpg || 60000 || 300000 || 3580000
|-
| ratio (vs. mlpg-lossy) || 1:1.3:0.6 || 1:4:2+ || 1:40:20+
|}
 
* Synthesis waves
:*text
::*5='好雨知时节,当春乃发声,随风潜入夜,润物细无声。'
::*13='大熊猫最大的愿望就是拍一张自己的照片。'

* no-mlpg
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_5.wav]
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_13.wav]

* mlpg-lossy
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_5.wav]
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_13.wav]

* mlpg
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_5.wav]
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_13.wav]
* fast-mlpg
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_5.wav]
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_13.wav]