Difference between revisions of "ASR:2014-12-22"

From cslt Wiki
 
==== Environment ====

* Already bought 3 GTX 760 GPUs.
* The 760 GPUs on grid-9/12 crashed again; grid-11 shut down automatically.
* Changed the 760 GPU cards of grid-12 and grid-14. (+)
* First down-clocked the GTX 760s.
* grid-11/12 shut down automatically.
* Re-exchanged the GTX 760s of grid-12 and grid-14.
 
==== Sparse DNN ====

* Performance improvement found when pruned slightly.
* Need retraining for the unpruned one; training loss
* Details at http://liuc.cslt.org/pages/sparse.html
* To conduct MPE training.
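The "pruned slightly" result above is the usual magnitude-based pruning step; a minimal sketch in plain Python (the function name and tie-breaking rule are illustrative, not the group's actual recipe):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a weight matrix
    (given here as a flat list). Returns (pruned_weights, mask); the
    mask is kept so retraining can re-apply it after every update,
    holding pruned connections at zero."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights), [1] * len(weights)
    # magnitude of the k-th smallest weight = pruning threshold;
    # ties at the threshold are all pruned, so realized sparsity
    # can slightly exceed the request
    threshold = sorted(abs(w) for w in weights)[k - 1]
    mask = [0 if abs(w) <= threshold else 1 for w in weights]
    return [w * m for w, m in zip(weights, mask)], mask
```

Retraining the pruned net then proceeds with the mask re-applied after each update so pruned weights stay at zero.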
  
 
==== RNN AM====

* The initial nnet does not perform very well; it needs pre-training or a lower learning rate.
* For AURORA4, 1 h/epoch; model training done.
* Using AURORA4 short sentences with a smaller number of targets. (+)
* Adjusting the learning rate. (+)
* Trying the Microsoft toolkit. (+)
* Details at http://liuc.cslt.org/pages/rnnam.html
* Reading papers.
 
  
 
==== A new nnet training scheduler ====

* Initial code done. No better than the original one, considering it takes many more iterations.
* Details at http://liuc.cslt.org/pages/nnet-sched.html
* Tested on the 500h dataset, 36 epochs / 8 batches -- similar performance observed compared with the standard recipe.
* Test on the 4000h dataset.
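For reference, the standard recipe the new scheduler is compared against follows a newbob-style rule: hold the learning rate while held-out loss keeps improving, then halve it every epoch once improvement stalls. A rough sketch, with threshold values that are assumptions rather than the group's exact settings:

```python
def newbob_lrs(dev_losses, lr0=0.008, start_halving=0.01, end_halving=0.001):
    """Learning rate chosen for each epoch after the first, given the
    per-epoch held-out losses observed so far. Keep lr0 while relative
    improvement stays above start_halving; then halve every epoch and
    stop once improvement drops below end_halving."""
    lr, halving, out = lr0, False, []
    for prev, cur in zip(dev_losses, dev_losses[1:]):
        rel_impr = (prev - cur) / abs(prev)   # relative improvement this epoch
        if halving or rel_impr < start_halving:
            halving = True
            lr /= 2.0
        out.append(lr)
        if halving and rel_impr < end_halving:
            break                             # converged: stop training
    return out
```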
  
  
 
* Dropout (+)
:* Use different proportions of noise data to investigate the effect of xEnt, MPE, and dropout.
:** Problem 1) The effect of dropout under different noise proportions;
:** Problem 2) The effect of MPE under different noise proportions;
:** Problem 3) The effect of MPE+dropout under different noise proportions.
:** http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=261
:* Conclusion
:** Find and test unknown-noise test data. (++)
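The dropout experiments above apply the standard inverted-dropout rule; a minimal sketch (the rate in the usage example is illustrative, not the configuration used in the experiments):

```python
import random

def dropout(activations, p_drop, rng=None, train=True):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and scale survivors by 1/(1 - p_drop) so the expected
    activation is unchanged; at test time the layer is the identity."""
    if not train or p_drop == 0.0:
        return list(activations)
    rng = rng or random.Random(0)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```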
  
* MaxOut && P-norm
:* Pretraining-based maxout cannot use a large learning rate.

* P-norm
:* Need to solve the too-small learning-rate problem:
:** Add one normalization layer after the pnorm layer.
:** Add an L2-norm upper bound.
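The first fix above can be sketched directly: a p-norm group reduction followed by a normalization layer that rescales activations to a fixed RMS, which is what keeps activation magnitudes (and hence the usable learning-rate range) stable. A rough sketch; group size and target RMS below are illustrative:

```python
import math

def pnorm(x, group_size, p=2):
    """Kaldi-style p-norm nonlinearity: each group of `group_size`
    consecutive inputs is reduced to its p-norm."""
    assert len(x) % group_size == 0
    return [sum(abs(v) ** p for v in x[i:i + group_size]) ** (1.0 / p)
            for i in range(0, len(x), group_size)]

def renorm(y, target_rms=1.0):
    """Normalization layer placed right after the p-norm layer:
    rescale the vector so its root-mean-square equals target_rms."""
    rms = math.sqrt(sum(v * v for v in y) / len(y))
    if rms == 0.0:
        return list(y)
    return [v * target_rms / rms for v in y]
```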
  
* Convolutive network (+)
:* DAE test: to test various noises (car/echo/airport...).
 ---------------------------------------------------------------------------------------------------------
                 | group-size | cnn-output | test_clean_wv1 | test_car_wv1 | test_babble_wv1 | test_airport_wv1
 ---------------------------------------------------------------------------------------------------------
  max_out_32     | 64         | 32         | 6.82           | 17.75        | 36.77           | 35.61
  max_out_128    | 16         | 128        | 6.09           | 15.92        | 31.74           | 30.85
  max_out_256    | 8          | 256        | 6.38           | 16.47        | 31.32           | 31.93
  max_out_32_MPE | 64         | 32         | 6.25           | 18.62        | 49.07           | 46.25
  cnn_layer_3_3  |            |            | 5.73           | 18.09        | 30.92           | 30.81
  cnn_std        |            |            | 5.73           | 17.25        | 27.59           | 29.07
  dnn_std        |            |            | 6.04           | 16.37        | 27.76           | 29.91
 ---------------------------------------------------------------------------------------------------------
 
  
 
====DAE (Deep Auto-Encoder)====

  (1) train_clean

    drop-retention/testcase (WER) | test_clean_wv1 | test_airport_wv1 | test_babble_wv1 | test_car_wv1
   ---------------------------------------------------------------------------------------------------
    std-xEnt-sigmoid-baseline     | 6.04           | 29.91            | 27.76           | 16.37
    std+dae_cmvn_noFT_2-1200      | 7.10           | 15.33            | 16.58           | 9.23
    std+dae_cmvn_splice5_2-100    | 8.19           | 15.21            | 15.25           | 9.31

:* Test on XinWenLianBo music; results at:
:** http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhaomy&step=view_request&cvssid=318
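The DAE rows above come from training a network to map corrupted input back to clean features. A minimal single-hidden-layer sketch of that idea in plain Python; layer sizes, learning rate, and noise level are illustrative, and the actual systems are Kaldi DNNs, not this toy:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dae_sgd_step(x, params, noise_std, lr, rng):
    """One SGD step of a one-hidden-layer denoising autoencoder:
    corrupt the input with Gaussian noise, encode with a sigmoid layer,
    decode linearly, and update every parameter toward reconstructing
    the CLEAN input under squared error. params = (W, b, V, c):
    encoder weights/bias, decoder weights/bias, updated in place.
    Returns the loss before the update."""
    W, b, V, c = params
    n_in, n_h = len(x), len(b)
    xt = [xi + rng.gauss(0.0, noise_std) for xi in x]           # corrupted input
    h = [sigmoid(sum(W[j][i] * xt[i] for i in range(n_in)) + b[j])
         for j in range(n_h)]                                   # encoding
    xhat = [sum(V[i][j] * h[j] for j in range(n_h)) + c[i]
            for i in range(n_in)]                               # reconstruction
    loss = sum((xhat[i] - x[i]) ** 2 for i in range(n_in))
    d_xhat = [2.0 * (xhat[i] - x[i]) for i in range(n_in)]
    d_h = [sum(V[i][j] * d_xhat[i] for i in range(n_in)) * h[j] * (1.0 - h[j])
           for j in range(n_h)]                                 # backprop: decoder + sigmoid
    for i in range(n_in):                                       # decoder update
        for j in range(n_h):
            V[i][j] -= lr * d_xhat[i] * h[j]
        c[i] -= lr * d_xhat[i]
    for j in range(n_h):                                        # encoder update
        for i in range(n_in):
            W[j][i] -= lr * d_h[j] * xt[i]
        b[j] -= lr * d_h[j]
    return loss
```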
  
 
====Speech rate training====

* Data ready on the Tencent set; some errors in the speech-rate-dependent model; errors fixed.
* 64.41 -> 34.4
* Retrain new model. (+)

====Scoring====

* Timbre comparison done.
* Harmonics-based timbre comparison: the frequency-based feature is better. Done.
* GMM-based timbre comparison done; similar to speaker recognition. Done.
* TODO: code check-in and '''technique report'''. Done.
  
 
  
 
===Speaker ID===

* Preparing GMM-based server.
:* Non-stream GMM: WER 2.28%
:: separate3-ivector: WER 3.54; single-ivector: WER 1.57
:: separate-PLDA: WER 0.87; single-PLDA: WER 1.04
* EER ~ 4% (GMM-based system), text-independent.
* EER ~ 6% (1s) / 0.5% (5s) (GMM-based system), text-dependent.
* Test different numbers of components; fast i-vector computing.
:* Code ready.
:* Tested with number recordings; 256 components works best.
:* Tested with text-dependent recordings; 1024 components works best.
:* Interpolation alpha is not sensitive.
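The EER figures above are the operating point where false-alarm and miss rates are equal. A minimal sketch of computing it from raw trial scores; real toolkits interpolate the DET curve, while this version just takes the closest threshold:

```python
def eer(target_scores, nontarget_scores):
    """Equal error rate from raw trial scores: sweep every score as a
    decision threshold and return (FAR + FRR) / 2 at the operating
    point where the two rates are closest."""
    best_gap, best_rate = None, None
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        far = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        frr = sum(s < t for s in target_scores) / len(target_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_rate = gap, (far + frr) / 2.0
    return best_rate
```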
  
 

Revision as of 08:05, 22 December 2014

Speech Processing

AM development



Dropout & Maxout & Convolutive network

  • Dropout is effective for minority.




Denoising & Farfield ASR

  • ICASSP paper submitted.
  • HOLD

VAD

  • Harmonics and Teager energy features being investigated. (++)
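The Teager energy feature under investigation is the discrete operator Psi[x](n) = x(n)^2 - x(n-1)·x(n+1); a minimal sketch:

```python
def teager_energy(x):
    """Discrete Teager energy operator:
    Psi[x](n) = x(n)^2 - x(n-1) * x(n+1).
    For A*sin(w*n) this equals A^2 * sin(w)^2 exactly, so it tracks
    amplitude and frequency together: voiced speech frames yield
    large, stable values, which is why it is tried as a VAD feature."""
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```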


Confidence

  • Reproduce the experiments on the Fisher dataset.
  • Use the Fisher DNN model to decode the all-WSJ dataset.
  • Preparing scoring for Puqiang data.
  • HOLD


Language ID

  • GMM-based language ID is ready.
  • Delivered to Jietong
  • Prepare the test-case

Voice Conversion

  • Yiye is reading materials(+)


Text Processing

LM development

Domain specific LM

  • Domain LM
  • Sougou2T: KN counting continues.
  • LM v2.0 done; just need to test the WER.
  • New dict.

tag LM

  • Summary done.
  • To do:
  • Tag probability: test adding the weight (hanzhenglong) and hand over to hanzhenglong. (hold)
  • Paper done; beginning to revise.

RNN LM

  • RNN:
  • Test the WER of the RNNLM on Chinese data from jietong-data (this week).
  • Generate an n-gram model from the RNNLM and test the PPL with different-sized texts.[1]
  • LSTM+RNN:
  • Check the LSTM-RNNLM code for how to initialize and update the learning rate. (hold)
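The PPL test above reduces to a one-liner given per-word log-probabilities; the second helper shows the usual way an RNN LM is combined with a generated n-gram model (the 0.5 weight is an assumption, normally tuned on held-out data):

```python
import math

def perplexity(log_probs):
    """Perplexity from per-word natural-log probabilities:
    exp of the negative mean log-likelihood."""
    return math.exp(-sum(log_probs) / len(log_probs))

def interpolate(p_rnn, p_ngram, lam=0.5):
    """Linear interpolation of RNN LM and n-gram word probabilities."""
    return lam * p_rnn + (1.0 - lam) * p_ngram
```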

Word2Vector

W2V based doc classification

  • Initial results for the variational Bayesian GMM obtained. Performance is not as good as the conventional GMM. (hold)
  • Non-linear inter-language transform, English-Spanish-Czech: WV model training done; transform model under investigation.
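For context, a common word2vec-based document-classification baseline: average the word vectors into a document vector and compare by cosine similarity. The averaging scheme here is illustrative; the VB-GMM system above is a different classifier over the same vectors:

```python
import math

def doc_vector(word_vectors):
    """Average word vectors into a single document vector."""
    dim = len(word_vectors[0])
    n = len(word_vectors)
    return [sum(v[i] for v in word_vectors) / n for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```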

Knowledge vector

  • Knowledge vector work started.
  • Code done; to test the baseline on a task.
  • Problem with the weights.

relation

  • Implemented TransE with almost the same performance as the paper reported (even better).[2]
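TransE scores a triple (head, relation, tail) by how well h + r ≈ t holds in embedding space, and trains with a pairwise margin loss against corrupted triples. A minimal sketch; the L1 distance and the margin of 1.0 are common choices, not necessarily the settings used here:

```python
def transe_score(h, r, t):
    """TransE plausibility of a (head, relation, tail) triple: the
    negative L1 distance ||h + r - t||. A true triple should have
    h + r close to t, i.e. a score near zero."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

def margin_loss(pos_score, neg_score, margin=1.0):
    """Pairwise ranking loss used to train TransE: push a corrupted
    triple's score below the true triple's by at least `margin`."""
    return max(0.0, margin - pos_score + neg_score)
```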

Character to word

  • Character-to-word conversion. (hold)
  • Prepare the task: word similarity.
  • Prepare the dict.

Translation

  • v5.0 demo released.
  • Cut the dictionary and use the new segmentation tool.

QA

improve fuzzy match

  • Add synonym similarity using the MERT-4 method. (hold)

improve lucene search

  • Multi-query performance improved from 66.228 to 68.672; details: [3]
  • Check the MERT problem where it doesn't match the QA.

XiaoI framework

  • NER from XiaoI done.

query normalization

  • Using NER to normalize words.
  • The new intern will install SEMPRE.