2013-06-28
Data sharing
- LM count files still undelivered!
DNN progress
Experiments
- Sparse DNN.
1. With unmodified ATLAS on the ARM platform, obtained a real-time factor (RT) of 2.0. Will modify the ATLAS code to support sparse matrices; see the sketch below.
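The planned ATLAS change amounts to replacing dense matrix-vector products in the DNN forward pass with sparse ones. A minimal sketch of the idea in Python/NumPy (not the actual ATLAS patch); the layer size and pruning threshold are illustrative assumptions:

import numpy as np
from scipy.sparse import csr_matrix

def prune(weights, threshold):
    """Zero out weights below the magnitude threshold (illustrative value)."""
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
dense = rng.standard_normal((1024, 1024))   # hypothetical hidden layer

pruned = prune(dense, threshold=1.0)        # keeps roughly a third of the weights
sparse_w = csr_matrix(pruned)               # CSR stores only the nonzeros

x = rng.standard_normal(1024)
# The sparse product touches only stored nonzeros; this is where any
# RT reduction on ARM would have to come from.
y = sparse_w @ x
assert np.allclose(y, pruned @ x)

The win depends on the achievable sparsity level: CSR matvec does extra index bookkeeping per nonzero, so it only beats the dense kernel once most weights are pruned.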
Tencent exps
1. The large-scale DNN experiment finished this week; it essentially converged at the 8th iteration.

Test set | Old Baseline | New Baseline | DNN-1000h | DNN-6000h (7 iters) | DNN-6000h (8 iters)
1900 | 8.4 | 6.8 | 4.3 | 3.9 | 3.7
2044 | 22.4 | 15.7 | 12.7 | 10.7 | 10.6
online1 | 35.6 | 32.7 | 25.8 | 24.6 | 24.6
online2 | 29.6 | 27.3 | 22.1 | 21.1 | 20.8
map | 24.5 | 15.8 | 13.4 | 8.7 | 8.4
general | 36 | 25.1 | 19.3 | 15.9 | 16
2. The DNN+DT code is complete; experiments can start next week, beginning with a small dataset.
GPU & CPU merge
- Hold
RNN LM progress
- Trained on 100M text with a 10k lexicon. The validation set is obtained from transcriptions of the Tencent online1 speech data.
- RNN: 1 hidden layer with 100 units; baseline: 3-gram LM.
- Training time: 7 hours, 8GB memory.
- Prediction time: fast, 8GB memory.
- 3-gram: PPL 227.323, WER 36%
- RNN: PPL 170.056129, WER 41%
- 3-gram + RNN (0.25 RNN + 0.75 3-gram): PPL 180.0, WER 35% (see the sketch after this list)
- possibly a bug when computing PPL with the RNN toolkit.
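For reference, the interpolated PPL above corresponds to mixing per-word probabilities before taking logs. A minimal sketch, assuming each model can return p(w | history); the function names and toy probabilities are hypothetical, not part of either toolkit:

import math

def perplexity(log_probs):
    """PPL = exp(-1/N * sum of natural-log word probabilities)."""
    return math.exp(-sum(log_probs) / len(log_probs))

def interpolate(p_rnn, p_ngram, lam=0.25):
    """Linear interpolation lam * p_rnn + (1 - lam) * p_ngram,
    matching the 0.25 RNN + 0.75 3-gram weights above."""
    return lam * p_rnn + (1.0 - lam) * p_ngram

# Hypothetical per-word probabilities for a 4-word validation sentence.
rnn_probs = [0.05, 0.20, 0.01, 0.10]
ngram_probs = [0.04, 0.10, 0.02, 0.15]

mixed = [interpolate(pr, pn) for pr, pn in zip(rnn_probs, ngram_probs)]
print(perplexity([math.log(p) for p in mixed]))

Recomputing PPL independently along these lines is one way to check the suspected bug in the RNN toolkit's PPL computation noted above.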
Embedded progress
- Status:
- 1000 test words + 2000 noise words
Metric | before | after
utt | 952 | 3317
%WER | 6.26% | 11.04%
RT | 0.07 | 0.20
This suggests the GMM-based system relies heavily on the vocabulary: it may work well with a small lexicon but has difficulty with large ones.
- Runs with other optimization parameters (see the sweep sketch after this table):
option | RT | %WER
original | 0.07 | 6.28%
-ds | 0.06 | 6.33%
-topn | 0.06 | 6.80%
-maxwpf | - | -
-maxhmmpf | - | -
-kdmaxdepth | - | -
-kdmaxbbi | - | -
-pl_window | - | -
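These are PocketSphinx decoder tuning flags. A minimal sketch of how such a sweep could be driven from Python, assuming a pocketsphinx_batch setup; the model paths, control file, and flag values are hypothetical placeholders, not the settings actually tested:

import subprocess

# Hypothetical baseline invocation; all paths are placeholders.
BASE_CMD = [
    "pocketsphinx_batch",
    "-hmm", "model/hmm",
    "-lm", "model/lm.dmp",
    "-dict", "model/words.dict",
    "-ctl", "test/test.fileids",
    "-hyp", "out/hyp.txt",
]

# One candidate value per flag from the table above; the values
# themselves are illustrative, not the ones actually run.
SWEEP = {
    "-ds": "2",
    "-topn": "2",
    "-maxwpf": "10",
    "-maxhmmpf": "3000",
    "-pl_window": "5",
}

for flag, value in SWEEP.items():
    cmd = BASE_CMD + [flag, value]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
    # RT and %WER would then be filled into the table from the batch
    # timing output and a separate scoring step.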
- To be done
- sparse-DNN-based Kaldi engine
- sparse-DNN-based PS engine