Difference between revisions of "2013-05-24"

== Data sharing ==

* LM count files still undelivered!

== DNN progress ==

=== Experiments ===

* sparse DNN: sticky training (retrain the nnet while keeping the sparseness)

Zero small values (test set: 1900):
{| class="wikitable"
!threshold !! 0 !! 0.01 !! 0.03 !! 0.05 !! 0.08 !! 0.1 !! 0.2 !! 0.3
|-
|shrinkage% || 0.0 || 4.3 || 12.7 || 20.9 || 32.5 || 39.5 || 66.4 || 81.6
|-
|without sticky: WER || 7.55 || 7.60 || 7.62 || 7.66 || 7.72 || 7.87 || 9.46 || 53.23
|-
|with sticky: WER || 7.55 || 7.57 || 7.60 || 7.60 || 7.63 || 7.64 || ||
|}
  
The conclusion is the same as what Tencent reported last week: trimming small values does not harm the performance much. This is good for designing a fast DNN frontend, though we probably need to rely on sparse matrices instead of the current structure. With the L2 retraining, the DNN performance is largely recovered; we are waiting for the results with extremely sparse networks.
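
For reference, a minimal numpy sketch of the thresholding and the "sticky" retraining step described above; the function names and the learning-rate value are illustrative, not taken from the actual nnet code:
<pre>
import numpy as np

def prune(W, threshold):
    """Zero out weights whose magnitude is below the threshold.
    Returns the pruned matrix, the sparsity mask and the shrinkage ratio."""
    mask = np.abs(W) >= threshold          # True where the weight survives
    shrinkage = 1.0 - mask.mean()          # fraction of weights set to zero
    return W * mask, mask, shrinkage

def sticky_update(W, grad, mask, lr=0.01):
    """One 'sticky' gradient step: the update is masked so that the
    pruned weights stay exactly zero during retraining."""
    return W - lr * (grad * mask)

# toy example
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 5))
W, mask, shrinkage = prune(W, threshold=0.05)
print("shrinkage%%: %.1f" % (100 * shrinkage))
grad = rng.normal(scale=0.01, size=W.shape)    # stand-in for a backprop gradient
W = sticky_update(W, grad, mask)
assert np.all(W[~mask] == 0)                   # pruned entries remain zero
</pre>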
  

* fixed-point DNN forwarding

Another interesting thing is to use fixed-point numbers to represent the DNN weights, which can be obtained with some simple hacks. We tested the mapping y = -log(abs(x)/1000.0)*20; the performance is as follows:
{| class="wikitable"
!    NN !!  WER% on 1900
|-
|floating ||  7.25
|-
|fixed-point  || 7.30
|-
|}

It looks like the fixed-point representation does not harm the performance much. This opens the door to using fixed-point NNs on embedded devices.
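
A rough numpy illustration of this kind of mapping; the rounding, the integer code range and the sign handling are my own assumptions, and the production code may differ:
<pre>
import numpy as np

SCALE = 20.0
BASE = 1000.0

def encode(x):
    """Map float weights to small integer codes via y = -log(|x|/1000.0)*20.
    The sign is stored separately; |x| is clipped to avoid log(0)."""
    mag = np.clip(np.abs(x), 1e-8, None)
    y = np.rint(-np.log(mag / BASE) * SCALE).astype(np.int16)
    return y, np.sign(x).astype(np.int8)

def decode(y, sign):
    """Approximate inverse of encode()."""
    return sign * BASE * np.exp(-y.astype(np.float64) / SCALE)

w = np.array([0.25, -0.031, 0.0007])
codes, signs = encode(w)
print(codes, decode(codes, signs))   # small integer codes; decoded weights are close to w
</pre>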

* fixed-point HCLG

Another aspect of boosting Kaldi-style decoding is to use a fixed-point FST. We tried the approach with different quantization scales, as follows:

{| class="wikitable"
!    HCLG !!  WER% on 1900
|-
|floating ||  7.25
|-
|y=int(x*10)|| 7.12
|-
|y=int(x*50)|| 7.27
|-
|}

It seems that the fixed-point FST does not harm the performance either.

Based on the fixed-point FST and NN results, together with the results of the sparse NN, we are working on a fast NN decoder suitable for embedded devices. The work has just started.
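
The fixed-point FST trick above is essentially uniform quantization of the arc costs; a toy numpy sketch (quantize_costs is an illustrative name, not an OpenFst/Kaldi API):
<pre>
import numpy as np

def quantize_costs(costs, scale=10):
    """Uniform quantization of FST arc costs: y = int(x * scale).
    The decoder can use the integer costs directly, or divide by the
    scale again to recover approximate float costs."""
    return np.floor(np.asarray(costs, dtype=np.float64) * scale).astype(np.int32)

costs = np.array([0.0, 0.25, 2.5, 11.75])    # toy -log probability arc costs
for scale in (10, 50):                       # the two settings tried above
    q = quantize_costs(costs, scale)
    print(scale, q, q / float(scale))        # integer codes and the dequantized costs
</pre>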
  
 
=== Tencent exps ===

* 1000-hour DNN model training, with two learning-rate experiments running in parallel: one with an exponentially decaying learning rate, the other using the newbob scheme. The experiments are close to finishing and should all be done before next week; after comparing the results, the better learning-rate decay scheme will be used to train DNN models on larger data sets.

: nice. we are looking forward to the 1000 hour results.

* On the decoder side, SSE, fixed-point computation and other speed-up optimizations have been tried, but the real-time factor still cannot be brought below 1 in a high-concurrency setting. Applying low-rank matrix approximations directly at test time degrades the performance considerably; to use the method at training time, the update formulas still need to be derived.

: we probably need to rely on the sparse net solution plus fixed-point computing. The low-rank approach seems less reasonable than L1: the idea behind low rank is to treat the weight matrix between two hidden layers as a mapping function spanning a low-rank space, which may help to recover some prominent patterns, but it is neither directly related to the objective function nor to less computing. Nevertheless, it deserves a try; just apply the low-rank projection at the end of each BP iteration (see the sketch below).
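
A minimal sketch of the "low-rank at the end of each BP iteration" idea using a truncated SVD; the matrix size, the rank and the numpy implementation are illustrative assumptions, not the actual training recipe:
<pre>
import numpy as np

def low_rank_project(W, rank):
    """Project a weight matrix onto the closest matrix of the given rank
    (truncated SVD), as could be done after each BP iteration."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# toy example: a 512x512 hidden-to-hidden weight matrix projected to rank 64
rng = np.random.default_rng(1)
W = rng.normal(scale=0.05, size=(512, 512))
W_lr = low_rank_project(W, rank=64)
print(np.linalg.matrix_rank(W_lr))                      # 64
print(np.linalg.norm(W - W_lr) / np.linalg.norm(W))     # relative approximation error
</pre>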

The 1000-hour experiments finished this week, with the following performance (WER%):

{| class="wikitable"
! Test set !! Old baseline !! New baseline !! DNN
|-
| 1900 || 8.4 || 6.8 || 4.3
|-
| 2044 || 22.4 || 15.7 || 12.7
|-
| online1 || 35.6 || 32.7 || 25.8
|-
| online2 || 29.6 || 27.3 || 22.1
|-
| map || 24.5 || 15.8 || 13.4
|-
| notepad || 16 || 8.1 || 5.6
|-
| general || 36 || 25.1 || 19.3
|-
| speedup || 26.8 || 14 ||
|}

TO DO:

* Two pretraining strategies: RBM and discriminative pretraining.

: MS suggested the latter, while the performance difference for large networks (more than 7 layers) is not significant according to the publications (see Frank's ASRU paper). For large data it deserves a try, though the RBM approach is highly costly.

* After HMM-DNN training, realign the data with the HMM-DNN model, update the transition probabilities, and retrain the HMM-DNN to check the performance.

: should be promising.

* The performance gain from HMM-DNN + sequential DT training.

* Use the low-rank approach on the DNN training side.

Next plans: 6000-hour model training, plus the other DNN-related techniques (sequential DT, alignment, pretraining).
  
 
=== GPU & CPU merge ===
 
=== GPU & CPU merge ===
第89行: 第51行:
  
 
* HTK2Kaldi: hold.
 
* HTK2Kaldi: hold.
* Kaldi2HTK: still under debugging.  
+
* Kaldi2HTK: hold and second priority

The above work is probably not very necessary, since Tencent will fully migrate to the hybrid DNN approach and therefore HTK will never be used.
 
== Embedded progress ==

* Status:
: check the VAD, to recall some missed segments.
: check the reference, and change the compiling options.
: the large-scale AM training based on the Tencent 400h data is done.
: the random output problem is fixed.
  
 
{| class="wikitable"
! Test Set !! #utt !! CMU WER% (RT) !! Tencent WER% (RT)
|-
|  cw  || 993 || 8.01 (0.07) || 7.61 (0.40)
|-
|  hfc || 986 || 6.69 (0.07) || 5.48 (0.40)
|-
|  zz  || 984 || 12.73 (0.07) || 5.91 (0.40)
|-
|}
  
:The first large-scale Chinese acoustic model training is done, with reasonable performance. Cluster-based parallel training needs to be started (SGE is not supported by sphinx).
 
  
 
* To be done
:# large-scale parallel training.
:# NN based engine (dynamic and static).
