2013-05-24

Data sharing

  • LM count files still undelivered!

DNN progress

Experiments

  • sparse DNN: sticky training (retrain the nnet while keeping the sparseness)

Zeroing small weights (test set: 1900):

threshold         0     0.01  0.03  0.05  0.08  0.1   0.2   0.3
shrinkage (%)     0.0   4.3   12.7  20.9  32.5  39.5  66.4  81.6
WER (no sticky)   7.55  7.60  7.62  7.66  7.72  7.87  9.46  53.23
WER (sticky)      7.55  7.57  7.60  7.60  7.63  7.64

The conclusion is the same as what Tencent reported last week: trimming small values does not harm performance much. This is good news for designing a fast DNN front-end, though we probably need to rely on sparse matrices instead of the current structure.
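For the record, a minimal numpy sketch of the sticky scheme as we read it: prune by magnitude, remember the mask, and re-apply the mask after every update so the zeros stay put. The mask handling and the learning rate are our assumptions, not the exact training recipe.

 import numpy as np

 def prune_small_weights(w, threshold):
     # Zero every weight whose magnitude is below the threshold and
     # remember the surviving positions in a binary mask.
     mask = (np.abs(w) >= threshold).astype(w.dtype)
     return w * mask, mask

 def sticky_step(w, grad, mask, lr=0.008):
     # One "sticky" SGD step: update as usual, then re-apply the mask
     # so pruned weights stay exactly zero during retraining.
     return (w - lr * grad) * mask

 w = np.random.randn(512, 512) * 0.1      # toy layer (hypothetical size)
 w, mask = prune_small_weights(w, threshold=0.05)
 print("shrinkage: %.1f%%" % (100.0 * (1.0 - mask.mean())))
 g = np.random.randn(*w.shape) * 0.01     # stand-in for a real gradient
 w = sticky_step(w, g, mask)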

  • fixed-point DNN

Another interesting idea is to use fixed-point numbers to represent DNN weights. This can be obtained with some hack approaches. We tested a simple mapping as follows: y = -log(abs(x)/1000.0) * 20. The performance is as follows:


NN           WER% on 1900
floating     7.25
fixed-point  7.30

It looks like the fixed-point representation does not harm the performance much. This opens the door to running fixed-point NNs on embedded devices.
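A small sketch of that mapping and its inverse, assuming the sign is stored separately and zeros are clamped (both our assumptions; the note only defines the magnitude mapping). Rounding y to an integer code gives roughly a 2.5% worst-case relative error per weight, which is consistent with the small WER change above.

 import numpy as np

 def quantize(x, eps=1e-12):
     # y = -log(|x| / 1000.0) * 20, rounded to an integer code; the
     # sign is stored separately (our assumption: the mapping above
     # only defines the magnitude). eps guards against log(0).
     mag = np.maximum(np.abs(x), eps)
     y = np.rint(-np.log(mag / 1000.0) * 20.0).astype(np.int16)
     return np.sign(x), y

 def dequantize(sign, y):
     # Inverse mapping: |x| = 1000 * exp(-y / 20).
     return sign * 1000.0 * np.exp(-y / 20.0)

 w = np.random.randn(4, 4) * 0.1
 s, y = quantize(w)
 print("max relative error:", np.max(np.abs(w - dequantize(s, y)) / np.abs(w)))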

  • fixed-point HCLG

Another way to boost kaldi-style decoding is to use a fixed-point FST. We tried the approach with different quantization scales, as follows:

FST weights   WER% on 1900
floating      7.25
y=int(x*10)   7.12
y=int(x*50)   7.27

It seems that the fixed-point FST does not harm the performance.
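The mapping itself is just truncation after scaling; a tiny sketch (the arc costs are made up, and in practice one would rewrite the weights inside the HCLG FST rather than a plain array):

 import numpy as np

 def quantize_costs(costs, scale=10):
     # Fixed-point FST weights: y = int(x * scale); int() truncates
     # toward zero, which np.trunc reproduces.
     return np.trunc(np.asarray(costs, dtype=np.float64) * scale).astype(np.int64)

 costs = [0.105, 2.302, 4.605]           # made-up arc costs (-log probabilities)
 for scale in (10, 50):
     q = quantize_costs(costs, scale)
     print(scale, q, q / float(scale))   # integer codes and decoded costs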

Tencent exps

  • Training DNN models on 1000 hours of data, while running two learning-rate experiments in parallel: one with an exponentially decaying learning rate, the other with the newbob schedule. The experiments are close to finished and should all be done by next week. After comparing the results, the better learning-rate decay scheme will be adopted to train DNN models on larger-scale data (see the newbob sketch after this list).
Nice. We are looking forward to the 1000-hour results.
  • On the decoder side we tried SSE, fixed-point arithmetic and other speed optimizations, but under high concurrency the real-time factor still cannot be brought below 1. Applying low-rank matrix approximations directly at test time degrades performance considerably; to use the method at training time, the formulas still need to be derived.
We probably need to rely on the sparse-net solution plus fixed-point computing. The low-rank approach seems less reasonable than L1: the idea behind it is to treat the weight matrix between two hidden layers as a mapping spanning a low-rank space, which may help recover some prominent patterns, but it is neither directly tied to the objective function nor directly tied to less computation. Nevertheless, it deserves a try; just do the low-rank projection at the end of each BP iteration (see the sketch after the TO DO list).
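For reference, a minimal sketch of a newbob-style schedule, paraphrasing the classic QuickNet recipe rather than Tencent's exact settings (the 0.5% threshold, halving factor and starting rate are all assumptions; the exponential alternative is simply lr_t = lr_0 * gamma^t):

 def newbob_lr(prev_acc, cur_acc, lr, ramping,
               start_halving=0.005, halve_factor=0.5):
     # Relative CV improvement drives the schedule: keep the learning
     # rate constant until the gain drops below start_halving, then
     # halve it after every epoch and stop once the gains vanish.
     improvement = (cur_acc - prev_acc) / max(abs(prev_acc), 1e-12)
     if ramping:
         return lr * halve_factor, True, improvement < start_halving
     if improvement < start_halving:
         return lr * halve_factor, True, False
     return lr, False, False

 lr, ramping = 0.008, False                        # arbitrary starting rate
 cv_accs = [60.0, 63.0, 64.5, 64.9, 65.0, 65.02]   # made-up CV accuracies
 for prev, cur in zip(cv_accs, cv_accs[1:]):
     lr, ramping, stop = newbob_lr(prev, cur, lr, ramping)
     if stop:
         break
 print("final lr:", lr)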


TO DO:

  • Two pretraining strategies: RBM and discriminative pretraining.
MS suggested the latter, while the performance difference for large networks (more than 7 layers) is not significant according to the publications (see Frank's ASRU paper). For large data it deserves a try, though the RBM approach is highly costly.
  • After HMM-DNN training, realign with the HMM-DNN model and update the transition probabilities, then retrain the HMM-DNN and check the performance.
Should be promising.
  • Measure the performance gain of HMM-DNN plus sequential discriminative training.
  • Apply low-rank approximation on the DNN training side (see the sketch after this list).
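A sketch of the projection step, assuming truncated SVD is what is meant by "do the low-rank at the end of each BP iteration" (the rank and matrix size are arbitrary examples, not validated settings):

 import numpy as np

 def low_rank_project(w, rank):
     # Replace w with its best rank-`rank` approximation (truncated SVD).
     u, s, vt = np.linalg.svd(w, full_matrices=False)
     return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

 # Inside a (pseudo) training loop, right after the weight update:
 w = np.random.randn(1024, 1024)      # hypothetical hidden-layer matrix
 w = low_rank_project(w, rank=256)    # rank=256 is an arbitrary choice
 print(np.linalg.matrix_rank(w))      # -> 256

To actually cut computation at test time, one would keep the two factors (u[:, :rank] * s[:rank] and vt[:rank, :]) as separate layers, reducing a forward pass from m*n to rank*(m+n) multiply-adds.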


GPU & CPU merge

  1. In progress.


Kaldi/HTK merge

  • HTK2Kaldi: on hold.
  • Kaldi2HTK: still under debugging.

Embedded progress

  • Status:
Checked the VAD and recalled some missed segments.
Test set  #utt  WER    RT
cw        993   13.64  0.07
hfc       986   9.84   0.08
zz        984   16.87  0.08
The first large-scale Chinese model training is done, with reasonable performance. We need to start cluster-based parallel training (SGE is not supported by sphinx).
  • To be done
  1. parallel training.
  2. Kaldi-based embedded engine design.
  3. debug the random output issue with the demo.