Papers To Read

  • Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks


Task schedules

Summary

   --------------------------------------------------------------------------------------------------------
    Priority | Task name                     |      Status          |     Notes
   --------------------------------------------------------------------------------------------------------
        1    | Bi-Softmax                    | ■■■□□□□□□□ | 1400h AM training and problem fixing
   --------------------------------------------------------------------------------------------------------
        2    | RNN+DAE                       | □□□□□□□□□□ |
   --------------------------------------------------------------------------------------------------------

Speech Recognition

Multi-lingual AM training

Bi-Softmax

  • Using two distinct softmax output layers, one for the English data and one for the Chinese data (see the sketch below).
  • Tested on 100h-Ch + 100h-En; better performance was observed.
  • Now testing the source code on the 1400h_8k data, but strange decoding results were obtained; this needs further investigation.
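
A minimal sketch of the bi-softmax idea, assuming a PyTorch-style model: one shared hidden stack feeding two language-specific softmax output layers. The layer sizes, state counts, and the lang switch are illustrative assumptions, not the actual training recipe.

 import torch
 import torch.nn as nn

 class BiSoftmaxAM(nn.Module):
     def __init__(self, feat_dim=40, hidden_dim=1024, ch_states=4000, en_states=4000):
         super().__init__()
         # Shared hidden layers, trained on both the Chinese and the English data.
         self.shared = nn.Sequential(
             nn.Linear(feat_dim, hidden_dim), nn.Sigmoid(),
             nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
         )
         # Two distinct softmax output layers, one per language (sizes are assumed).
         self.ch_head = nn.Linear(hidden_dim, ch_states)
         self.en_head = nn.Linear(hidden_dim, en_states)

     def forward(self, feats, lang):
         h = self.shared(feats)
         head = self.ch_head if lang == "ch" else self.en_head
         # Log-softmax over the language-specific state set.
         return torch.log_softmax(head(h), dim=-1)

 # Each mini-batch carries a language tag, so only the matching softmax layer
 # (plus the shared stack) receives gradients for that batch.
 model = BiSoftmaxAM()
 log_probs = model(torch.randn(8, 40), lang="ch")  # shape: (8, 4000)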

Reading Lists

  • 苏圣 2015-10-29: Efficient mini-batch training for stochastic optimization (媒体文件:Efficient_mini-batch_training_for_stochastic_optimization.pdf)
  • http://www.cs.cmu.edu/~muli/file/minibatch_sgd.pdf
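
For reference, a toy sketch of plain mini-batch SGD on a least-squares problem, only to illustrate the baseline algorithm the two readings above analyze; the synthetic data, batch size, and learning rate are illustrative assumptions.

 import numpy as np

 rng = np.random.default_rng(0)
 X = rng.normal(size=(1000, 10))             # synthetic design matrix
 w_true = rng.normal(size=10)
 y = X @ w_true + 0.01 * rng.normal(size=1000)

 w = np.zeros(10)
 lr, batch_size = 0.1, 32
 for epoch in range(20):
     perm = rng.permutation(len(X))          # reshuffle each epoch
     for start in range(0, len(X), batch_size):
         idx = perm[start:start + batch_size]
         Xb, yb = X[idx], y[idx]
         # Average gradient of (1/2)||Xb w - yb||^2 over the mini-batch.
         grad = Xb.T @ (Xb @ w - yb) / len(idx)
         w -= lr * grad

 print(np.linalg.norm(w - w_true))           # close to 0 after training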