Revision as of 06:24, 19 April 2013 (Friday)

1. Data sharing

(1) AM/lexicon/LM are shared.
(2) LM count files are still being transferred.

2. DNN progress

(1) 400-hour BN model.
(2) Tencent test result: 70 hours of training data (2 days, 15 machines, 10 threads), 88k LM, general test case (relative reductions are worked out at the end of this section):

    gmm-bmmi: 38.7%
    dnn-1: 28%   (11-frame window, phone-based tree)
    dnn-2: 34%   (9-frame window, state-based tree)

(3) GPU & CPU merge: investigate the possibility of merging the GPU and CPU code and try to find an easier way. (1 week)

(4) L1 sparse initial training; a brief illustrative sketch follows below.
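
Regarding item (4): as a rough illustration of what L1-based sparse initialization could look like, here is a minimal sketch of soft-thresholding (the proximal operator of the L1 penalty) applied to a trained weight matrix before sparse retraining. The threshold and the layer dimensions are illustrative assumptions, not the actual recipe.

    import numpy as np

    def l1_soft_threshold(weights, lam):
        """Proximal step for an L1 penalty: shrink each weight toward zero
        by lam and set anything with magnitude below lam exactly to zero."""
        return np.sign(weights) * np.maximum(np.abs(weights) - lam, 0.0)

    # Illustrative numbers only: a random "pre-trained" layer and an assumed threshold.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(1024, 429))   # e.g. 1024 hidden units, 11 x 39-dim input
    W_sparse = l1_soft_threshold(W, lam=0.05)

    print("zero weights after thresholding: {:.1%}".format(np.mean(W_sparse == 0.0)))

The resulting zero pattern could then be kept fixed (or gradually re-grown) during the subsequent training passes.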

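For reference, the relative improvements implied by the WER numbers in item (2) above can be worked out as follows (a minimal Python sketch; the figures are taken directly from the table):

    # Relative WER reduction of the two DNN systems over the GMM-bMMI baseline,
    # using the Tencent general-test-case numbers above.
    baseline = 38.7                      # gmm-bmmi WER (%)
    systems = {"dnn-1 (11-frame, phone-based tree)": 28.0,
               "dnn-2 (9-frame, state-based tree)": 34.0}

    for name, wer in systems.items():
        rel = (baseline - wer) / baseline * 100.0
        print("{}: {:.1f}% WER, {:.1f}% relative reduction".format(name, wer, rel))

So dnn-1 gives roughly a 27.6% relative reduction over the baseline, and dnn-2 roughly 12.1%.
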
3. Kaldi/HTK merge

(1) HTK2Kaldi: the tool bundled with Kaldi does not work.
(2) Kaldi2HTK: implementation is done; testing still open.

4. Embedded progress

(1) Large performance (speed) degradation on the embedded platform (running at roughly 1/60 of the original speed); an RTF illustration follows at the end of this section.

(2) Planning for sparse DNN.

(3) QA LM training: Mengyuan?
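
Assuming the 1/60 in item (1) means the decoder runs at roughly 1/60 of its desktop speed, here is a small sketch of the usual real-time-factor (RTF) bookkeeping; the desktop numbers are assumptions for illustration only:

    def real_time_factor(decode_seconds, audio_seconds):
        """RTF = processing time / audio duration; RTF < 1 means faster than real time."""
        return decode_seconds / audio_seconds

    # Assumed example: a 60 s utterance decoded in 30 s on the desktop machine.
    desktop_rtf = real_time_factor(30.0, 60.0)    # 0.5
    embedded_rtf = desktop_rtf * 60.0             # running at ~1/60 of the desktop speed
    print("desktop RTF:  {:.1f}".format(desktop_rtf))
    print("embedded RTF: {:.1f} (far from real time)".format(embedded_rtf))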