2013-04-12

Latest revision as of 05:40, 19 April 2013

1. Data sharing

(1) Acoustic data ready. The features are PLP with HLDA, and the model is PLP+HLDA+MPE. All software is ready.
(2) The LM data and model are being transferred.

2. DNN progress

(1) 400-hour BN training is done: MFCC+LDA (300/1200/1200/1220/40/1200/38xx), followed by (MFCC+BN) with LDA (a layer-size sketch follows this list).
(2) Comparison between MFCC and BN (fMPE applied): the relative improvement is 17.8% on chslm-biglm and 13.2% on chslm.
(3) BN system vs. hybrid system, relative performance improvement: 9.8% for hybrid, 6.6% for BN, 9.7% for BN+MFCC.
(4) GPU and CPU style comparison: still in progress, working on data checking. SGE is still problematic (Chao can help). Hopefully done in one or two weeks.
(5) RTF comparison between the DNN hybrid and GMM: 0.57 vs 0.36 (RTF = decoding time / audio duration, so both run faster than real time).
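
For reference, here is a minimal numpy sketch of the bottleneck network shape quoted in item (1), reading 300/1200/1200/1220/40/1200/38xx as input size, hidden layer sizes (the 40-dim layer being the bottleneck), and output size. The sigmoid hidden nonlinearity and the 3800 placeholder for "38xx" are assumptions for illustration only; this is not the actual training recipe, just how the 40-dim BN activations would be taken out of the net.

import numpy as np

LAYER_SIZES = [300, 1200, 1200, 1220, 40, 1200, 3800]  # 3800 is only a placeholder for "38xx"
BN_LAYER = 4  # index of the 40-dim bottleneck layer in LAYER_SIZES

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, upto=None):
    """Propagate one (spliced, LDA-transformed) feature frame through the net.
    With upto=BN_LAYER the 40-dim bottleneck activation is returned; these are
    the BN features that get concatenated with MFCC and passed through the
    second LDA mentioned above."""
    h = x
    n_layers = len(weights) if upto is None else upto
    for i in range(n_layers):
        h = h @ weights[i] + biases[i]
        if i < len(weights) - 1:   # sigmoid on every hidden layer, incl. the bottleneck (assumption)
            h = sigmoid(h)
    return h                       # final layer left linear here (softmax omitted)

frame = rng.standard_normal(300)           # one 300-dim MFCC+LDA input frame
bn_feat = forward(frame, upto=BN_LAYER)    # 40-dim BN feature for this frame
print(bn_feat.shape)                       # (40,)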


3. Kaldi/HTK merge

(1) HTK2Kaldi: the tool Kaldi delivered is problematic; the HMM structure seems erratic. Need to make corrections (hopefully within one week); see the sketch after this list.
(2) Kaldi2HTK: need to design a new tool (possibly within one week).
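
As a note on item (1): one common source of "erratic" HMM structure in HTK-to-Kaldi conversion is that an HTK <TransP> matrix includes non-emitting entry and exit states, while a Kaldi topology lists transitions per emitting state. The plain-Python sketch below, with a made-up 3-emitting-state left-to-right matrix, only illustrates that reindexing; it is not the delivered tool and does not touch the GMM parameters.

def emitting_transitions(transp):
    """Drop the non-emitting entry/exit rows and return, for each emitting
    state, (self_loop_prob, leave_prob) -- the pair a Kaldi-style topology
    entry carries."""
    n = len(transp)
    pairs = []
    for i in range(1, n - 1):                       # skip entry (0) and exit (n-1)
        self_loop = transp[i][i]
        leave = sum(transp[i][j] for j in range(n) if j != i)
        pairs.append((self_loop, leave))
    return pairs

# Made-up 5x5 HTK-style transition matrix for a 3-emitting-state HMM.
htk_transp = [
    [0.0, 1.0, 0.0, 0.0, 0.0],   # entry (non-emitting) -> first emitting state
    [0.0, 0.6, 0.4, 0.0, 0.0],   # emitting state 1
    [0.0, 0.0, 0.6, 0.4, 0.0],   # emitting state 2
    [0.0, 0.0, 0.0, 0.7, 0.3],   # emitting state 3 -> exit
    [0.0, 0.0, 0.0, 0.0, 0.0],   # exit (non-emitting)
]
print(emitting_transitions(htk_transp))   # [(0.6, 0.4), (0.6, 0.4), (0.7, 0.3)]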

4. Embedded progress

(1). GFCC training/testing. GFCC seems highly robust to noise, though not as good as MFCC in silence.
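
To make the GFCC term concrete, a rough numpy/scipy sketch of gammatone-filterbank cepstral coefficients follows (gammatone filtering, frame energies, log, DCT). The filter count, frame sizes, crude geometric centre-frequency spacing, and the 4th-order gammatone / Glasberg-Moore ERB constants are generic textbook choices, not the configuration used in these experiments.

import numpy as np
from scipy.signal import fftconvolve
from scipy.fftpack import dct

def erb(f):
    # Equivalent rectangular bandwidth (Glasberg & Moore approximation)
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, dur=0.05, order=4):
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * erb(fc)
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

def gfcc(signal, fs, n_filters=32, n_ceps=13, frame_len=0.025, frame_shift=0.010):
    centres = np.geomspace(100.0, 0.45 * fs, n_filters)   # crude spacing; ERB-rate spacing is more usual
    flen, fshift = int(frame_len * fs), int(frame_shift * fs)
    n_frames = 1 + (len(signal) - flen) // fshift
    energies = np.zeros((n_frames, n_filters))
    for k, fc in enumerate(centres):
        out = fftconvolve(signal, gammatone_ir(fc, fs), mode="same")
        for i in range(n_frames):
            frame = out[i * fshift:i * fshift + flen]
            energies[i, k] = np.sum(frame ** 2) + 1e-10
    return dct(np.log(energies), axis=1, norm="ortho")[:, :n_ceps]

fs = 16000
signal = np.random.randn(fs)      # one second of noise as a stand-in signal
print(gfcc(signal, fs).shape)     # (n_frames, 13)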

(2). Prototype design. Application design is ongoing. Plan to deliver a DNN decoder; a sparse DNN might be a good solution.
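
On the sparse DNN remark: the usual idea is magnitude pruning, i.e. zeroing small weights and storing each layer sparsely so the embedded decoder multiplies far fewer numbers. The toy numpy/scipy sketch below shows that idea only; the 1200x1200 layer size and the 90% pruning rate are arbitrary examples, not the planned design.

import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
W = rng.standard_normal((1200, 1200)) * 0.1     # one dense hidden-layer weight matrix

threshold = np.quantile(np.abs(W), 0.90)        # keep only the largest 10% of weights
W_sparse = csr_matrix(np.where(np.abs(W) >= threshold, W, 0.0))

x = rng.standard_normal(1200)
dense_out = W.T @ x                             # original layer output (reference)
sparse_out = W_sparse.T @ x                     # pruned layer output

print("nonzero fraction:", W_sparse.nnz / W.size)               # about 0.10
print("mean |difference|:", np.abs(dense_out - sparse_out).mean())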

(3). QA LM training: the word files are ready, and we are trying to start the LM training. Refer to the doc Chao provided.

/nfs/asrhome/asr/lm/chs.lm/lm.qa
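
This is not the recipe in Chao's doc, only a toy Python illustration of what the QA LM training step estimates: n-gram counts over the prepared word files turned into smoothed conditional probabilities (real training would use an LM toolkit with proper discounting). The two example sentences are made up; the real input would be read line by line from the word files under the path above.

from collections import Counter

def bigram_lm(sentences):
    """Add-one-smoothed bigram probabilities from whitespace-tokenised lines."""
    unigrams, bigrams = Counter(), Counter()
    for line in sentences:
        words = ["<s>"] + line.split() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words[:-1], words[1:]))
    vocab = len(unigrams)
    def prob(w1, w2):
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
    return prob

prob = bigram_lm(["what is the weather today", "what is the time"])
print(prob("what", "is"))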