Difference between revisions of "2013-04-26"
From cslt Wiki
Revision as of 05:47, 26 April 2013 (Fri)
Data sharing
- AM/lexicon/LM are shared.
- LM count files are still being transferred.
DNN progress
400 hour DNN training
Test Set | Tencent Baseline | bMMI | fMMI | BN (with fMMI) | Hybrid
---|---|---|---|---|---
1900 | 8.4 | 7.65 | 7.35 | 6.57 | 7.27
2044 | 22.4 | 24.44 | 24.03 | 21.77 | 20.24
online1 | 35.6 | 34.66 | 34.33 | 31.44 | 30.53
online2 | 29.6 | 27.23 | 26.80 | 24.10 | 23.89
map | 24.5 | 27.54 | 27.69 | 23.79 | 22.46
notepad | 16 | 19.81 | 21.75 | 15.81 | 12.74
general | 36 | 38.52 | 38.90 | 33.61 | 31.55
speedup | 26.8 | 27.88 | 26.81 | 22.82 | 22.00
- The Tencent baseline uses 700h online data + 700h 863 data, HLDA+MPE, and an 88k lexicon.
- Our results use a 400-hour AM and an 88k LM, trained with ML+bMMI.
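For reference, the bMMI column above refers to the boosted MMI criterion (Povey et al., 2008), which discounts competing hypotheses in proportion to their phone accuracy against the reference:

```latex
F_{\mathrm{bMMI}}(\lambda) = \sum_{u} \log
  \frac{p_\lambda(O_u \mid s_u)^{\kappa} \, P(s_u)}
       {\sum_{s} p_\lambda(O_u \mid s)^{\kappa} \, P(s) \, e^{-b \, A(s,\, s_u)}}
```

Here O_u is the acoustics of utterance u, s_u its reference transcription, kappa the acoustic scale, b the boosting factor, and A(s, s_u) the phone accuracy of hypothesis s against the reference; b = 0 recovers plain MMI.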
Tencent test result
- AM: 70h training data (2 days, 15 machines, 10 threads)
- LM: 88k LM
- Test case: general
Feature | GMM-bMMI | DNN | DNN-MMI
---|---|---|---
PLP(-5,+5) | 38.4 | 26.5 | 23.8
PLP+LDA+MLLT(-5,+5) | 38.4 | 28.7 |
GPU & CPU merge
- Investigated the possibility of merging the GPU and CPU code. The GPU computing code has been merged into the CPU codebase.
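A minimal sketch of one common way to merge the two paths: a single public function with the backend selected at compile time. The `HAVE_CUDA` macro and the function name are illustrative assumptions, not this project's actual code.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Single entry point for callers; the backend is chosen at compile time.
// When HAVE_CUDA is defined, a GPU implementation would be compiled in;
// otherwise the plain CPU loop below is used, so callers see one API.
float VecDot(const std::vector<float> &a, const std::vector<float> &b) {
#ifdef HAVE_CUDA
  // GPU path: copy a and b to the device, launch a reduction kernel,
  // copy the scalar back. Omitted in this CPU-only sketch, so the code
  // falls through to the CPU loop either way here.
#endif
  float sum = 0.0f;
  for (std::size_t i = 0; i < a.size(); ++i)
    sum += a[i] * b[i];
  return sum;
}
```

The point of the pattern is that training and decoding code call one function regardless of hardware, which is what makes a GPU/CPU merge tractable.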
L-1 sparse initial training
- Started investigating.
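One standard way to realize L-1 sparsity during training is a proximal (soft-thresholding) step applied to the weights after each gradient update. The sketch below shows the general technique under that assumption; it is not this project's recipe.

```cpp
#include <cassert>
#include <vector>

// Soft-thresholding: the proximal operator of the L1 penalty.
// After each SGD step, every weight is shrunk toward zero by
// thresh = learning_rate * l1_weight, and any weight inside the
// threshold is set exactly to zero, yielding sparse weight matrices.
void SoftThreshold(std::vector<float> *w, float thresh) {
  for (float &x : *w) {
    if (x > thresh)
      x -= thresh;       // shrink positive weights toward zero
    else if (x < -thresh)
      x += thresh;       // shrink negative weights toward zero
    else
      x = 0.0f;          // small weights become exactly zero
  }
}
```

Unlike naively adding the (non-differentiable) L1 term to the gradient, this step produces exact zeros, which is what gives the sparse initialization its value.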
Kaldi/HTK merge
- HTK2Kaldi: on hold.
- Kaldi2HTK: implementation done; performance improved.
Embedded progress
- PocketSphinx migration done, but decoding is very slow.
- QA LM training, done.