2013-10-18
From cslt Wiki
Latest revision as of 09:33, 18 October 2013 (Fri)
Data sharing
- LM count files have still not been delivered!
DNN progress
Sparse DNN
- Optimal Brain Damage (OBD): the initial test shows worse results in weight-cutting experiments than simple magnitude-based weight cutting.
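For context, the two pruning criteria being compared can be sketched as follows. Magnitude-based cutting removes the smallest-|w| weights; OBD instead ranks each weight by the saliency s_i = ½·h_ii·w_i², where h_ii is the diagonal of the Hessian of the loss. This is an illustrative numpy sketch, not the experiment's actual implementation; the function names and the assumption that a diagonal Hessian estimate is available are hypothetical.

```python
import numpy as np

def magnitude_prune(w, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping keep_ratio of them."""
    flat = np.abs(w).ravel()
    k = int(len(flat) * keep_ratio)
    # Threshold below which weights are cut (k-th largest magnitude)
    thresh = np.partition(flat, len(flat) - k)[len(flat) - k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def obd_prune(w, hessian_diag, keep_ratio):
    """OBD: rank weights by saliency 0.5 * h_ii * w_i^2, cut the least salient."""
    saliency = 0.5 * hessian_diag * w ** 2
    flat = saliency.ravel()
    k = int(len(flat) * keep_ratio)
    thresh = np.partition(flat, len(flat) - k)[len(flat) - k]
    return np.where(saliency >= thresh, w, 0.0)
```

The two criteria can disagree: a large weight with a nearly flat loss direction (small h_ii) is kept by magnitude pruning but cut by OBD, and vice versa.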
Tencent exps
N/A
Noisy training
1. With the clean 863 test set, adding car & white noise at various levels to the training data obtained significant performance improvement.
- figure: Noise-training-white-noise-test.png (car noise test)
- figure: Noise-training-car-noise-test.png (white noise test)
2. The test with both car & white noise benefits from the noisy training.
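The standard way to add noise "at various levels" is to scale the noise so the mixture hits a target signal-to-noise ratio. A minimal sketch, assuming waveforms as numpy arrays; the helper name and the tiling of short noise clips are my assumptions, not the recipe actually used in these experiments.

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Mix a noise signal into speech at a target SNR in dB (hypothetical helper)."""
    # Tile/trim the noise clip to match the speech length
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise so that 10*log10(p_speech / p_scaled_noise) == snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```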
Continuous LM
- The lattice re-scoring toolkit is ready; however, it is very slow on large lattices.
- Now reviewing the code to improve efficiency.
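To illustrate the basic operation, lattice re-scoring replaces the LM cost on each arc and searches for the new best path. This toy sketch (my own illustration, not the toolkit's code) treats the lattice as a DAG and ignores LM history states; a real continuous-LM rescorer must expand states per history, which multiplies the lattice size and is a typical source of slowness on large lattices.

```python
from collections import defaultdict

def rescore_lattice(arcs, lm_score, start, end, lm_weight=1.0):
    """Re-score arcs with a new LM and return (cost, words) of the best path.

    arcs: list of (src, dst, word, acoustic_cost) tuples forming a DAG.
    lm_score: maps a word to its new (negated log-prob) LM cost.
    """
    adj = defaultdict(list)
    for src, dst, word, ac in arcs:
        adj[src].append((dst, word, ac))
    # Topological order via depth-first search
    order, seen = [], set()
    def dfs(u):
        if u in seen:
            return
        seen.add(u)
        for v, _, _ in adj[u]:
            dfs(v)
        order.append(u)
    dfs(start)
    order.reverse()
    # Dynamic programming over nodes in topological order
    best = {start: (0.0, [])}
    for u in order:
        if u not in best:
            continue
        cost_u, path_u = best[u]
        for v, word, ac in adj[u]:
            cost = cost_u + ac + lm_weight * lm_score(word)
            if v not in best or cost < best[v][0]:
                best[v] = (cost, path_u + [word])
    return best[end]
```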
QA LM
Just started. Jobs to do:
- use the QA-oriented word segmentation system
- train the QA LM with the QA data
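The notes do not describe the QA-oriented segmenter, so as background only, a common Chinese word-segmentation baseline is forward maximum matching: at each position, take the longest dictionary word that matches. A minimal sketch with a hypothetical vocabulary:

```python
def fmm_segment(text, vocab, max_len=4):
    """Forward maximum matching segmentation (baseline sketch, not the actual system).

    At each position, greedily take the longest vocab entry (up to max_len
    characters); fall back to a single character when nothing matches.
    """
    out, i = [], 0
    while i < len(text):
        for j in range(min(max_len, len(text) - i), 0, -1):
            cand = text[i : i + j]
            if j == 1 or cand in vocab:
                out.append(cand)
                i += j
                break
    return out
```

A QA-oriented segmenter would mainly differ in its vocabulary, e.g. keeping question words and domain entities as single tokens.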