2013-10-18

Latest revision as of 09:33, 18 October 2013

Data sharing

  • LM count files still undelivered!

DNN progress

Sparse DNN

  • Optimal Brain Damage (OBD). The initial test shows that OBD-based weight cutting gives worse results than simple weight-magnitude-based cutting (see the pruning sketch below).
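
For reference, a minimal sketch of the two cutting criteria being compared, in Python with NumPy; it assumes a weight matrix W and a pre-computed diagonal Hessian approximation H_diag of the same shape, and the function names are illustrative rather than the actual experiment code.

  import numpy as np

  def magnitude_prune(W, frac):
      """Simple weight-based cutting: zero the fraction of weights
      with the smallest absolute values."""
      k = int(frac * W.size)
      thresh = np.sort(np.abs(W), axis=None)[k]
      return np.where(np.abs(W) < thresh, 0.0, W)

  def obd_prune(W, H_diag, frac):
      """Optimal Brain Damage: rank weights by the saliency
      s_i = 0.5 * h_ii * w_i^2 (diagonal-Hessian approximation)
      and zero the least salient fraction."""
      saliency = 0.5 * H_diag * W ** 2
      k = int(frac * W.size)
      thresh = np.sort(saliency, axis=None)[k]
      return np.where(saliency < thresh, 0.0, W)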

Tencent exps

N/A


Noisy training

1. On the 863 clean test set, adding car and white noise to the training data at various levels gives a significant performance improvement (see the noise-mixing sketch below).

  • Figure: Noise-training-white-noise-test.png
  • Figure: Noise-training-car-noise-test.png

2. The test condition with both car and white noise also benefits from the noisy training.
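
The noise-adding step itself can be summarised with a minimal sketch, assuming clean speech and noise waveforms as NumPy arrays at the same sampling rate, mixed at a target SNR; the function name and the SNR list are illustrative, not the actual scripts used in these experiments.

  import numpy as np

  def add_noise(speech, noise, snr_db):
      """Mix a noise signal into clean speech at the requested SNR (dB)."""
      # Tile or truncate the noise to match the speech length.
      if len(noise) < len(speech):
          noise = np.tile(noise, int(np.ceil(len(speech) / float(len(noise)))))
      noise = noise[:len(speech)]
      # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db.
      p_speech = np.mean(speech ** 2)
      p_noise = np.mean(noise ** 2)
      scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
      return speech + scale * noise

  # Noisy training: corrupt each training utterance at a randomly picked level.
  # snr_levels = [20, 15, 10, 5]                      # dB, illustrative
  # noisy = add_noise(clean_utt, car_noise, np.random.choice(snr_levels))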

Continuous LM

  • Lattice re-scoring toolkit is ready. However, the toolkit is very slow on large lattices (see the re-scoring sketch below).
  • Now checking the code to improve efficiency.
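
For context, a minimal sketch of the re-scoring idea on an N-best list (lattice arcs are handled the same way, which is where the cost grows with lattice size); the function names and the interpolation weight are illustrative and not part of the actual toolkit.

  def rescore_nbest(nbest, cont_lm_score, lam=0.5):
      """Re-rank N-best hypotheses by interpolating the original n-gram LM
      score with a continuous-LM score; acoustic scores stay unchanged.

      nbest:         list of (words, acoustic_logp, ngram_lm_logp) tuples
      cont_lm_score: function returning the continuous-LM log-probability
                     of a word sequence
      """
      rescored = []
      for words, am_logp, ngram_logp in nbest:
          lm_logp = lam * ngram_logp + (1.0 - lam) * cont_lm_score(words)
          rescored.append((am_logp + lm_logp, words))
      rescored.sort(key=lambda x: x[0], reverse=True)
      return [words for _, words in rescored]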

QA LM

Just started. Jobs to do:

  1. Use the QA-oriented word segmentation system.
  2. Train the Q LM with the QA data (see the counting sketch below).
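
A minimal sketch of how the two jobs fit together, assuming the QA questions arrive one per line and that the segmenter sits behind a placeholder function; the file name, the segmenter stub, and the trigram order are assumptions for illustration only.

  from collections import Counter

  def segment(line):
      """Placeholder for the QA-oriented word segmentation system
      (assumed to return the words of one question)."""
      return line.strip().split()

  def count_ngrams(lines, order=3):
      """Collect n-gram counts from segmented QA questions; the counts then
      feed a standard back-off LM training step for the Q LM."""
      counts = Counter()
      for line in lines:
          words = ["<s>"] + segment(line) + ["</s>"]
          for n in range(1, order + 1):
              for i in range(len(words) - n + 1):
                  counts[tuple(words[i:i + n])] += 1
      return counts

  # with open("qa_questions.txt", encoding="utf-8") as f:   # illustrative path
  #     q_counts = count_ngrams(f)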