2013-09-27 (from cslt Wiki)

Version as of 03:00, 27 September 2013 (Friday)

Data sharing

  • LM count files still undelivered!

DNN progress

Sparse DNN

  • Optimal Brain Damage-based sparsity is ongoing; we are preparing the algorithm.
  • An interesting investigation is to drop out 50% of the weights after each iteration and then retrain without stickiness, i.e. the pruned weights are re-selected each iteration rather than kept fixed.

Report: http://192.168.0.50:3000/series/?action=view&series=91,91.0,91.1,91.2,91.3,91.4,91.5,91.6,91.7,91.8,91.9
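The non-sticky drop-out idea above can be sketched in numpy as follows (a toy loop; `train_step` is a hypothetical stand-in for one real training iteration, not our actual trainer):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # toy weight matrix

def train_step(W):
    # Hypothetical stand-in for one real training iteration:
    # any update that can move previously pruned weights away from zero
    return W + 0.01 * rng.standard_normal(W.shape)

for it in range(3):
    W = train_step(W)
    # Non-sticky pruning: a fresh random 50% mask each iteration,
    # so a weight pruned earlier may be retrained and kept later
    mask = rng.random(W.shape) < 0.5
    W = W * mask

# After the last mask, roughly half of the 64 weights are zero
print(int(np.count_nonzero(W)))
```

The point of "non-sticky" here is only that the mask is resampled every iteration; a sticky variant would fix the mask after the first pruning.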

FBank features

1000-hour testing: http://192.168.0.50:3000/series/?action=view&series=97,97.0,97.1

Tencent exps

N/A


Noisy training

Noise segments are sampled randomly for each utterance: a Dirichlet distribution samples the mixing weights over the noise types, and a Gaussian samples the SNR.
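A minimal numpy sketch of this sampling scheme (the concrete numbers here, such as the 15 dB mean SNR, the Dirichlet parameters, and the toy noise banks, are illustrative assumptions, not the experimental settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_utterance(speech, noise_bank, alpha,
                           snr_mean_db=15.0, snr_std_db=5.0):
    # Dirichlet sample of the distribution over noise types,
    # then draw one type from it
    type_probs = rng.dirichlet(alpha)
    k = rng.choice(len(noise_bank), p=type_probs)
    noise = noise_bank[k]

    # Random noise segment matching the utterance length
    start = rng.integers(0, len(noise) - len(speech) + 1)
    seg = noise[start:start + len(speech)]

    # Gaussian sample of the SNR (in dB)
    snr_db = rng.normal(snr_mean_db, snr_std_db)

    # Scale the segment so the speech/noise power ratio hits the target SNR
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(seg ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * seg

# Toy data: a white-noise bank and a low-frequency "car-like" bank
speech = rng.standard_normal(16000)
noise_bank = [rng.standard_normal(48000),
              np.cumsum(rng.standard_normal(48000)) * 0.01]
noisy = sample_noisy_utterance(speech, noise_bank, alpha=[1.0, 1.0])
print(noisy.shape)
```

Sampling the SNR from a Gaussian (rather than fixing it) is what gives the high-variance noise exposure discussed in the conclusions below.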

White noise and car noise each take a 1/3 weight in the base distribution. The performance report:

http://192.168.0.50:3000/series/?action=view&series=100,99,99.0,99.1,99.2,99.3,96,96.4,96.5,96.6,96.7

The conclusions are:

1. By sampling noises, most noise patterns can be learned efficiently, which improves performance on noisy test data.
2. By sampling noises with high variance, performance on clean speech is largely retained.

Continuous LM

1. SogouQ n-gram building: 500M of text data, 110k-word vocabulary. Two tests:

(1) On the Tencent online1 and online2 transcriptions: ppl online1 = 1651, online2 = 1512.
(2) On the 70k SogouQ test set: ppl 33.
 This means the SogouQ text is significantly different from the Tencent online1 and online2 sets, due to the very different domains.
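For reference, the ppl (perplexity) figures above are the exponential of the average negative log-probability the model assigns to the test tokens; a minimal computation:

```python
import math

def perplexity(token_probs):
    """token_probs: the model's probability for each test token, in order."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model that assigns 1/4 to every token has perplexity 4
print(round(perplexity([0.25] * 8), 6))  # prints 4.0
```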

2. NN LM

  Setup: 11k-word input vocabulary, 192 hidden units, trained on 500M of QA text; tested on the online2 transcription.
 (1)  Words ranked 1-1024 predicted by the NN LM, the rest by the 4-gram. n-gram baseline ppl: 402.37; NN+n-gram: 122.54.
 (2)  Words ranked 1-2048 predicted by the NN LM, the rest by the 4-gram. n-gram baseline ppl: 402.37; NN+n-gram: 127.59.
 (3)  Words ranked 1024-2048 predicted by the NN LM, the rest by the 4-gram. n-gram baseline ppl: 402.37; NN+n-gram: 118.92.
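The "take some words from the NN LM, the rest from the 4-gram" scheme above is a shortlist combination. A toy sketch, with hypothetical stand-in models: the NN LM (normalized over the shortlist) redistributes exactly the probability mass the n-gram assigns to the shortlist words, so the combined scores still form a proper distribution.

```python
def combined_prob(word, p_nn, p_ngram, shortlist):
    # Probability mass the n-gram gives to the shortlist in this context
    shortlist_mass = sum(p_ngram[w] for w in shortlist)
    if word in shortlist:
        # The NN LM, normalized over the shortlist, carries that mass
        return p_nn[word] * shortlist_mass
    return p_ngram[word]

# Toy 5-word vocabulary: uniform n-gram, NN LM over a 2-word shortlist
vocab = ["a", "b", "c", "d", "e"]
p_ngram = {w: 0.2 for w in vocab}
shortlist = {"a", "b"}
p_nn = {"a": 0.7, "b": 0.3}  # already normalized over the shortlist

total = sum(combined_prob(w, p_nn, p_ngram, shortlist) for w in vocab)
print(total)  # sums to 1 (up to float rounding)
```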


Conclusion: the NN LM is far better than the n-gram, owing to its smoothing capability.