2014-10-13


Speech Processing

AM development

Contour

  • NaN problem: tried to reproduce the NaN error on several grid nodes
  ------------------------------------------------------------
   grid node  |  Reproducible  |  Notes
  ------------------------------------------------------------
   grid-11    |     yes        |
  ------------------------------------------------------------
   grid-12    |     no         | "NaN" appears at a different position
  ------------------------------------------------------------
   grid-14    |     yes        |
  ------------------------------------------------------------

Sparse DNN

  • Experiments show a performance improvement when the network is pruned slightly (a pruning sketch follows this list).
  • Suggest using TIMIT / AURORA 4 for training.
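
A minimal sketch of the kind of slight magnitude-based pruning referenced above; the sparsity level and layer shape are illustrative assumptions, not the actual recipe:

  import numpy as np

  def prune_by_magnitude(weights, sparsity=0.1):
      """Zero out the smallest-magnitude fraction of weights (illustrative)."""
      flat = np.abs(weights).ravel()
      k = int(sparsity * flat.size)          # number of weights to remove
      if k == 0:
          return weights.copy()
      threshold = np.partition(flat, k)[k]   # k-th smallest magnitude
      pruned = weights.copy()
      pruned[np.abs(pruned) < threshold] = 0.0
      return pruned

  # Example: prune 10% of a hypothetical hidden-layer weight matrix.
  W = np.random.randn(1024, 1024)
  W_sparse = prune_by_magnitude(W, sparsity=0.1)
  print((W_sparse == 0.0).mean())            # fraction of zeroed weights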

RNN AM

  • Initial test on WSJ ran out of memory.
  • Switching to AURORA 4 short sentences with a smaller number of targets.

Noise training

  • First draft of the noisy-training journal paper written.
  • Paper correction (Yinshi, Liuchao, Lin Yiye) is ongoing.

Drop out & Rectification & convolutive network

  • Dropout (a forward-pass sketch follows this list)
    • Dataset: WSJ; test set: eval92. WER (%):
       std  | dropout 0.4 | dropout 0.5 | dropout 0.7 | dropout 0.8
      --------------------------------------------------------------
       4.5  |    5.39     |    4.80     |    4.36     |     -
    • Test on the noisy AURORA 4 dataset.
    • Continue dropout on a normally trained XEnt NNET, e.g. WSJ.
    • Draft the dropout-DNN weight distribution analysis.
  • Rectification
    • Still hits the NaN error; needs debugging.
  • MaxOut
  • Convolutive network
    • Test more configurations.
    • Yiye will work on CNN.
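
A minimal sketch of the (inverted) dropout forward pass being tested above. Whether the table's numbers denote drop or keep probabilities is not stated; the sketch takes a keep probability, and the layer sizes are hypothetical:

  import numpy as np

  def dropout_forward(activations, keep_prob, train=True):
      """Inverted dropout: scale at training time so test-time code is unchanged."""
      if not train:
          return activations
      mask = (np.random.rand(*activations.shape) < keep_prob) / keep_prob
      return activations * mask

  # Hypothetical hidden-layer activations for one minibatch.
  h = np.random.randn(256, 1024)
  h_drop = dropout_forward(h, keep_prob=0.5)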

Denoising & Farfield ASR

  • ICASSP paper submitted.

VAD

  • Added more silence tags "#" to the pure-silence utterance transcripts (training set).
  • The xEntropy model is training.
  • Sum all silence pdfs' posteriors as the silence posterior probability (sketch below).
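
A minimal sketch of the silence-posterior summation above, assuming a per-frame posterior matrix from the NN and a known list of silence pdf indices (both hypothetical here):

  import numpy as np

  def silence_posterior(frame_posteriors, sil_pdf_ids):
      """Sum the posteriors of all silence pdfs to get P(silence) per frame.

      frame_posteriors: (num_frames, num_pdfs) NN output posteriors.
      sil_pdf_ids: indices of the pdfs tied to silence phones.
      """
      return frame_posteriors[:, sil_pdf_ids].sum(axis=1)

  # Hypothetical example: 100 frames, 2000 pdfs, pdfs 0-4 model silence.
  post = np.random.dirichlet(np.ones(2000), size=100)
  p_sil = silence_posterior(post, sil_pdf_ids=[0, 1, 2, 3, 4])
  vad_decision = p_sil > 0.5   # declare silence when the summed posterior dominates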

Speech rate training

  • The ROS model seems superior to the normal one on faster speech.
  • Need to check the distribution of ROS on WSJ.
  • Suggest extracting speech data of different ROS to construct a new test set.
  • Suggest using Tencent training data.
  • Suggest removing silence when computing ROS (sketch below).
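
A minimal sketch of the suggested ROS computation with silence removed, assuming a frame-level alignment where silence frames are already marked (the data layout and frame shift are assumptions):

  def rate_of_speech(num_phones, frame_is_sil, frame_shift=0.01):
      """ROS = phones per second of speech, excluding silence frames."""
      speech_seconds = sum(1 for s in frame_is_sil if not s) * frame_shift
      if speech_seconds == 0.0:
          return 0.0
      return num_phones / speech_seconds

  # Hypothetical utterance: 42 phones, 600 frames of which 150 are silence.
  frames = [True] * 75 + [False] * 450 + [True] * 75
  print(rate_of_speech(42, frames))   # ~9.3 phones/s over speech only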

low resource language AM training

  • Use the Chinese-trained NN as the initial NN and replace the last layer.
  • Vary the number of reused Chinese-trained DNN hidden layers (a layer-reuse sketch follows the tables below).
    • feature_transform = 6000h_transform + 6000h_N*hidden-layers
      nnet.init = random (4-N)*hidden-layers + output-layer

 | N / learn_rate | 0.008         | 0.001 | 0.0001 |
 |   baseline     | 17.00(14*2h)  |       |        |
 |       4        | 17.75(9*0.6h) | 18.64 |        |
 |       3        | 16.85         |       |        |
 |       2        | 16.69         |       |        |
 |       1        | 16.87         |       |        |
 |       0        | 16.88         |       |        |  
    • feature_transform = uyghur_transform + 6000h_N*hidden-layers
      nnet.init = random (4-N)*hidden-layers + output-layer

  • Note: this reproduces Yinshi's experiment.

 | N / learn_rate | 0.008 | 0.001 | 0.0001 |
 |   baseline     | 17.00 |       |        |
 |       4        | 28.23 | 30.72 | 37.32  |
 |       3        | 22.40 |       |        |
 |       2        | 19.76 |       |        |
 |       1        | 17.41 |       |        |
 |       0        |       |       |        |

    • feature_transform = 6000h_transform + 6000h_N*hidden-layers
      nnet.init = uyghur (4-N)*hidden-layers + output-layer

 | N / learn_rate | 0.008 | 0.001 | 0.0001 |
 |   baseline     | 17.00 |       |        |
 |       4        | 17.80 | 18.55 | 21.06  |
 |       3        | 16.89 | 17.64 |        |
 |       2        |       |       |        |
 |       1        |       |       |        |
 |       0        |       |       |        |
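
A minimal sketch of the layer-reuse scheme behind these tables: keep the first N hidden layers of the source (Chinese 6000h) DNN, then randomly initialize the remaining (4-N) hidden layers and the output layer. All names, dimensions and the initialization scale are illustrative assumptions, not the actual nnet configs:

  import numpy as np

  def init_target_nnet(source_layers, N, hidden_dim=1200, out_dim=3000):
      """Build target-language layers: first N copied from source, rest random.

      source_layers: list of 4 (weight, bias) tuples from the source DNN.
      N: number of source hidden layers to reuse (0..4), as in the tables.
      """
      layers = [source_layers[i] for i in range(N)]   # reused source layers
      for _ in range(4 - N):                          # randomly initialized hidden layers
          layers.append((np.random.randn(hidden_dim, hidden_dim) * 0.01,
                         np.zeros(hidden_dim)))
      # The output layer is always re-initialized for the new language.
      layers.append((np.random.randn(out_dim, hidden_dim) * 0.01,
                     np.zeros(out_dim)))
      return layers

  # Hypothetical source DNN with 4 square hidden layers.
  src = [(np.random.randn(1200, 1200) * 0.01, np.zeros(1200)) for _ in range(4)]
  target = init_target_nnet(src, N=3)   # reuse 3 source layers (row N=3)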

Scoring

  • Global scoring done.
  • Pitch & rhythm done; testing needed.
  • Harmonics program done; experiments pending.

Confidence

  • Reproduce the experiments on the Fisher dataset.
  • Use the Fisher DNN model to decode the all-wsj dataset.


Speaker ID

  • Preparing the GMM-based server.

Emotion detection

  • Sinovoice is implementing the server.


Text Processing

LM development

Domain specific LM

  • ngram generation is ongoing
  • memory check and baidu_hi are done

NUM tag LM

  • The maxi work has been released.
  • Yuanbin continues the tag LM work.
  • Add NER to the tag LM (a NUM-tagging sketch follows this list).
  • Boost specific words like "wifi" when the TAG model does not work for a particular word.
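
A minimal sketch of the NUM-tagging step implied above: numeric tokens in LM training text are replaced by a class tag before n-gram counting. The tag symbol and the regex are assumptions:

  import re

  NUM_RE = re.compile(r"^\d+(\.\d+)?$")   # plain integers and decimals

  def tag_numbers(sentence):
      """Replace numeric tokens with a NUM class tag for class-based LM training."""
      return " ".join("<NUM>" if NUM_RE.match(tok) else tok
                      for tok in sentence.split())

  print(tag_numbers("call me at 10086 after 5 pm"))
  # -> call me at <NUM> after <NUM> pm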


Word2Vector

W2V based doc classification

  • Initial results with the variational Bayesian GMM obtained; performance is not as good as the conventional GMM (a doc-classification sketch follows this list).
  • Non-linear inter-language transform (English-Spanish-Czech): word-vector model training done; the transform model is under investigation.
  • SSA-based local linear mapping is still running.
  • Number of k-means classes changed to 2.
  • Knowledge vector work started
    • format the data
  • Character-to-word conversion
    • prepare the task: word similarity
    • prepare the dictionary
  • Google word vector training
    • improve the sampling method
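
A rough sketch of the comparison in the first item above, with scikit-learn mixtures standing in for the actual models; the document representation (mean of word vectors) is a common choice, and the vocabulary and documents are made up:

  import numpy as np
  from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

  def doc_vector(tokens, w2v, dim=100):
      """Represent a document as the mean of its in-vocabulary word vectors."""
      vecs = [w2v[t] for t in tokens if t in w2v]
      return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

  rng = np.random.default_rng(0)
  w2v = {w: rng.normal(size=100) for w in ["market", "stock", "goal", "match"]}
  docs = [["market", "stock"], ["goal", "match"],
          ["stock", "market"], ["match", "goal"]]
  X = np.stack([doc_vector(d, w2v) for d in docs])

  # Conventional GMM vs. variational Bayesian GMM, as compared above.
  gmm = GaussianMixture(n_components=2, covariance_type="diag").fit(X)
  vb = BayesianGaussianMixture(n_components=2, covariance_type="diag").fit(X)
  print(gmm.predict(X), vb.predict(X))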

RNN LM

  • RNN
  • LSTM + RNN
    • install the tool and prepare the WSJ data
    • prepare the baseline

Translation

  • v3.0 demo released; still slow.
  • Re-segment words using the new dictionary.
  • Check the new data.

QA

  • Search method:
    • Add VSM and BM25 to improve search, along with a strategy for selecting the answer (a BM25 sketch follows this list).
    • Segment words at minimum granularity for the Lucene index and the bag-of-words method.
  • The new intern will install SEMPRE.
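
A minimal sketch of BM25 scoring as mentioned above; k1 and b are the usual defaults, and the toy index is made up:

  import math
  from collections import Counter

  def bm25_score(query, doc, doc_freq, num_docs, avg_len, k1=1.2, b=0.75):
      """Score one tokenized document against a query with the standard BM25 formula."""
      tf = Counter(doc)
      score = 0.0
      for term in query:
          if term not in tf:
              continue
          idf = math.log(1 + (num_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
          norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
          score += idf * norm
      return score

  # Toy index of 3 tokenized documents, query "weather beijing".
  docs = [["beijing", "weather", "today"], ["weather", "forecast"], ["beijing", "food"]]
  df = Counter(t for d in docs for t in set(d))
  avg = sum(len(d) for d in docs) / len(docs)
  for d in docs:
      print(bm25_score(["weather", "beijing"], d, df, len(docs), avg))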