ASR:2015-05-18

Latest revision as of 00:59, 25 May 2015 (Monday)

Speech Processing

AM development

Environment

  • grid-15 often does not work
  • grid-14 often does not work

RNN AM

  • details at http://liuc.cslt.org/pages/rnnam.html
  • Test the monophone RNN using dark knowledge --Chao Liu
  • run on WSJ with MPE --Chao Liu
  • run the bi-directional RNN --Chao Liu
  • train RNN with dark-knowledge transfer on AURORA4 --Zhiyuan

Mic-Array

  • hold
  • Change the prediction target from fbank to spectrum features
  • investigate the alpha parameter in the time domain and the frequency domain
  • ALPHA >= 0, using data generated by the reverb toolkit
  • consider theta
  • compute EER with Kaldi (see the sketch below)
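
Kaldi has its own scoring tools; for orientation, an EER can also be computed directly from a list of trial scores and target/impostor labels. A minimal NumPy sketch (toy data, not the actual trial lists):

    import numpy as np

    def compute_eer(scores, labels):
        """Equal Error Rate: the operating point where the false-alarm rate
        equals the miss rate. scores: higher = more target-like;
        labels: 1 for target trials, 0 for impostor trials."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        order = np.argsort(scores)
        sorted_labels = labels[order]
        n_target = labels.sum()
        n_impostor = len(labels) - n_target
        # Threshold placed just above each sorted score:
        # miss = targets rejected so far, FA = impostors still accepted.
        misses = np.cumsum(sorted_labels) / n_target
        false_alarms = 1.0 - np.cumsum(1 - sorted_labels) / n_impostor
        idx = np.argmin(np.abs(misses - false_alarms))
        return 0.5 * (misses[idx] + false_alarms[idx])

    if __name__ == "__main__":
        # Toy trials: target scores tend to be higher than impostor scores.
        rng = np.random.default_rng(0)
        scores = np.concatenate([rng.normal(1.0, 1.0, 1000),
                                 rng.normal(-1.0, 1.0, 1000)])
        labels = np.concatenate([np.ones(1000, int), np.zeros(1000, int)])
        print("EER ~ %.3f" % compute_eer(scores, labels))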

RNN-DAE (RNN-based deep auto-encoder)

  • deliver to Mengyuan

Speaker ID

  • DNN-based SID --Yiye Lin (see the d-vector sketch below)
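
Assuming "DNN-based SID" follows the d-vector recipe (average a hidden-layer activation over the frames of an utterance, then score by cosine distance), a toy sketch with made-up weights and dimensions:

    import numpy as np

    def dvector(frame_features, weights, biases):
        """Toy d-vector: forward the frames through a small ReLU DNN
        (made-up weights) and average the last hidden layer over time."""
        h = frame_features                      # (num_frames, feat_dim)
        for W, b in zip(weights, biases):
            h = np.maximum(0.0, h @ W + b)
        return h.mean(axis=0)                   # utterance-level d-vector

    def cosine_score(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        dims = [40, 256, 256]                   # fbank dim -> two hidden layers (assumed)
        weights = [rng.normal(0, 0.1, (dims[i], dims[i + 1]))
                   for i in range(len(dims) - 1)]
        biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
        enroll = dvector(rng.normal(0, 1, (200, 40)), weights, biases)
        test = dvector(rng.normal(0, 1, (150, 40)), weights, biases)
        print("cosine score:", cosine_score(enroll, test))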

Ivector&Dvector based ASR

  • hold --Tian Lan
  • Cluster the speakers into speaker classes, then use the distance or the posterior probability as the metric (see the sketch below)
  • Directly use the dark-knowledge strategy for the i-vector training.
  • The smaller the i-vector dimension, the better the performance
  • Augmenting the hidden layer works better than augmenting the input layer
  • train on WSJ (test sets: dev93 + eval92)
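
One reading of the speaker-class item above: k-means over i-vectors, then score a test i-vector by its distance to the class centroids or by a softmax "posterior" over them. A sketch under that assumption (scikit-learn, toy data):

    import numpy as np
    from sklearn.cluster import KMeans

    def class_posteriors(ivector, centroids, temperature=1.0):
        """Softmax over negative distances to the speaker-class centroids,
        used as a soft 'posterior' over classes (an assumption, not the
        original recipe)."""
        d = np.linalg.norm(centroids - ivector, axis=1)
        logits = -d / temperature
        e = np.exp(logits - logits.max())
        return e / e.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        ivectors = rng.normal(0, 1, (500, 100))      # toy i-vectors, dim 100
        km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(ivectors)
        test = rng.normal(0, 1, 100)
        post = class_posteriors(test, km.cluster_centers_)
        print("nearest class:", int(post.argmax()),
              "posterior:", round(float(post.max()), 3))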

Dark knowledge

  • Ensemble using the 100h dataset to construct different structures -- Mengyuan
  • adaptation between English and Chinglish
  • Try to improve the Chinglish performance as much as possible
  • unsupervised training with WSJ contributes to the AURORA4 model --Xiangyu Zeng
  • test on a large database (AMIDA)
  • test hidden-layer knowledge transfer --Xuewei (see the distillation sketch below)
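
Dark knowledge here is Hinton-style knowledge distillation: the student is trained against the teacher's temperature-softened posteriors in addition to the hard labels. A NumPy sketch of the loss (temperature and weight are illustrative values, not the ones used in these experiments):

    import numpy as np

    def softmax(logits, T=1.0):
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
        """Weighted sum of (a) cross-entropy against the teacher's soft targets
        at temperature T and (b) normal cross-entropy against the hard labels."""
        soft_t = softmax(teacher_logits, T)
        soft_s = softmax(student_logits, T)
        ce_soft = -(soft_t * np.log(soft_s + 1e-12)).sum(axis=-1).mean()
        hard_s = softmax(student_logits)
        ce_hard = -np.log(hard_s[np.arange(len(hard_labels)), hard_labels] + 1e-12).mean()
        return alpha * (T * T) * ce_soft + (1 - alpha) * ce_hard

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        teacher = rng.normal(0, 3, (4, 10))    # 4 frames, 10 senones (toy sizes)
        student = rng.normal(0, 3, (4, 10))
        labels = rng.integers(0, 10, 4)
        print("loss:", round(float(distillation_loss(student, teacher, labels)), 4))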

bilingual recognition

  • hold

language vector

  • train the DNN with a language vector --Xuewei (see the sketch below)
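
Presumably the language vector is a per-language code appended to each acoustic frame, in the spirit of i-vector or speaker-code input augmentation; a toy sketch of the feature assembly (label set and dimensions assumed):

    import numpy as np

    LANGUAGES = ["english", "chinese", "chinglish"]   # assumed label set

    def add_language_vector(frames, language):
        """Append a one-hot language vector to every acoustic frame."""
        one_hot = np.zeros(len(LANGUAGES))
        one_hot[LANGUAGES.index(language)] = 1.0
        tiled = np.tile(one_hot, (frames.shape[0], 1))
        return np.hstack([frames, tiled])             # (num_frames, feat_dim + num_langs)

    if __name__ == "__main__":
        fbank = np.random.default_rng(4).normal(0, 1, (300, 40))
        augmented = add_language_vector(fbank, "chinglish")
        print(augmented.shape)                        # (300, 43)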

Text Processing

RNN LM

  • character-level RNN LM (hold)
  • LSTM + RNN
  • check how the lstm-rnnlm code initializes and updates the learning rate (hold; see the schedule sketch below)
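
For comparison while checking the lstm-rnnlm code: the classic rnnlm toolkit keeps the learning rate fixed until validation entropy stops improving, then halves it every epoch. A schematic of that policy (whether lstm-rnnlm does the same is exactly what needs checking):

    def rnnlm_lr_schedule(valid_entropy_per_epoch, lr0=0.1, min_improvement=1.003):
        """Schematic rnnlm-style schedule: keep lr while validation entropy
        improves by at least min_improvement (ratio), then halve every epoch."""
        lr = lr0
        halving = False
        schedule = []
        prev = None
        for ent in valid_entropy_per_epoch:
            schedule.append(lr)                 # lr used for this epoch
            if prev is not None and prev / ent < min_improvement:
                halving = True                  # improvement too small: start halving
            if halving:
                lr *= 0.5
            prev = ent
        return schedule

    if __name__ == "__main__":
        print(rnnlm_lr_schedule([9.1, 8.4, 8.0, 7.95, 7.93, 7.92]))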

W2V based document classification

  • write a technical report on document classification using CNN --Yiqiao
  • adapt the CNN to address the low-resource problem (see the forward-pass sketch below)
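
The CNN classifier presumably follows the usual convolution-over-word-embeddings plus max-over-time pooling design; a forward-pass sketch with invented dimensions:

    import numpy as np

    def cnn_doc_forward(embeddings, filters, W_out, b_out):
        """Toy CNN text classifier: 1D convolution over the word-embedding
        sequence, ReLU, max-over-time pooling, softmax output layer."""
        num_words, emb_dim = embeddings.shape
        num_filters, width, _ = filters.shape
        feats = np.empty(num_filters)
        for f in range(num_filters):
            acts = [np.sum(embeddings[i:i + width] * filters[f])
                    for i in range(num_words - width + 1)]
            feats[f] = max(0.0, max(acts))      # ReLU + max-over-time pooling
        logits = feats @ W_out + b_out
        e = np.exp(logits - logits.max())
        return e / e.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        doc = rng.normal(0, 1, (50, 100))           # 50 words, 100-dim w2v vectors
        filters = rng.normal(0, 0.1, (64, 3, 100))  # 64 trigram filters
        W_out, b_out = rng.normal(0, 0.1, (64, 4)), np.zeros(4)  # 4 classes (toy)
        print(cnn_doc_forward(doc, filters, W_out, b_out))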

Translation

  • Test the performance of the similar-pair method in bilingual recognition

Order representation

  • modify the objective function
  • sup-sampling method to handle low-frequency words (see the sampling sketch below)
  • Sort out the vectors and run the experiment on objective-function convergence
  • test on the classification and prediction tasks
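
If "sup-sampling" means drawing low-frequency words relatively more often during training (the mirror image of word2vec's sub-sampling of frequent words), the sampling distribution could be flattened like this (the 0.75 exponent is borrowed from word2vec, not from this work):

    import numpy as np

    def sampling_weights(word_counts, power=0.75):
        """Flatten the frequency distribution so low-frequency words are drawn
        relatively more often (any power < 1 does this)."""
        counts = np.asarray(list(word_counts.values()), dtype=float)
        probs = counts ** power
        return dict(zip(word_counts.keys(), probs / probs.sum()))

    if __name__ == "__main__":
        counts = {"the": 100000, "model": 5000, "hamming": 12}
        for w, p in sampling_weights(counts).items():
            print(w, round(p, 4))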

binary vector

  • Finish the Hamming-metric binary vector (see the sketch below).
  • Try to finish the binary vector.
  • Write the test report.
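
In the simplest reading, a Hamming-metric binary vector thresholds each dimension of a real-valued embedding and compares vectors by Hamming distance; a toy sketch:

    import numpy as np

    def binarize(vectors):
        """Binarize each dimension against its median so the bits are balanced."""
        return (vectors > np.median(vectors, axis=0)).astype(np.uint8)

    def hamming_distance(a, b):
        return int(np.count_nonzero(a != b))

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        emb = rng.normal(0, 1, (1000, 128))     # 1000 word vectors, 128 dims (toy)
        bits = binarize(emb)
        print("d(word0, word1) =", hamming_distance(bits[0], bits[1]))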

Stochastic ListNet

  • Finish writing the first draft of the EMNLP 2015 long paper (see the loss sketch below)
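
The sampling idea behind Stochastic ListNet, roughly: evaluate the ListNet top-1 loss on randomly drawn sub-lists instead of the full document list. A schematic sketch (sub-list size and sampling scheme are assumptions, not the paper's exact settings):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def listnet_top1_loss(scores, relevances):
        """ListNet top-1 loss: cross-entropy between the softmax of the true
        relevance labels and the softmax of the model scores."""
        p_true = softmax(relevances)
        p_model = softmax(scores)
        return float(-(p_true * np.log(p_model + 1e-12)).sum())

    def stochastic_listnet_loss(scores, relevances, sample_size=5, num_samples=10, rng=None):
        """Average the top-1 loss over randomly sampled sub-lists instead of
        the full list (a sketch of the 'sampling method')."""
        if rng is None:
            rng = np.random.default_rng(0)
        losses = []
        for _ in range(num_samples):
            idx = rng.choice(len(scores), size=sample_size, replace=False)
            losses.append(listnet_top1_loss(scores[idx], relevances[idx]))
        return float(np.mean(losses))

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        scores = rng.normal(0, 1, 50)               # model scores for 50 documents
        relevances = rng.integers(0, 3, 50).astype(float)
        print(stochastic_listnet_loss(scores, relevances, rng=rng))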

relation classifier

  • Tune the best model.
  • Train on the new word embeddings.
  • Do some analysis (context length; track the pooling, see the sketch below).
  • Finish the draft.
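
"Track the pooling" can be read as recording which context position wins the max-pooling for each filter, which shows what part of the sentence the relation classifier is attending to; a sketch of that bookkeeping:

    import numpy as np

    def max_pool_with_positions(conv_activations):
        """conv_activations: (num_filters, num_positions) convolution outputs.
        Returns the pooled features plus, for each filter, the position in the
        sentence that produced the maximum (for analysis)."""
        positions = conv_activations.argmax(axis=1)
        pooled = conv_activations.max(axis=1)
        return pooled, positions

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        acts = rng.normal(0, 1, (32, 20))     # 32 filters over 20 context positions (toy)
        pooled, pos = max_pool_with_positions(acts)
        print("positions that fired:", np.bincount(pos, minlength=20))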

plan to do

  • combine LDA with a neural network (see the sketch below)
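
One straightforward way to combine LDA with a neural network is to feed the per-document topic posterior alongside the other input features; a minimal sketch of that concatenation (dimensions are toy values, and this may not be the combination intended above):

    import numpy as np

    def combine_lda_features(doc_features, topic_posterior):
        """Concatenate a document's LDA topic posterior with its other features
        so a downstream neural network sees both."""
        return np.concatenate([doc_features, topic_posterior])

    if __name__ == "__main__":
        w2v_doc = np.random.default_rng(9).normal(0, 1, 100)  # e.g. averaged word vectors
        topics = np.array([0.7, 0.1, 0.05, 0.15])             # posterior over 4 topics (toy)
        print(combine_lda_features(w2v_doc, topics).shape)    # (104,)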