ASR:2015-06-08

Latest revision as of 07:59, 10 June 2015

Speech Processing

AM development

Environment

RNN AM

  • morpheme RNN --zhiyuan

Mic-Array

  • hold
  • Change the prediction from fbank to spectrum features
  • investigate the alpha parameter in the time domain and the frequency domain
  • ALPHA >= 0, using data generated by the reverberation toolkit
  • consider theta
  • compute EER with Kaldi
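
For reference on the EER item, a minimal sketch of how an equal error rate is computed from speaker-trial scores; it mirrors what a tool such as Kaldi's compute-eer reports, but the score distributions below are synthetic placeholders rather than results from this work.

    import numpy as np

    def compute_eer(target_scores, nontarget_scores):
        """Equal error rate from two score lists (higher score = more target-like)."""
        scores = np.concatenate([target_scores, nontarget_scores])
        labels = np.concatenate([np.ones(len(target_scores)),
                                 np.zeros(len(nontarget_scores))])
        labels = labels[np.argsort(scores)]          # sweep the threshold upward
        # False rejection rate: targets at or below the threshold.
        frr = np.cumsum(labels) / len(target_scores)
        # False acceptance rate: nontargets above the threshold.
        far = 1.0 - np.cumsum(1.0 - labels) / len(nontarget_scores)
        idx = np.argmin(np.abs(frr - far))           # where the two error curves cross
        return 0.5 * (frr[idx] + far[idx])

    # Synthetic target/nontarget scores just to exercise the function.
    rng = np.random.default_rng(0)
    print("EER: %.3f" % compute_eer(rng.normal(2.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000)))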

RNN-DAE (Deep Auto-Encoder RNN)

  • hold
  • deliver to mengyuan

Speaker ID

  • DNN-based SID --Tian Lan

Ivector & Dvector based ASR

  • hold --Tian Lan
  • Cluster the speakers into speaker classes, then use the distance or the posterior probability as the metric (see the sketch after this list)
  • Directly use the dark-knowledge strategy for the i-vector training.
  • The smaller the i-vector dimension, the better the performance
  • Augmenting the hidden layer is better than augmenting the input layer
  • train on WSJ (test sets: dev93 + eval92)
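
A minimal sketch of the speaker-class idea above: speaker vectors (i-vectors or d-vectors) are clustered with k-means, and a test vector is scored against the class centroids by cosine similarity turned into a soft posterior. The 400-dimensional random vectors, the 64 classes, and the temperature are illustrative assumptions, not settings from this report.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    train_vectors = rng.standard_normal((5000, 400))   # placeholder i-vectors/d-vectors

    # Cluster the training speakers into speaker classes (64 is an arbitrary choice here).
    classes = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_vectors)

    def class_posteriors(vec, centroids, temperature=0.1):
        """Cosine similarity to each class centroid, softmax-normalised into a posterior."""
        sims = centroids @ vec / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(vec) + 1e-8)
        logits = sims / temperature
        logits -= logits.max()
        probs = np.exp(logits)
        return probs / probs.sum()

    test_vec = rng.standard_normal(400)
    post = class_posteriors(test_vec, classes.cluster_centers_)
    print("most likely speaker class: %d (posterior %.3f)" % (post.argmax(), post.max()))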

Dark knowledge

  • Ensemble: use the 100h dataset to construct different structures -- Mengyuan (a dark-knowledge training sketch follows this list)
  • adaptation between English and Chinglish
  • Try to improve the Chinglish performance as much as possible
  • unsupervised training with WSJ contributes to the AURORA4 model --Xiangyu Zeng
  • test large database with AMIDA
  • test hidden layer knowledge transfer --xuewei
  • test a random last output layer when training with MPE --zhiyuan
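
As background for this section, a minimal sketch of the usual dark-knowledge (teacher-student) objective: the student is trained toward the teacher's temperature-softened posteriors, interpolated with the ordinary hard-label cross-entropy. The batch size, senone count, temperature and interpolation weight are illustrative assumptions only.

    import numpy as np

    def softmax(logits, T=1.0):
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def dark_knowledge_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
        # Cross-entropy against the teacher's softened posteriors ("dark knowledge") ...
        soft_targets = softmax(teacher_logits, T)
        soft_ce = -(soft_targets * np.log(softmax(student_logits, T) + 1e-12)).sum(axis=-1)
        # ... interpolated with the usual hard-label cross-entropy.
        hard_post = softmax(student_logits)
        hard_ce = -np.log(hard_post[np.arange(len(hard_labels)), hard_labels] + 1e-12)
        return alpha * soft_ce.mean() + (1.0 - alpha) * hard_ce.mean()

    rng = np.random.default_rng(0)
    batch, senones = 8, 3000                      # illustrative sizes
    loss = dark_knowledge_loss(rng.standard_normal((batch, senones)),
                               rng.standard_normal((batch, senones)),
                               rng.integers(0, senones, batch))
    print("distillation loss: %.3f" % loss)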

bilingual recognition

  • hold

language vector

  • train a DNN with a language vector --xuewei
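
A minimal sketch of what "train a DNN with a language vector" can look like at the feature level: a per-utterance language code is appended to every acoustic frame so that one network can be trained across languages. The 40-dimensional fbank frames and the three-language inventory are assumptions for illustration.

    import numpy as np

    LANGUAGES = ["english", "chinese", "chinglish"]      # illustrative inventory

    def add_language_vector(frames, language):
        """frames: (num_frames, feat_dim) acoustic features for one utterance."""
        one_hot = np.zeros(len(LANGUAGES))
        one_hot[LANGUAGES.index(language)] = 1.0
        tiled = np.tile(one_hot, (frames.shape[0], 1))   # same code on every frame
        return np.hstack([frames, tiled])                # DNN input = fbank + language vector

    utt = np.random.default_rng(0).standard_normal((300, 40))   # 300 frames of 40-dim fbank
    print(add_language_vector(utt, "chinglish").shape)          # (300, 43)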

Text Processing

RNN LM

  • character-LM RNN (hold)
  • LSTM + RNN
  • check how the lstm-rnnlm code initializes and updates the learning rate (hold)
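
For the learning-rate item, a minimal sketch of the schedule used by RNN LM toolkits in the style of Mikolov's rnnlm: keep the rate while validation perplexity improves by at least a small factor, then halve it each epoch and stop once improvement stalls again. train_one_epoch and valid_ppl are hypothetical stand-ins for the real training and validation routines.

    def train_rnnlm(train_one_epoch, valid_ppl, lr=0.1, min_improvement=1.003, max_epochs=50):
        """Halving-style learning-rate schedule driven by validation perplexity."""
        best_ppl = float("inf")
        halving = False
        for epoch in range(max_epochs):
            train_one_epoch(lr)
            ppl = valid_ppl()
            improvement = best_ppl / ppl if ppl > 0 else 0.0
            if improvement < min_improvement:
                if halving:          # a second stalled epoch after halving began: stop
                    break
                halving = True       # start halving the learning rate from now on
            if halving:
                lr /= 2.0
            best_ppl = min(best_ppl, ppl)
            print("epoch %d: lr=%.4f  valid ppl=%.2f" % (epoch, lr, ppl))
        return best_ppl

    # Toy run with a fake perplexity trace, just to show the halving behaviour.
    ppls = iter([200.0, 160.0, 150.0, 149.5, 149.3, 149.3])
    train_rnnlm(lambda lr: None, lambda: next(ppls))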

W2V based document classification

  • APSIPA paper
  • adapt a CNN to address the low-resource problem
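
As background for this section, a minimal sketch of the W2V-based document representation: each document is mapped to the average of its word2vec vectors and a simple classifier is trained on top (a CNN over the word-vector sequence, as in the bullet above, is the stronger variant). The toy corpus, the labels and the gensim 4.x API usage are assumptions for illustration, not the setup of the APSIPA paper.

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    docs = [("the stock market fell sharply today", "finance"),
            ("the team won the football match", "sports"),
            ("investors worry about interest rates", "finance"),
            ("the striker scored two goals", "sports")]

    tokenized = [text.split() for text, _ in docs]
    w2v = Word2Vec(sentences=tokenized, vector_size=50, window=3, min_count=1, seed=1)

    def doc_vector(tokens, model):
        """Average the word vectors of the tokens that are in the vocabulary."""
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

    X = np.stack([doc_vector(toks, w2v) for toks in tokenized])
    y = [label for _, label in docs]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict([doc_vector("goals and matches".split(), w2v)]))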

Pair-wise LM

  • draft the journal paper

Order representation

  • modify the objective function (hold)
  • sub-sampling method to handle low-frequency words (hold; see the sketch after this list)
  • journal paper
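
For the sub-sampling item above (assuming it refers to word2vec-style sub-sampling of frequent words, which indirectly gives low-frequency words a larger share of the updates), a minimal sketch of the rule: a word w is kept with probability min(1, sqrt(t / f(w))), where f(w) is its corpus frequency and t is a small threshold. The toy counts and t = 1e-3 are illustrative.

    import math, random

    counts = {"the": 50000, "of": 30000, "morpheme": 12, "reverberation": 5}
    total = sum(counts.values())
    t = 1e-3

    def keep_probability(word):
        f = counts[word] / total                 # unigram frequency of the word
        return min(1.0, math.sqrt(t / f))        # frequent words are kept less often

    random.seed(0)
    corpus = ["the", "of", "the", "morpheme", "the", "reverberation"]
    kept = [w for w in corpus if random.random() < keep_probability(w)]
    print({w: round(keep_probability(w), 3) for w in counts})
    print("kept:", kept)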

binary vector

  • NIPS paper

Stochastic ListNet

  • done

relation classifier

  • done

plan to do

  • combine LDA with a neural network
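
A minimal sketch of one way to read "combine LDA with a neural network": LDA topic posteriors are appended to the bag-of-words features of each document and a small neural classifier is trained on the combined input. This is only an illustration of the plan item on assumed toy data, not a method described in this report.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.neural_network import MLPClassifier

    texts = ["stocks and bonds and markets", "football goals and matches",
             "interest rates and banks", "tennis players and matches"]
    labels = ["finance", "sports", "finance", "sports"]

    bow = CountVectorizer().fit_transform(texts)                            # word-count features
    topics = LatentDirichletAllocation(n_components=2,
                                       random_state=0).fit_transform(bow)   # topic posteriors

    features = np.hstack([bow.toarray(), topics])                           # counts + LDA topics
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(features, labels)
    print(clf.predict(features[:1]))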