Difference between revisions of "ASR:2015-06-15"

From cslt Wiki

Zxw (talk | contribs)
 

Revision as of 06:33, 17 June 2015 (Wednesday)

Speech Processing

AM development

Environment

RNN AM

  • morpheme RNN --Zhiyuan
  • RNN MPE --Zhiyuan and Xuewei

Mic-Array

  • hold
  • Change the prediction from fbank to spectrum features
  • investigate the alpha parameter in the time domain and the frequency domain
  • ALPHA >= 0, using data generated by the reverb toolkit
  • consider theta
  • compute EER with Kaldi
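The "compute EER with Kaldi" item refers to Kaldi's own scoring tools; as a standalone illustration of the metric itself, a minimal NumPy sketch that finds the operating point where false-alarm and miss rates cross (the toy scores are made up):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal Error Rate: operating point where false-alarm rate == miss rate."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    # Sort trials by score, highest first, and sweep the decision threshold.
    order = np.argsort(-scores)
    labels = labels[order]
    # After accepting the top-k trials: false alarms are accepted nontargets,
    # misses are rejected targets.
    fa = np.cumsum(1 - labels) / max(len(nontarget_scores), 1)
    miss = 1.0 - np.cumsum(labels) / max(len(target_scores), 1)
    idx = np.argmin(np.abs(fa - miss))
    return (fa[idx] + miss[idx]) / 2.0

eer = compute_eer(np.array([2.0, 1.5, 1.0, 0.5]),
                  np.array([0.8, 0.2, -0.5, -1.0]))
# one nontarget outscores one target → EER 0.25
```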

RNN-DAE (RNN-based Deep Auto-Encoder)

  • hold
  • deliver to mengyuan

Speaker ID

  • DNN-based sid --Lantian
      • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=327
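For context on the d-vector style of DNN-based speaker ID: a common recipe averages a frame-level DNN's last hidden layer over an utterance and scores trials by cosine similarity. A toy sketch with random stand-in weights (not the actual model; dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frame-level DNN: two tanh hidden layers. The weights are random
# stand-ins for a network trained to classify training speakers per frame.
W1, b1 = rng.standard_normal((40, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.standard_normal((64, 64)) * 0.1, np.zeros(64)

def d_vector(frames):
    """Average the last hidden layer over all frames of an utterance."""
    h = np.tanh(frames @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    d = h.mean(axis=0)
    return d / np.linalg.norm(d)          # length-normalize

# Enrollment and test utterances (toy 40-dim features).
enroll = d_vector(rng.standard_normal((200, 40)))
test = d_vector(rng.standard_normal((150, 40)))
score = float(enroll @ test)              # cosine similarity in [-1, 1]
```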

Ivector&Dvector based ASR

  • hold --Tian Lan
  • Cluster the speakers into speaker classes, then use the distance or the posterior probability as the metric
  • dark knowledge using i-vectors
      • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
  • a smaller i-vector dimension gives better performance
  • augmenting a hidden layer is better than the input layer
  • train on WSJ (test on dev93 + eval92)
      • --hold
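A minimal sketch of the "cluster the speakers into speaker classes, then use the posterior probability as the metric" idea, assuming k-means over toy i-vectors and a softmax over negative distances to the class centroids (all data and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy i-vectors for 20 training speakers (dim 10), clustered into 4 classes.
ivecs = rng.standard_normal((20, 10))

def kmeans(x, k, iters=20):
    """Plain k-means; returns the class centroids."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean(axis=0)
    return centers

centers = kmeans(ivecs, k=4)

def class_posterior(ivec, centers, tau=1.0):
    """Softmax over negative squared distances to the speaker-class centroids."""
    logit = -((centers - ivec) ** 2).sum(-1) / tau
    e = np.exp(logit - logit.max())
    return e / e.sum()

post = class_posterior(rng.standard_normal(10), centers)
```

The posterior vector can then serve as a soft speaker-class feature, or its entries compared directly as a similarity metric.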

Dark knowledge

  • Ensemble using the 100h dataset to construct different structures --Mengyuan
      • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=264 --Zhiyong Zhang
  • adaptation between English and Chinglish
      • try to improve the Chinglish performance as much as possible
  • unsupervised training with WSJ contributes to the Aurora4 model --Xiangyu Zeng
  • test a large database with AMIDA
  • test hidden-layer knowledge transfer --Xuewei
  • test randomizing the last output layer when training MPE --Zhiyuan
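The dark-knowledge strategy referenced in this section is knowledge distillation: a student network is trained against a teacher's temperature-softened posteriors in addition to the hard labels. A hedged NumPy sketch of the usual combined loss (T and lam are illustrative hyper-parameters, not values from these experiments):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, lam=0.5):
    """Mix hard-label cross-entropy with cross-entropy against the teacher's
    temperature-softened posteriors (the "dark knowledge")."""
    soft_t = softmax(teacher_logits, T)
    soft_s = softmax(student_logits, T)
    hard_s = softmax(student_logits)
    n = len(labels)
    ce_hard = -np.log(hard_s[np.arange(n), labels] + 1e-12).mean()
    ce_soft = -(soft_t * np.log(soft_s + 1e-12)).sum(axis=-1).mean()
    # The T**2 factor keeps soft-target gradients on the same scale.
    return lam * ce_hard + (1 - lam) * (T ** 2) * ce_soft

loss = distillation_loss(np.zeros((2, 3)), np.zeros((2, 3)), np.array([0, 1]))
```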

bilingual recognition

  • imbalanced datasets (10h, 100h and 1400h) to train without sharing --Zhiyong and Mengyuan
      • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=359 --Zhiyuan Tang and Mengyuan
  • record utterances mixing Chinese and English

language vector

  • hold --xuewei
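A sketch of the language-vector idea: append a one-hot language code to every input frame so a single DNN can be trained on pooled bilingual data. The two-language table below is an assumption for illustration:

```python
import numpy as np

LANGS = {"zh": 0, "en": 1}   # hypothetical two-language setup

def add_language_vector(frames, lang):
    """Append a one-hot language code to each frame's feature vector."""
    onehot = np.zeros((len(frames), len(LANGS)))
    onehot[:, LANGS[lang]] = 1.0
    return np.hstack([frames, onehot])

# 100 frames of 40-dim features tagged as Chinese → 42-dim inputs.
x = add_language_vector(np.random.randn(100, 40), "zh")
```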

DNN MPE

  • randomize the last layer, then train MPE --Zhiyuan
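A minimal sketch of "randomize the last layer, then train MPE": re-initialize only the output layer of a cross-entropy-trained network before sequence training, keeping the hidden layers intact. The dict-of-matrices model is a toy stand-in; the real recipe would operate on Kaldi nnet parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy DNN parameters: two hidden layers plus a softmax output layer.
params = {"hidden1": rng.standard_normal((40, 64)),
          "hidden2": rng.standard_normal((64, 64)),
          "output": rng.standard_normal((64, 1000))}

def reset_output_layer(params, scale=0.01):
    """Re-randomize only the output layer (small scale), leaving the
    cross-entropy-trained hidden layers untouched."""
    params["output"] = rng.standard_normal(params["output"].shape) * scale
    return params

params = reset_output_layer(params)
```

MPE training itself would then proceed as usual on the modified network.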

Text Processing

RNN LM

  • character-level RNN LM (hold)
  • LSTM + RNN
  • check the LSTM-RNNLM code for how it initializes and updates the learning rate (hold)
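Many RNNLM trainers use the classic "newbob"-style learning-rate schedule, which is likely what the "initialize and update learning rate" item refers to: keep the rate until the relative validation improvement falls below a threshold, then halve it after every subsequent epoch. A hedged sketch (start rate and threshold are illustrative):

```python
def newbob_schedule(valid_losses, lr0=0.1, threshold=0.003):
    """Return the learning rate used at each epoch, given per-epoch
    validation losses, under a newbob-style halving policy."""
    lr, halving = lr0, False
    lrs, prev = [], None
    for loss in valid_losses:
        if prev is not None and (prev - loss) / prev < threshold:
            halving = True          # improvement too small: start halving
        if halving:
            lr /= 2.0
        lrs.append(lr)
        prev = loss
    return lrs

# Improvement stalls at the third epoch, so halving starts there.
lrs = newbob_schedule([100.0, 90.0, 89.9, 89.8])
```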

W2V based document classification

  • APSIPA paper
  • adapt CNN to resolve the low-resource problem

Pair-wise LM

  • draft of the journal paper

Order representation

  • modify the objective function (hold)
  • sub-sampling method to solve the low-frequency word problem (hold)
  • journal paper
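If the sub-sampling item means word2vec-style sub-sampling of frequent words (an assumption), each occurrence of a word with corpus frequency f is kept with probability min(1, sqrt(t/f)); rare words are always kept, which shifts the training signal toward low-frequency words:

```python
import math

def keep_prob(freq, t=1e-3):
    """word2vec-style sub-sampling keep-probability for a word whose
    relative corpus frequency is `freq` (t is the usual threshold)."""
    return min(1.0, math.sqrt(t / freq))

p_frequent = keep_prob(0.05)   # a very frequent word is mostly dropped
p_rare = keep_prob(1e-5)       # a rare word is always kept
```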

binary vector

  • NIPS paper

Stochastic ListNet

  • done

relation classifier

  • done

plan to do

  • combine LDA with neural networks