ASR:2015-05-04
From cslt Wiki
Zxw (talk | contribs)

==Speech Processing==

=== AM development ===
  
 
==== Environment ====
* grid-11 often shuts down automatically; its computation speed is too slow.
* grid-15 often does not work.
* New grid-13 added, using gpu970.
* To do: update the wiki environment information.
  
 
==== RNN AM ====
* Details at http://liuc.cslt.org/pages/rnnam.html
* Test monophone on RNN using dark knowledge --Chao Liu
* Run using WSJ, MPE --Chao Liu
* Run bi-directional --Chao Liu
* Modify code --Zhiyuan
  
 
==== Mic-Array ====
* Change the prediction from fbank to spectrum features
* Investigate the alpha parameter in the time domain and the frequency domain
* ALPHA >= 0, using data generated by the REVERB toolkit
* Consider theta
* Compute EER with Kaldi
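The EER bullet can be prototyped outside Kaldi. A minimal sketch, assuming raw trial scores are already available (this helper is illustrative, not Kaldi's implementation): given target-trial and impostor-trial scores, EER is the operating point where the false-rejection and false-acceptance rates cross.

```python
def compute_eer(target_scores, impostor_scores):
    """Equal error rate: the threshold where the false-rejection rate
    (targets scored below it) equals the false-acceptance rate
    (impostors scored at or above it)."""
    best_gap, best_rates = 1.0, (1.0, 1.0)
    # Sweep candidate thresholds taken from the scores themselves.
    for thr in sorted(set(target_scores) | set(impostor_scores)):
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        far = sum(s >= thr for s in impostor_scores) / len(impostor_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_rates = abs(far - frr), (far, frr)
    # Report the midpoint of the two rates at the crossing point.
    return sum(best_rates) / 2
```

For example, `compute_eer([2.1, 1.8, 1.5, 0.9], [0.2, 0.4, 1.0, 1.6])` gives 0.25, since one of four trials on each side is misclassified at the crossing threshold.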
  
 
==== RNN-DAE (RNN-based Deep Auto-Encoder) ====
  
 
=== Speaker ID ===

=== Ivector&Dvector based ASR ===
* Hold --Tian Lan
:* Cluster the speakers into speaker classes, then use the distance or the posterior probability as the metric
:* Directly use the dark-knowledge strategy for the i-vector training
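The clustering idea in the bullets above can be sketched as follows: group speaker vectors into classes, then score a test vector against the class centroids, either by cosine distance or by a softmax posterior. All function names and vectors here are hypothetical toy values; real i-vectors would come from the extractor.

```python
import math

def centroid(vectors):
    """Mean vector of one speaker class."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_posteriors(x, centroids):
    """Softmax over cosine similarities: the posterior-probability
    metric variant; 1 - cosine would give the distance variant."""
    exps = [math.exp(cosine(x, c)) for c in centroids]
    z = sum(exps)
    return [e / z for e in exps]
```

A test vector close to one class's centroid then receives the highest posterior for that class.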
  
 
=== Dark knowledge ===
:* Ensemble: using the 100h dataset to construct different structures -- Mengyuan
::* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=264 --Zhiyong Zhang
:* Adaptation for Chinglish under investigation --Mengyuan Zhao
:* Try to improve the Chinglish performance further
:* Unsupervised training with WSJ contributes to the Aurora4 model --Xiangyu Zeng
:* Test a large database with AMIDA
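A minimal sketch of the dark-knowledge recipe behind these bullets, with illustrative function names: soften each teacher's output with a temperature, average across the ensemble, and train the student toward the averaged soft targets.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution,
    exposing the 'dark knowledge' carried by small logits."""
    m = max(x / T for x in logits)
    exps = [math.exp(x / T - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def ensemble_soft_targets(teacher_logits, T=2.0):
    """Average the softened posteriors of several teacher models."""
    probs = [softmax(l, T) for l in teacher_logits]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

def distill_loss(student_logits, soft_targets, T=2.0):
    """Cross-entropy between the soft targets and the student's
    temperature-softened output."""
    q = softmax(student_logits, T)
    return -sum(t * math.log(qi) for t, qi in zip(soft_targets, q))
```

In practice the distillation loss is usually interpolated with the hard-label cross-entropy; the sketch shows only the soft-target term.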
  
 
=== bilingual recognition ===
:* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=359 --Zhiyuan Tang and Mengyuan
  
 

Revision as of 08:02, 6 May 2015 (Wed)

==Text Processing==

=== tag LM ===
* Similar-word extension in FST
* Will check the formula using Bayes' rule and run experiments
* Add similarity weight
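The similar-word extension with a similarity weight might look like this in spirit; the dictionary here is a toy stand-in for the FST arcs, and the words, weights, and similarity table are all made up for illustration.

```python
import math

# Hypothetical LM arcs: word -> weight in the tropical semiring (-log prob).
arcs = {"beijing": 1.2, "shanghai": 1.5}

# Hypothetical per-word similarity list (e.g. word2vec cosine similarity).
similar = {"beijing": [("peking", 0.9)]}

def extend_with_similar(arcs, similar):
    """For each arc, add arcs for its similar words, weighted by the
    original weight plus -log(similarity): less similar costs more."""
    extended = dict(arcs)
    for word, weight in arcs.items():
        for sim_word, sim in similar.get(word, []):
            new_weight = weight - math.log(sim)
            # Keep the cheaper path if the word already has an arc.
            if sim_word not in extended or new_weight < extended[sim_word]:
                extended[sim_word] = new_weight
    return extended
```

Since -log(0.9) is a small positive penalty, the added word is reachable but slightly more expensive than the original.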

=== RNN LM ===
* rnn
* Test the PPL and code the character LM
* lstm+rnn
* Check the lstm-rnnlm code for how to initialize and update the learning rate. (hold)
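The PPL test above can be sanity-checked against a tiny baseline. A sketch using an add-alpha smoothed character bigram model; the perplexity definition is the same one used to evaluate an RNN or LSTM LM.

```python
import math
from collections import Counter

def char_bigram_ppl(train_text, test_text, alpha=1.0):
    """Perplexity of an add-alpha smoothed character bigram model:
    exp of the average negative log-probability per character."""
    vocab = sorted(set(train_text + test_text))
    bigrams = Counter(zip(train_text, train_text[1:]))
    unigrams = Counter(train_text[:-1])  # counts of conditioning chars
    log_prob, n = 0.0, 0
    for prev, cur in zip(test_text, test_text[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * len(vocab))
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)
```

Text that follows the training pattern scores a much lower perplexity than text that breaks it, which is the basic check before trusting an LM's PPL numbers.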

=== W2V based document classification ===
* Result for the norm model [1]
* Try a CNN model
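Assuming the "norm model" refers to L2-normalizing word vectors before averaging them into a document vector, the contrast can be sketched as follows; the vectors are toy values, not real word2vec output.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length (leave zero vectors unchanged)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def doc_vector(word_vectors, normalize=True):
    """Document vector = average of word vectors; L2-normalizing each
    word first stops long (often frequent) vectors from dominating."""
    vecs = [l2_normalize(v) for v in word_vectors] if normalize else word_vectors
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```

With normalization each word contributes equally to the document direction; without it, the longer vector dominates the average.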

=== Translation ===
* v5.0 demo released
* Cut the dictionary and use the new segmentation tool

=== Sparse NN in NLP ===
* Sparse NN with 1000 dimensions (1e-6, 0.705236) is better than with 200 dimensions (1e-12, 0.694678).

=== online learning ===
* Modified the ListNet SGD
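The ListNet bullet centers on the list-wise top-one cross-entropy loss. A sketch of one SGD step for a linear scorer, assuming per-document feature vectors and relevance labels; the gradient of the loss with respect to each document's score is P_model(j) - P_label(j).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def listnet_sgd_step(w, doc_features, labels, lr=0.1):
    """One SGD step of ListNet's top-one cross-entropy for a linear
    scorer: push the model's score distribution toward the label
    distribution by following P_model - P_label per document."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in doc_features]
    p_model, p_label = softmax(scores), softmax(labels)
    for x, pm, pl in zip(doc_features, p_model, p_label):
        for i in range(len(w)):
            w[i] -= lr * (pm - pl) * x[i]
    return w
```

After a few dozen steps on a toy query, the learned weights rank the high-label document above the low-label one.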

=== relation classifier ===
* Check the CNN code and contact the author of the paper