Difference between revisions of "ASR:2015-04-20"

From cslt Wiki

Latest revision as of 08:49, 22 April 2015 (Wednesday)

Speech Processing

AM development

Environment

  • grid-11 often shuts down automatically; its computation speed is too slow.
  • New grid-13 added, using a GTX 970 GPU
  • To update the wiki environment information

RNN AM

Mic-Array

  • Change the prediction target from fbank to spectrum features
  • Investigate the alpha parameter in the time domain and the frequency domain
  • ALPHA >= 0, using data generated by the REVERB toolkit
  • Consider theta

RNN-DAE (RNN-based Deep Auto-Encoder)

Speaker ID

i-vector & d-vector based ASR

Dark knowledge

  • Ensemble: http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=264 --Zhiyong Zhang
  • Adaptation for Chinglish under investigation --Mengyuan Zhao
  • Try to improve the Chinglish performance as much as possible
  • Unsupervised training with WSJ improves the Aurora4 model --Xiangyu Zeng
  • Test a large database with AMIDA
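The "dark knowledge" items above refer to transferring an ensemble's knowledge into a single model by training on temperature-softened teacher outputs. A minimal sketch of that distillation loss, assuming a generic classifier; the temperature `T`, mixing weight `alpha`, and function names are illustrative, not taken from the group's code:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dark_knowledge_loss(student_logits, teacher_logits, hard_label,
                        T=2.0, alpha=0.5):
    """Weighted sum of soft (teacher) and hard (true-label) cross-entropy."""
    soft_targets = softmax(teacher_logits, T)      # ensemble's softened output
    student_soft = softmax(student_logits, T)
    soft_ce = -np.sum(soft_targets * np.log(student_soft + 1e-12))
    student_hard = softmax(student_logits, 1.0)    # ordinary softmax for labels
    hard_ce = -np.log(student_hard[hard_label] + 1e-12)
    return alpha * soft_ce + (1.0 - alpha) * hard_ce
```

The teacher here would be the ensemble's averaged logits; the student is the single model being adapted.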

bilingual recognition

Text Processing

tag LM

  • Similar-word extension in FST
  • Will check the formula using Bayes' rule and run experiments
  • Add a similarity weight
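One way the similar-word extension could work is to duplicate each word arc for the word's neighbours, penalizing the new arc by the similarity weight. A hedged sketch over a plain tuple representation of arcs; the tuple layout, the `similar` map, and the `-log(similarity)` weighting are assumptions, not the group's actual FST code:

```python
import math

def extend_with_similar_words(arcs, similar, base=1.0):
    """For each arc (src, dst, word, weight), add a parallel arc for every
    similar word, with the extra cost base * -log(similarity)."""
    extended = list(arcs)
    for (src, dst, word, weight) in arcs:
        for sim_word, sim in similar.get(word, []):
            extended.append((src, dst, sim_word,
                             weight + base * -math.log(sim)))
    return extended
```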

RNN LM

  • RNN
  • Test the perplexity (PPL) and implement the character-level LM
  • LSTM+RNN
  • Check the LSTM-RNNLM code for how the learning rate is initialized and updated (on hold)
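For the character-LM perplexity item: PPL is the exponential of the negative mean log-probability over the evaluation text. A minimal add-one-smoothed character bigram sketch to make the metric concrete; the model itself is illustrative, not the group's RNN/LSTM LM:

```python
import math
from collections import Counter

def train_char_bigram(text):
    """Return a log-probability function for an add-one-smoothed
    character bigram model trained on `text`."""
    bigrams = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    vocab_size = len(set(text))
    def logp(c_prev, c):
        return math.log((bigrams[(c_prev, c)] + 1) /
                        (unigrams[c_prev] + vocab_size))
    return logp

def perplexity(logp, text):
    """PPL = exp(-1/N * sum of log-probabilities over the text."""
    lps = [logp(a, b) for a, b in zip(text, text[1:])]
    return math.exp(-sum(lps) / len(lps))
```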

W2V based document classification

  • Result for the norm model [1]
  • Try a CNN model
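A word2vec-based document classifier of this kind typically averages the word vectors of a document (the "norm model" presumably L2-normalizes the result) and compares against per-class centroids. A sketch under those assumptions; the toy vectors and the nearest-centroid rule are illustrative:

```python
import numpy as np

def doc_vector(tokens, w2v, dim):
    """Average the word vectors of a document and L2-normalize
    (our reading of 'norm model')."""
    vecs = [w2v[t] for t in tokens if t in w2v]
    if not vecs:
        return np.zeros(dim)
    v = np.mean(vecs, axis=0)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def classify(doc_vec, class_centroids):
    """Nearest-centroid classification by cosine similarity."""
    best, best_sim = None, -2.0
    for label, c in class_centroids.items():
        sim = float(np.dot(doc_vec, c) / (np.linalg.norm(c) + 1e-12))
        if sim > best_sim:
            best, best_sim = label, sim
    return best
```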

Translation

  • v5.0 demo released
  • Cut down the dictionary and use the new segmentation tool

Sparse NN in NLP

  • Sparse-NN with 1000 dimensions (1e-6, 0.705236) is better than 200 dimensions (1e-12, 0.694678).
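The two thresholds quoted (1e-6 and 1e-12) read as the cut-off below which hidden activations are zeroed, so a higher threshold gives a sparser representation. A sketch of that sparsification step; the thresholding rule is our interpretation of the experiment, not confirmed code:

```python
import numpy as np

def sparsify(h, threshold=1e-6):
    """Zero out hidden activations whose magnitude is below threshold."""
    out = h.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

def sparsity(h):
    """Fraction of exactly-zero entries in the representation."""
    return float(np.mean(h == 0.0))
```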

online learning

  • Modified the ListNet SGD
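ListNet minimizes the cross-entropy between the top-one probability distributions induced by the predicted scores and by the relevance labels, and its gradient with respect to the scores is just the difference of two softmaxes. A minimal sketch of one SGD step; the linear scorer and learning rate are illustrative, not the modified code referred to above:

```python
import numpy as np

def listnet_grad(scores, labels):
    """Gradient of the ListNet top-one cross-entropy w.r.t. the scores:
    softmax(scores) - softmax(labels)."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    return softmax(scores) - softmax(labels)

def sgd_step(w, X, labels, lr=0.1):
    """One SGD update of a linear scorer w on one query's document list X
    (rows are document feature vectors)."""
    scores = X @ w
    g = listnet_grad(scores, labels)
    return w - lr * (X.T @ g)
```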

relation classifier

  • Check the CNN code and contact the paper's author