ASR:2015-04-13

Latest revision as of 07:42, 15 April 2015

Speech Processing

AM development

Environment

  • grid-11 often shuts down automatically and its computation speed is too slow.
  • add a server (760)

RNN AM


Mic-Array

  • investigate the alpha parameter in the time domain and frequency domain
  • ALPHA >= 0, using data generated by the reverberation toolkit
  • consider theta


Convolutive network

  • HOLD
  • CNN + DNN feature fusion

RNN-DAE (RNN-based Deep Auto-Encoder)


Speaker ID

  • DNN-based SID --Yiye
  • Decode --Yiye
  • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=327

Ivector based ASR

  • hold
  • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
  • a smaller i-vector dimension gives better performance
  • augmenting the hidden layer works better than augmenting the input layer (see the sketch below)
  • train on WSJ (test sets: dev93 + eval92)
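
The "augment to hidden layer" result above can be pictured with a minimal numpy sketch: the utterance-level i-vector is concatenated to the activations of a hidden layer rather than to the input frame. The layer sizes, the 100-dimensional i-vector and the function names are illustrative assumptions, not the setup used in these experiments.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(frame_feats, ivector, W1, b1, W2, b2, W_out, b_out):
    # First hidden layer sees only the acoustic frame.
    h1 = relu(W1 @ frame_feats + b1)
    # Augmentation point: append the i-vector to the hidden activations,
    # instead of appending it to the input features.
    h1_aug = np.concatenate([h1, ivector])
    h2 = relu(W2 @ h1_aug + b2)
    return W_out @ h2 + b_out            # senone logits before softmax

# Illustrative dimensions (assumptions): 40-d frame, 100-d i-vector, 512-unit layers, 3000 senones.
rng = np.random.default_rng(0)
feat_dim, ivec_dim, hid, senones = 40, 100, 512, 3000
W1, b1 = 0.01 * rng.standard_normal((hid, feat_dim)), np.zeros(hid)
W2, b2 = 0.01 * rng.standard_normal((hid, hid + ivec_dim)), np.zeros(hid)
W_out, b_out = 0.01 * rng.standard_normal((senones, hid)), np.zeros(senones)
logits = forward(rng.standard_normal(feat_dim), rng.standard_normal(ivec_dim),
                 W1, b1, W2, b2, W_out, b_out)
print(logits.shape)                      # (3000,)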

Dark knowledge

  • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=264 --zhiyong
  • trial on logit matching failed --mengyuan (see the sketch below)
  • adaptation for Chinglish under investigation --mengyuan
  • unsupervised training with WSJ contributes to the Aurora4 model --xiangyu
  • test a large database with AMIDA --xiangyu
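
For context on the logit-matching trial noted above, this small numpy sketch shows the two usual dark-knowledge objectives: mean squared error between teacher and student logits, and cross-entropy against the teacher's temperature-softened posteriors. The shapes, the temperature of 2.0 and the toy data are assumptions for illustration only.

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def logit_matching_loss(student_logits, teacher_logits):
    # Mean squared error between raw logits (the "logit matching" variant).
    return np.mean((student_logits - teacher_logits) ** 2)

def soft_target_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy against the teacher's temperature-softened posteriors.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -np.mean(np.sum(p_teacher * log_p_student, axis=-1))

# Toy example: 8 frames, 20 senones (illustrative sizes).
rng = np.random.default_rng(0)
t_logits = rng.standard_normal((8, 20))
s_logits = rng.standard_normal((8, 20))
print(logit_matching_loss(s_logits, t_logits), soft_target_loss(s_logits, t_logits))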

bilingual recognition

  • http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=359 --zhiyuan

Text Processing

tag LM

  • similar-word extension in the FST
  • will check the formula using Bayes' rule and run experiments (one possible form is sketched below)
  • fixed the bug when using the big LM
  • will add more test data
  • will test the baseline (no weighting) and different weighting methods
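
The report does not state the weighting formula to be checked; the snippet below is only one plausible Bayes-style decomposition for a similar-word arc, and every quantity (P(w|h), P(w'|w), the tropical-semiring arc weight) is an assumption for illustration.

import math

def extension_arc_weight(p_w_given_h, p_wprime_given_w):
    # Assumed decomposition: P(w' | h) ≈ P(w | h) * P(w' | w),
    # i.e. the similar word w' borrows the history probability of the original word w
    # and is penalized by a word-level similarity probability.
    # Returned as a negative-log (tropical-semiring) FST arc weight.
    return -math.log(p_w_given_h) - math.log(p_wprime_given_w)

# Example: the original word has LM probability 0.02 given the history,
# and the similar word gets similarity probability 0.3.
print(extension_arc_weight(0.02, 0.3))   # ≈ 5.12, versus ≈ 3.91 for the original arc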

RNN LM

  • rnn
  • code the character LM using Theano (a minimal sketch of the model is given below)
  • lstm+rnn
  • check how the lstm-rnnlm code initializes and updates the learning rate (hold)
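
The character LM above is planned in Theano; as a framework-free illustration of the structure, here is a minimal numpy sketch of a character-level RNN LM forward pass (one-hot characters, one tanh recurrent layer, softmax over the next character). The vocabulary, sizes and weights are toy assumptions.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def char_rnn_nll(text, chars, Wxh, Whh, Why, bh, by):
    # Average negative log-likelihood of `text` under a simple character RNN LM.
    idx = {c: i for i, c in enumerate(chars)}
    h = np.zeros(Whh.shape[0])
    nll = 0.0
    for cur, nxt in zip(text[:-1], text[1:]):
        x = np.zeros(len(chars)); x[idx[cur]] = 1.0      # one-hot current character
        h = np.tanh(Wxh @ x + Whh @ h + bh)              # recurrent hidden state
        p = softmax(Why @ h + by)                        # distribution over the next character
        nll -= np.log(p[idx[nxt]] + 1e-12)
    return nll / (len(text) - 1)

# Toy vocabulary and random weights (illustrative sizes only).
chars = list("abc ")
V, H = len(chars), 16
rng = np.random.default_rng(0)
Wxh, Whh, Why = 0.1 * rng.standard_normal((H, V)), 0.1 * rng.standard_normal((H, H)), 0.1 * rng.standard_normal((V, H))
bh, by = np.zeros(H), np.zeros(V)
print(char_rnn_nll("abc abc abc", chars, Wxh, Whh, Why, bh, by))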

W2V based document classification

  • some results for the vMF model [1]
  • will try the max method (see the sketch below)
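
The "max method" above is not specified in the report; the sketch below assumes it means element-wise max pooling over the document's word2vec vectors, shown next to the usual mean pooling. Dimensions and data are illustrative.

import numpy as np

def doc_vector(word_vectors, method="mean"):
    # Pool a (num_words, dim) matrix of word2vec vectors into one document vector.
    M = np.asarray(word_vectors)
    if method == "mean":
        return M.mean(axis=0)
    if method == "max":                    # element-wise max over the words
        return M.max(axis=0)
    raise ValueError(method)

# Toy document of 5 words with 8-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
words = rng.standard_normal((5, 8))
print(doc_vector(words, "mean"))
print(doc_vector(words, "max"))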

Translation

  • v5.0 demo released
  • cut the dictionary and use the new segmentation tool

Sparse NN in NLP

  • tested the drop-out model; performance improves a little, more results are needed (a minimal drop-out sketch is given below)
  • test the order feature
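
The drop-out experiments above do not say which variant is used; this is a generic inverted drop-out sketch with an assumed rate of 0.5, shown only to make the mechanism concrete.

import numpy as np

def dropout(activations, rate=0.5, train=True, rng=None):
    # Inverted drop-out: zero a fraction `rate` of units and rescale at training time.
    if not train or rate == 0.0:
        return activations                       # no masking at test time
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)     # rescale so the expected value is unchanged

h = np.ones((2, 4))
print(dropout(h, rate=0.5, rng=np.random.default_rng(0)))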

online learning

  • data is ready; preparing the ACL paper
  • modified the ListNet SGD (the loss is sketched below)
  • finished some tests
  • test the results at different time points
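
As a reference for the ListNet SGD item above, here is a small numpy sketch of the standard ListNet top-one loss and one SGD step for a linear scorer. The feature sizes, relevance labels and learning rate are illustrative assumptions; the actual modification mentioned in the report is not described.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def listnet_loss_and_grad(X, y, w):
    # ListNet top-one loss for one query and its gradient for a linear scorer z = Xw.
    # X: (num_docs, num_feats) feature matrix, y: (num_docs,) relevance scores.
    z = X @ w
    p_true, p_pred = softmax(y), softmax(z)
    loss = -np.sum(p_true * np.log(p_pred + 1e-12))
    grad = X.T @ (p_pred - p_true)          # d loss / d w
    return loss, grad

# One SGD step on a toy query (sizes and learning rate are illustrative).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((4, 6)), np.array([2.0, 1.0, 0.0, 0.0])
w = np.zeros(6)
loss, grad = listnet_loss_and_grad(X, y, w)
w -= 0.1 * grad                             # SGD update
print(loss, listnet_loss_and_grad(X, y, w)[0])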

relation classifier

  • modified the drop-out method