ASR:2015-04-08
==Speech Processing==
=== AM development ===
==== Environment ====
* grid-11 often shuts down automatically; computation speed is too slow.


==== RNN AM ====
* details at http://liuc.cslt.org/pages/rnnam.html
* tuning parameters on the monophone NN
* run on WSJ with MPE training


==== Mic-Array ====
* investigate the alpha parameter in the time domain and the frequency domain
* ALPHA >= 0


====Convolutive network====
* HOLD
:* CNN + DNN feature fusion

====RNN-DAE (RNN-based deep auto-encoder)====
* HOLD -- Zhiyong
* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261


===Speaker ID===
:* DNN-based SID -- Yiye (see the sketch below)
:* decoding -- Yiye
:* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=327
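A minimal numpy sketch of one common DNN-based SID setup (a d-vector style pipeline): frame-level hidden activations from a speaker-discriminant DNN are averaged into an utterance embedding, and two utterances are scored by cosine similarity. This is an illustration under those assumptions, not the exact recipe used in the work above.
<pre>
# d-vector style SID sketch (assumed setup, not the exact recipe used here)
import numpy as np

def utterance_dvector(frame_activations):
    # frame_activations: (num_frames, hidden_dim) activations taken from a
    # chosen hidden layer of a speaker-discriminant DNN
    v = frame_activations.mean(axis=0)          # average over frames
    return v / (np.linalg.norm(v) + 1e-8)       # length-normalise

def cosine_score(enroll_frames, test_frames):
    return float(np.dot(utterance_dvector(enroll_frames),
                        utterance_dvector(test_frames)))

# toy usage with random "activations" standing in for real DNN outputs
enroll = np.random.randn(300, 400)
test = np.random.randn(250, 400)
print(cosine_score(enroll, test))
</pre>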

===Ivector based ASR===
:* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
:* smaller i-vector dimension gives better performance
:* augmenting the hidden layer works better than augmenting the input layer (see the sketch below)
:* trained on WSJ (test sets: dev93 + eval92)
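A small numpy sketch of the two augmentation points compared above: appending the utterance i-vector either to every input frame or to a hidden-layer activation. Dimensions and the single-layer forward pass are illustrative assumptions.
<pre>
# i-vector augmentation at the input vs. at a hidden layer (illustrative only)
import numpy as np

feat_dim, ivec_dim, hid_dim = 40, 100, 1024
frames = np.random.randn(500, feat_dim)          # one utterance of features
ivector = np.random.randn(ivec_dim)              # utterance-level i-vector
tiled = np.tile(ivector, (frames.shape[0], 1))   # repeat i-vector per frame

# (a) input-layer augmentation: concatenate the i-vector to every frame
aug_input = np.hstack([frames, tiled])           # shape (500, 140)

# (b) hidden-layer augmentation: concatenate it to a hidden activation instead
W1 = np.random.randn(feat_dim, hid_dim) * 0.01
hidden = np.maximum(frames @ W1, 0)              # first-layer ReLU output
aug_hidden = np.hstack([hidden, tiled])          # shape (500, 1124)

print(aug_input.shape, aug_hidden.shape)
</pre>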

==Text Processing==
===tag LM===
* similar word extension in FST
:* check the formula using Bayes' rule and verify it by experiment (a candidate form is written below)
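One plausible form of the decomposition being checked (an assumption, written down only for reference): when a similar word w' is attached to the class/tag c of an in-vocabulary word in the FST, its probability can be factored as
:<math>P(w' \mid h) \approx P(c \mid h)\, P(w' \mid c),</math>
where h is the history, P(c | h) comes from the tag LM, and P(w' | c) redistributes the class probability mass over the similar words.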

====RNN LM====
* rnn
:* code the character LM using Theano (see the sketch below)
* lstm+rnn
:* check the lstm-rnnlm code for how the learning rate is initialized and updated. (hold)
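A compact numpy sketch of the character-level RNN LM forward pass (the actual implementation is being written in Theano; names and sizes here are illustrative).
<pre>
# character-level RNN LM: forward pass and per-character cross-entropy
# (numpy illustration of the model being coded in Theano)
import numpy as np

vocab, hid = 100, 200                      # character vocabulary, hidden size
Wx = np.random.randn(vocab, hid) * 0.01    # input (one-hot char) -> hidden
Wh = np.random.randn(hid, hid) * 0.01      # hidden -> hidden (recurrence)
Wy = np.random.randn(hid, vocab) * 0.01    # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sequence_loss(char_ids):
    h, loss = np.zeros(hid), 0.0
    for cur, nxt in zip(char_ids[:-1], char_ids[1:]):
        h = np.tanh(Wx[cur] + h @ Wh)      # recurrent state update
        p = softmax(h @ Wy)                # distribution over next character
        loss -= np.log(p[nxt] + 1e-12)
    return loss / (len(char_ids) - 1)      # average cross-entropy

print(sequence_loss([3, 17, 42, 5, 9]))    # toy sequence of character ids
</pre>
On the lstm-rnnlm learning-rate question: one common convention (e.g. in Mikolov's rnnlm toolkit) is to keep a fixed rate and halve it once validation entropy stops improving; whether this code follows the same scheme is what needs checking.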

====W2V based doc classification====
* corpus ready
* study some benchmarks (a possible baseline is sketched below).
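A minimal sketch of one common baseline for this task (an assumption about the benchmark, not necessarily the one chosen): represent each document by the average of its word2vec vectors and train a linear classifier on top.
<pre>
# w2v-based document classification baseline: mean word vector + logistic regression
# (illustrative; w2v here is a dict of random vectors standing in for real embeddings)
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_vector(tokens, w2v, dim=100):
    vecs = [w2v[t] for t in tokens if t in w2v]   # skip out-of-vocabulary words
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

w2v = {w: np.random.randn(100) for w in ["good", "movie", "bad", "price"]}
X = np.vstack([doc_vector(["good", "movie"], w2v),
               doc_vector(["bad", "price"], w2v)])
y = np.array([1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
</pre>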

===Translation===
* v5.0 demo released
:* cut down the dictionary and use the new segmentation tool

===Sparse NN in NLP===
* prepare the ACL paper
:* check the code to find the problem.
:* increase the dimension
:* use different test sets, but the results are not good.

===online learning===
* data is ready; prepare the ACL paper
:* prepare the SogouQ data and test it with the current online learning method
:* the baseline results are not normal.