ASR:2015-03-30: difference between revisions
From cslt Wiki
Revision as of 05:34, 30 March 2015 (Monday)
Speech Processing
AM development
Environment
- grid-11 often shuts down automatically; computation speed is too slow.
- GPU is being repaired. --Xuewei
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- tuning parameters on monophone NN
Mic-Array
- investigate the alpha parameter in the time domain and the frequency domain
Dropout & Maxout & rectifier
- HOLD
- Need to solve the problem of the learning rate becoming too small
- 20h small scale sparse dnn with rectifier. --Mengyuan
- 20h small scale sparse dnn with Maxout/rectifier based on weight-magnitude-pruning. --Mengyuan Zhao
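The rectifier and maxout activations referred to above can be sketched in a few lines of numpy (a minimal illustration of the two activation functions, not the experiment code; `k` is the maxout group size):

```python
import numpy as np

def relu(x):
    """Rectifier activation: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def maxout(x, k):
    """Maxout activation: max over groups of k linear units.
    x has shape (..., n_units), with n_units divisible by k."""
    *lead, n = x.shape
    assert n % k == 0, "unit count must be divisible by group size"
    return x.reshape(*lead, n // k, k).max(axis=-1)

h = np.array([-1.0, 2.0, 0.5, -3.0])
relu(h)       # -> [0., 2., 0.5, 0.]
maxout(h, 2)  # -> [2., 0.5]  (max of each pair of units)
```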
Convolutional network
- HOLD
- CNN + DNN feature fusion
- reproduce experiments -- Yiye
RNN-DAE (Deep Auto-Encoder RNN)
- HOLD -Zhiyong
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report HOLD. -- Xiangyu Zeng, Shi Yin
- Paper for NCMMSC done
Neural network visualization
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=324
- Technical report done --Mian Wang.
Speaker ID
Ivector based ASR
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
- A smaller i-vector dimension gives better performance
- Augmenting a hidden layer works better than augmenting the input layer
- write paper for Interspeech -- Xuewei
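A minimal numpy sketch of what augmenting a hidden layer with the i-vector could look like (all dimensions and weight shapes here are hypothetical, not taken from the experiments): the speaker i-vector is concatenated to the first hidden layer's activations before the next affine transform, so the second layer sees both acoustic and speaker information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 40-dim features, 100-dim i-vector, 512 hidden units.
feat_dim, ivec_dim, hid_dim, out_dim = 40, 100, 512, 3000

W1 = rng.standard_normal((hid_dim, feat_dim)) * 0.01
# The second layer consumes hidden activations concatenated with the i-vector.
W2 = rng.standard_normal((out_dim, hid_dim + ivec_dim)) * 0.01

def forward(frame, ivector):
    h = np.maximum(0.0, W1 @ frame)       # first hidden layer (ReLU)
    h_aug = np.concatenate([h, ivector])  # hidden-layer augmentation
    return W2 @ h_aug                     # output logits (e.g. senone scores)

logits = forward(rng.standard_normal(feat_dim), rng.standard_normal(ivec_dim))
```

Augmenting the input layer instead would mean concatenating the i-vector to `frame` before `W1`; the note above reports the hidden-layer variant works better.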
Text Processing
tag LM
- similar word extension in FST
- check the formula using Bayes' rule and run experiments
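One plausible reading of the "check the formula using Bayes' rule" item (an assumption, not confirmed by the notes): in a tag LM, the probability of a word is decomposed through its tag, so a similar word added to the FST only needs an in-tag probability.

```latex
% Hypothetical tag-based decomposition: the history h predicts the
% tag t(w), and the tag predicts the word.
P(w \mid h) \;\approx\; P\big(t(w) \mid h\big)\, P\big(w \mid t(w)\big)
% A "similar" word w' sharing the tag reuses P(t \mid h) from the LM
% and only requires an estimate of P(w' \mid t).
```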
RNN LM
- rnn
- implement the character LM using Theano
- lstm+rnn
- check how the lstm-rnnlm code initializes and updates the learning rate (hold)
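The character-LM item above could be sketched as follows: a plain-numpy illustration of the RNN recurrence that the Theano implementation would express symbolically (the toy vocabulary, sizes, and weights here are all assumptions, not the actual work item's code).

```python
import numpy as np

vocab = list("ab ")          # toy character vocabulary (hypothetical)
V, H = len(vocab), 8         # vocab size, hidden size

rng = np.random.default_rng(1)
Wxh = rng.standard_normal((H, V)) * 0.1   # input-to-hidden
Whh = rng.standard_normal((H, H)) * 0.1   # hidden-to-hidden
Why = rng.standard_normal((V, H)) * 0.1   # hidden-to-output

def step(ch_index, h):
    """Consume one character; return (next hidden state, next-char distribution)."""
    x = np.zeros(V)
    x[ch_index] = 1.0                      # one-hot character input
    h = np.tanh(Wxh @ x + Whh @ h)         # recurrent update
    y = Why @ h
    p = np.exp(y - y.max())                # softmax over the next character
    return h, p / p.sum()

h = np.zeros(H)
for ch in "ab":
    h, p = step(vocab.index(ch), h)
# p is now the model's distribution over the character following "ab"
```

In Theano this recurrence would typically be wrapped in `theano.scan` so the whole sequence can be differentiated for training.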
W2V based doc classification
- data preparation
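One common w2v-based document representation (an assumption about the intended approach, since the notes only mention data preparation): average the word2vec vectors of a document's words and feed the mean vector to a classifier. A toy sketch with a hypothetical lookup table:

```python
import numpy as np

# Hypothetical word2vec lookup table: word -> 4-dim vector.
w2v = {
    "good":  np.array([ 0.9, 0.1, 0.0, 0.2]),
    "bad":   np.array([-0.8, 0.2, 0.1, 0.0]),
    "movie": np.array([ 0.1, 0.7, 0.3, 0.1]),
}
DIM = 4

def doc_vector(tokens):
    """Represent a document as the mean of its known word vectors;
    out-of-vocabulary words are skipped."""
    vecs = [w2v[t] for t in tokens if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

v = doc_vector("good movie".split())  # mean of the "good" and "movie" vectors
```

The fixed-length `v` can then go into any standard classifier (logistic regression, SVM, a small NN) for document classification.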
Translation
- v5.0 demo released
- prune the dictionary and use the new segmentation tool
Sparse NN in NLP
- prepare the ACL paper
- check the code to find the problem
- increase the dimension
- used a different test set, but the result is not good
Online learning
- data is ready; prepare the ACL paper
- prepare SogouQ data and test it using the current online learning method
- baseline results are abnormal