Difference between revisions of "ASR:2015-04-13"
From cslt Wiki
Latest revision as of 07:42, 15 April 2015 (Wednesday)
Speech Processing
AM development
Environment
- grid-11 often shuts down automatically; its computation speed is too slow.
- add a server (760)
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- tuning parameters on monophone NN
- run MPE training using WSJ
Mic-Array
- investigate the alpha parameter in the time domain and frequency domain
- ALPHA >= 0, using data generated by the reverb toolkit
- consider theta
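One hedged reading of the ALPHA >= 0 note above is an over-subtraction-style weight applied per frame in the frequency domain; the function name, the noise-floor estimate, and the spectral-subtraction form are all illustrative assumptions, not the group's actual method.

```python
import numpy as np

def freq_domain_weight(frame, alpha):
    """Apply a non-negative weight ALPHA to one frame in the frequency
    domain (hypothetical spectral-subtraction form; ALPHA >= 0 as noted)."""
    assert alpha >= 0
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_floor = mag.mean()                          # crude noise estimate
    mag = np.maximum(mag - alpha * noise_floor, 0.0)  # over-subtraction
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))
```

With alpha = 0 the frame passes through unchanged, which makes the parameter easy to sweep.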
Convolutive network
- HOLD
- CNN + DNN feature fusion
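The CNN + DNN feature-fusion item could be sketched as convolving a filterbank patch, max-pooling, and concatenating the pooled maps with plain DNN features before a fusion layer; all shapes and weights below are illustrative assumptions.

```python
import numpy as np

def fuse_features(fbank_patch, dnn_feats, conv_filters, W_fuse):
    """CNN + DNN feature fusion (hedged sketch): convolve along the
    frequency axis, max-pool each feature map, concatenate with the
    DNN features, then apply a fusion layer."""
    conv = np.array([np.convolve(fbank_patch, f, mode="valid")
                     for f in conv_filters])
    pooled = conv.max(axis=1)                    # max-pool each map
    fused = np.concatenate([pooled, dnn_feats])  # fusion by concatenation
    return np.tanh(fused @ W_fuse)
```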
RNN-DAE (Deep Auto-Encoder RNN)
- HOLD -Zhiyong
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
Speaker ID
- DNN-based sid --Yiye
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=327
Ivector based ASR
- hold
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
- the smaller the i-vector dimension, the better the performance
- augmenting at a hidden layer is better than at the input layer
- trained on WSJ (test sets: dev93 + eval92)
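The finding that augmenting a hidden layer beats augmenting the input can be sketched as appending the utterance i-vector to the first hidden layer's activations; the layer sizes and weights below are toy values, not the actual recipe.

```python
import numpy as np

def forward(feats, ivector, W1, W2, W_out):
    """DNN forward pass that augments the *hidden* layer (not the input)
    with the utterance i-vector, per the note above."""
    h1 = np.tanh(feats @ W1)                # first hidden layer
    h1_aug = np.concatenate([h1, ivector])  # append i-vector here
    h2 = np.tanh(h1_aug @ W2)
    return h2 @ W_out                       # output logits

# Toy dimensions for illustration only.
rng = np.random.default_rng(0)
feat_dim, ivec_dim, hid, n_out = 40, 100, 64, 10
W1 = rng.standard_normal((feat_dim, hid)) * 0.1
W2 = rng.standard_normal((hid + ivec_dim, hid)) * 0.1
W_out = rng.standard_normal((hid, n_out)) * 0.1
out = forward(rng.standard_normal(feat_dim), rng.standard_normal(ivec_dim),
              W1, W2, W_out)
```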
Dark knowledge
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=264 --zhiyong
- trial on logit matching failed --mengyuan
- adaptation for Chinglish under investigation --mengyuan
- unsupervised training with WSJ contributes to the Aurora4 model --xiangyu
- test a large database with amida --xiangyu
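The two dark-knowledge variants above can be contrasted in a minimal sketch: the soft-target cross-entropy at a temperature (standard knowledge distillation) versus direct L2 logit matching (the variant reported as failing). The temperature value is illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target cross-entropy at temperature T ('dark knowledge')."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -np.sum(p * np.log(q + 1e-12))

def logit_match_loss(student_logits, teacher_logits):
    """Direct logit matching (L2), the variant noted above as failing."""
    return 0.5 * np.mean((student_logits - teacher_logits) ** 2)
```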
bilingual recognition
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=359 --zhiyuan
Text Processing
tag LM
- similar word extension in FST
- will check the formula using Bayes' rule and run experiments
- fixed the bug when using the big LM
- will add more test data
- will test the baseline (no weight) and different weighting methods
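One possible reading of the Bayes check above, assuming the tag LM uses a class-style decomposition, is sketched below; the decomposition and all names are assumptions, not the formula actually under review.

```python
def tag_lm_prob(p_tag_given_hist, p_tag_given_word, p_word, p_tag):
    """Hypothetical class/tag LM decomposition, with P(w|tag) obtained
    via Bayes' rule:
        P(w | h)   = P(tag | h) * P(w | tag)
        P(w | tag) = P(tag | w) * P(w) / P(tag)
    """
    p_word_given_tag = p_tag_given_word * p_word / p_tag
    return p_tag_given_hist * p_word_given_tag
```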
RNN LM
- rnn: code the character LM using Theano
- lstm+rnn: check the lstm-rnnlm code for how to initialize and update the learning rate (hold)
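The character-LM recurrence above can be sketched as one step of a vanilla RNN over one-hot characters (a Theano version would express the same recurrence with theano.scan); vocabulary and hidden sizes here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
V, H = 30, 16                     # toy character vocab and hidden size
Wxh = rng.standard_normal((V, H)) * 0.1
Whh = rng.standard_normal((H, H)) * 0.1
Why = rng.standard_normal((H, V)) * 0.1

def rnn_char_step(x_id, h):
    """One step of a vanilla character RNN LM: update the hidden state
    and return the next-character distribution."""
    x = np.zeros(V); x[x_id] = 1.0            # one-hot character
    h_new = np.tanh(x @ Wxh + h @ Whh)
    logits = h_new @ Why
    p = np.exp(logits - logits.max())
    return h_new, p / p.sum()
```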
W2V based document classification
- some results on the vMF model [1]
- will try the max-method
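Reading "max-method" as dimension-wise max pooling of word vectors (an assumption) against the common mean-pooling baseline, a minimal sketch with nearest-centroid classification:

```python
import numpy as np

def doc_vector(word_vecs, method="mean"):
    """Pool word2vec vectors into one document vector: 'mean' is the
    usual baseline; 'max' pools dimension-wise (assumed 'max-method')."""
    M = np.vstack(word_vecs)
    return M.mean(axis=0) if method == "mean" else M.max(axis=0)

def classify(doc_vec, class_centroids):
    """Assign the class whose centroid has highest cosine similarity."""
    sims = [np.dot(doc_vec, c) / (np.linalg.norm(doc_vec) * np.linalg.norm(c))
            for c in class_centroids]
    return int(np.argmax(sims))
```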
Translation
- v5.0 demo released
- cut the dictionary and use the new segmentation tool
Sparse NN in NLP
- tested the drop-out model; performance improved slightly; more results needed
- test the order feature
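The drop-out model being tested can be sketched as standard inverted dropout on a hidden activation vector (the rate and usage are illustrative, not the experiment's settings):

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training and rescale the survivors, identity at test time."""
    if not train or p_drop == 0.0:
        return h
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask
```

Rescaling at training time keeps the expected activation unchanged, so no scaling is needed at test time.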
online learning
- data is ready; preparing the ACL paper
- modified the ListNet SGD
- finished some tests
- test the results over different time periods
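A generic sketch of ListNet SGD with a linear scorer, for reference; this is the standard top-one formulation, not the modified version mentioned above.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def listnet_top1_loss(w, X, y):
    """ListNet top-one cross-entropy between the target permutation
    distribution softmax(y) and the model distribution softmax(X @ w)."""
    return -np.sum(softmax(y) * np.log(softmax(X @ w)))

def listnet_sgd_step(w, X, y, lr=0.05):
    """One SGD step for a linear scorer on one query list:
    grad = X^T (softmax(X w) - softmax(y))."""
    grad = X.T @ (softmax(X @ w) - softmax(y))
    return w - lr * grad
```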
relation classifier
- modified the drop-out method