ASR:2015-03-02
Speech Processing
AM development
Environment
- grid-11 often shuts down automatically; computation speed is too slow.
- buy a new 800 W power supply -- Xuewei
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- RNN based on one-state triphones?
Mic-Array
- the technical report is done.
- reproduce the experimental environment for Interspeech
Dropout & Maxout & rectifier
- HOLD
- Need to solve the problem of the learning rate becoming too small
- 20h small-scale sparse DNN with rectifier. --Chao Liu
- 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning (see the pruning sketch below). --Mengyuan Zhao
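For reference, a minimal numpy sketch of weight-magnitude pruning as mentioned in the sparse-DNN item above; the function name, the 50% sparsity target, and the toy layer are illustrative assumptions, not the actual training recipe.

  import numpy as np

  def magnitude_prune(weights, sparsity=0.5):
      """Zero out the smallest-magnitude weights so roughly `sparsity` of them are pruned."""
      flat = np.abs(weights).ravel()
      k = int(sparsity * flat.size)
      if k == 0:
          return weights.copy(), np.ones_like(weights, dtype=bool)
      threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
      mask = np.abs(weights) > threshold            # keep only weights above the threshold
      return weights * mask, mask

  # toy usage: prune a random 4x5 layer to ~50% sparsity
  W = np.random.randn(4, 5)
  W_sparse, mask = magnitude_prune(W, sparsity=0.5)
  print("kept fraction:", mask.mean())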
Convolutive network
- Convolutive network (DAE)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
- Technical report in progress: Mian Wang, Yiye Lin, Shi Yin, Mengyuan Zhao
- reproduce the experiments -- Yiye
DNN-DAE (Deep Auto-Encoder DNN)
- HOLD
- Technical report to draft: Xiangyu Zeng, Shi Yin, Mengyuan Zhao, and Zhiyong Zhang (a generic DAE sketch follows this section)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=318
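This is not the group's DNN-DAE implementation; it is only a minimal numpy sketch of the denoising-autoencoder idea behind it: corrupt the input features, then train an encoder/decoder to reconstruct the clean ones. Layer sizes, noise level, and learning rate are made-up values.

  import numpy as np

  rng = np.random.default_rng(0)

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  # toy data: 200 "clean" feature vectors of dimension 20
  X = rng.standard_normal((200, 20))

  n_in, n_hid, lr, noise_std = 20, 10, 0.01, 0.3
  W1 = rng.standard_normal((n_in, n_hid)) * 0.1; b1 = np.zeros(n_hid)
  W2 = rng.standard_normal((n_hid, n_in)) * 0.1; b2 = np.zeros(n_in)

  for epoch in range(50):
      X_noisy = X + noise_std * rng.standard_normal(X.shape)  # corrupt the input
      H = sigmoid(X_noisy @ W1 + b1)                          # encode
      X_rec = H @ W2 + b2                                     # decode (linear output)
      err = X_rec - X                                         # reconstruct the *clean* input
      # backprop for the squared-error loss
      dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
      dH = err @ W2.T * H * (1 - H)
      dW1 = X_noisy.T @ dH / len(X); db1 = dH.mean(axis=0)
      W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

  print("final reconstruction MSE:", float((err ** 2).mean()))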
RNN-DAE (Deep Auto-Encoder RNN)
- HOLD
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
VAD
- DAE
- Technical report done. -- Shi Yin
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report to draft: Xiangyu Zeng, Shi Yin
- Prepare for NCMSSC
Confidence
- HOLD
- Reproduce the experiments on the Fisher dataset (a generic confidence sketch follows this list).
- Use the Fisher DNN model to decode the all-WSJ dataset
- prepare scoring for the Puqiang data
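As background for the confidence items above, a hedged sketch of one common confidence measure, the geometric mean of per-frame posteriors over a word's frames; this is a generic illustration, not the scoring actually used for the Fisher/WSJ/Puqiang setups.

  import numpy as np

  def word_confidence(frame_posteriors):
      """Geometric mean of the per-frame posteriors assigned to the decoded states of one word."""
      p = np.clip(np.asarray(frame_posteriors, dtype=float), 1e-10, 1.0)
      return float(np.exp(np.log(p).mean()))

  # toy usage: a word spanning 6 frames
  print(word_confidence([0.9, 0.8, 0.95, 0.7, 0.85, 0.9]))  # ~0.85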
Neural network visualization
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=324
- Technical report in progress: Mian Wang.
Speaker ID
Text Processing
LM development
Domain specific LM
- LM2.X
- mix the sougou2T LM with KN discounting (done)
- train a large LM using the 25w (250k-word) dict (hanzhenglong/wxx)
- v2.0a: adjust the mixture weights; a smaller weight on the transcription data works better (done; see the interpolation sketch after this list)
- v2.0b: add the v1.0 vocab (done)
- v2.0c: filter out useless words (next week)
- set up the test set for new words (hold)
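To make the weight-adjustment items above concrete, a minimal sketch of linear LM interpolation with the mixture weight chosen by dev-set perplexity; the toy unigram tables stand in for the real sougou2T and transcription LMs, and their probabilities are invented.

  import math

  # toy unigram "LMs": word -> probability (would be n-gram LMs in practice)
  lm_general = {"today": 0.2, "weather": 0.1, "is": 0.4, "good": 0.3}
  lm_transcript = {"today": 0.1, "weather": 0.3, "is": 0.3, "good": 0.3}

  def interp_prob(word, lam):
      """Linear interpolation: lam * P_general + (1 - lam) * P_transcript."""
      return lam * lm_general.get(word, 1e-6) + (1 - lam) * lm_transcript.get(word, 1e-6)

  def perplexity(words, lam):
      logp = sum(math.log(interp_prob(w, lam)) for w in words)
      return math.exp(-logp / len(words))

  dev = ["today", "weather", "is", "good"]
  # pick the interpolation weight with the lowest dev-set perplexity
  best = min((perplexity(dev, l / 10), l / 10) for l in range(11))
  print("best weight on the general LM:", best[1], "ppl:", round(best[0], 3))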
tag LM
- Tag LM (a toy tag-LM scoring sketch follows this list)
- add 3-class tags and test
- similar-word extension in FST
- improve the keyword weight in G; the result is good for keyword recognition
- ready to deal with the English-Chinese case
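An assumed illustration of the tag-LM idea (not the actual G/FST implementation): score a tagged word as P(tag | history) * P(word | tag), so a new word of a known class reuses the tag's n-gram statistics. All probabilities below are toy values.

  # toy class-based (tag) LM: the n-gram part scores tags, a lexicon scores words within a tag
  p_tag_given_hist = {("call",): {"<PERSON>": 0.5, "the": 0.3}}      # P(tag | history)
  p_word_given_tag = {"<PERSON>": {"zhang san": 0.4, "li si": 0.6}}  # P(word | tag)

  def tag_lm_prob(history, word, tag):
      """P(word | history) under the tag LM: P(tag | history) * P(word | tag)."""
      return p_tag_given_hist.get(history, {}).get(tag, 1e-6) * \
             p_word_given_tag.get(tag, {}).get(word, 1e-6)

  # a name never seen in the training text still gets a sensible score via its tag
  print(tag_lm_prob(("call",), "li si", "<PERSON>"))  # 0.5 * 0.6 = 0.3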
RNN LM
- RNN
- test the WER of the RNNLM on Chinese data from jietong-data
- generate an n-gram model from the RNNLM and test the PPL with different amounts of generated text (see the sketch after this list)
- LSTM+RNN
- check the LSTM-RNNLM code for how the learning rate is initialized and updated (hold)
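A hedged sketch of the "generate an n-gram model from the RNNLM" item: sample text from a language model, estimate n-gram counts from the samples, and measure perplexity on held-out text. The toy Markov sampler stands in for the real RNNLM, and the add-one smoothing is only illustrative.

  import math
  import random
  from collections import Counter

  random.seed(0)

  # stand-in for an RNNLM sampler: a toy Markov chain over a tiny vocabulary
  VOCAB = ["<s>", "we", "test", "the", "model", "</s>"]
  NEXT = {"<s>": ["we", "the"], "we": ["test"], "test": ["the", "model"],
          "the": ["model"], "model": ["</s>"]}

  def sample_sentence():
      w, sent = "<s>", ["<s>"]
      while w != "</s>":
          w = random.choice(NEXT[w])
          sent.append(w)
      return sent

  # 1) generate a corpus from the "RNNLM"
  corpus = [sample_sentence() for _ in range(1000)]

  # 2) estimate a bigram model with add-one smoothing from the sampled text
  bigram, unigram = Counter(), Counter()
  for sent in corpus:
      for a, b in zip(sent, sent[1:]):
          bigram[(a, b)] += 1
          unigram[a] += 1

  def p_bigram(a, b):
      return (bigram[(a, b)] + 1) / (unigram[a] + len(VOCAB))

  # 3) measure perplexity of the derived n-gram model on held-out text
  held_out = [["<s>", "we", "test", "the", "model", "</s>"]]
  logp, n = 0.0, 0
  for sent in held_out:
      for a, b in zip(sent, sent[1:]):
          logp += math.log(p_bigram(a, b))
          n += 1
  print("held-out PPL of the sampled-text bigram model:", round(math.exp(-logp / n), 3))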
Word2Vector
W2V based doc classification
- data preparation (a toy averaged-word-vector classification sketch follows).
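As an assumed framing of W2V-based doc classification: average the word vectors of a document and feed the result to a linear classifier. The toy embeddings, labels, and documents below are invented; the real setup would use trained word2vec vectors.

  import numpy as np

  # toy word vectors (the real ones would come from a trained word2vec model)
  rng = np.random.default_rng(0)
  vocab = ["stock", "market", "price", "football", "match", "goal"]
  emb = {w: rng.standard_normal(8) for w in vocab}

  def doc_vector(words):
      """Average the embeddings of known words; zero vector if none are known."""
      vecs = [emb[w] for w in words if w in emb]
      return np.mean(vecs, axis=0) if vecs else np.zeros(8)

  # toy training set: finance (0) vs. sports (1)
  docs = [(["stock", "market", "price"], 0), (["market", "price"], 0),
          (["football", "match"], 1), (["goal", "match", "football"], 1)]
  X = np.stack([doc_vector(d) for d, _ in docs])
  y = np.array([lab for _, lab in docs], dtype=float)

  # a tiny logistic-regression classifier trained by gradient descent
  w, b = np.zeros(8), 0.0
  for _ in range(500):
      p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
      grad = p - y
      w -= 0.5 * X.T @ grad / len(y)
      b -= 0.5 * grad.mean()

  test = doc_vector(["football", "goal"])
  print("P(sports):", float(1.0 / (1.0 + np.exp(-(test @ w + b)))))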
Knowledge vector
- paper is done and submitted to ACL
Character to word
- Character-to-word conversion (hold)
Word vector online learning
- prepare the ACL submission
Translation
- v5.0 demo released
- cut down the dict and use the new segmentation tool (see the segmentation sketch below)
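The new segmentation tool itself is not described here; as an assumed illustration of dictionary-based segmentation with a cut-down dict, a forward maximum-matching sketch:

  def fmm_segment(text, dictionary, max_len=4):
      """Forward maximum matching: greedily take the longest dictionary word at each position."""
      words, i = [], 0
      while i < len(text):
          for j in range(min(max_len, len(text) - i), 0, -1):
              cand = text[i:i + j]
              if j == 1 or cand in dictionary:   # fall back to a single character
                  words.append(cand)
                  i += j
                  break
      return words

  # toy usage with a tiny dictionary
  dict_words = {"今天", "天气", "很好"}
  print(fmm_segment("今天天气很好", dict_words))  # ['今天', '天气', '很好']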
Sparse NN in NLP
- write a technical report (Wednesday) and give a presentation.
- prepare the ACL submission
QA
improve fuzzy match
- add synonym similarity using the MERT-4 method (hold)
improve Lucene search
- commit the code to Rong Liu for check-in.
- online learning to rank
- the tool is OK and ready to test (a pairwise-ranking sketch follows this list)
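A minimal sketch of online pairwise learning to rank in the spirit of the item above; the three toy features and the perceptron-style update are assumptions, not the actual tool.

  import numpy as np

  w = np.zeros(3)  # weights over 3 toy ranking features (e.g. BM25, term overlap, click rate)

  def update(w, x_better, x_worse, lr=0.1):
      """Perceptron-style pairwise update: push the better result above the worse one."""
      if w @ (x_better - x_worse) <= 0:        # mis-ordered pair
          w = w + lr * (x_better - x_worse)
      return w

  # toy click feedback: each pair is (clicked result features, skipped result features)
  pairs = [(np.array([0.9, 0.8, 0.7]), np.array([0.2, 0.1, 0.3])),
           (np.array([0.6, 0.9, 0.5]), np.array([0.4, 0.3, 0.2]))]
  for x_pos, x_neg in pairs * 10:              # replay the feedback a few times
      w = update(w, x_pos, x_neg)
  print("learned feature weights:", w)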
online learning
- a simple version of the online-learning part of QA.
context framework
- code for demo
query normalization
- use NER to normalize words (see the normalization sketch below)
- the new intern will install SEMPRE
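Finally, a rough sketch of the NER-based query-normalization idea: replace recognized entity mentions with canonical forms before matching. The toy lexicon and query are invented; the real system would use an NER model rather than string lookup.

  # toy "NER" lexicon: surface form -> (entity type, canonical form)
  ENTITIES = {
      "beijing": ("CITY", "Beijing"),
      "peking": ("CITY", "Beijing"),
      "prof. wang": ("PERSON", "Wang"),
  }

  def normalize_query(query):
      """Replace recognized entity mentions with canonical forms, longest match first."""
      q = query.lower()
      for surface, (etype, canon) in sorted(ENTITIES.items(), key=lambda kv: -len(kv[0])):
          if surface in q:
              q = q.replace(surface, canon)
      return q

  print(normalize_query("what is the weather in peking"))
  # -> "what is the weather in Beijing"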