ASR:2015-03-16
From cslt Wiki
Latest revision as of 07:27, 18 March 2015
Speech Processing
AM development
Environment
- grid-11 often shuts down automatically, and its computation speed is too slow.
- The GPU is being repaired. --Xuewei
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- tuning parameters on monophone NN
Mic-Array
- reproduce the environment for the Interspeech experiments
- investigate the alpha parameter in Lasso (see the sketch below)
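As a rough illustration of the alpha investigation, here is a minimal Python sketch of sweeping the Lasso regularization weight alpha: larger alpha drives more coefficients to zero. The data is a random placeholder, not the actual mic-array features.

    # Minimal sketch: sweep the Lasso regularization weight alpha.
    # X and y are hypothetical placeholders for the real task data.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.RandomState(0)
    X = rng.randn(200, 50)                            # placeholder design matrix
    y = X[:, :5].sum(axis=1) + 0.1 * rng.randn(200)   # sparse ground truth

    for alpha in [0.001, 0.01, 0.1, 1.0]:
        model = Lasso(alpha=alpha).fit(X, y)
        n_active = np.sum(model.coef_ != 0)
        print("alpha=%-6g  active weights=%d  R^2=%.3f"
              % (alpha, n_active, model.score(X, y)))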
Dropout & Maxout & rectifier
- HOLD
- Need to solve the problem of the learning rate becoming too small.
- 20h small-scale sparse DNN with rectifier. --Mengyuan
- 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning (sketched below). --Mengyuan Zhao
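A minimal sketch of weight-magnitude pruning, assuming the usual recipe of zeroing the smallest-magnitude weights (followed in practice by retraining); the matrix is a random stand-in for an acoustic-model layer.

    # Minimal sketch of weight-magnitude pruning for a sparse DNN layer:
    # zero out the fraction of weights with the smallest absolute value.
    import numpy as np

    def magnitude_prune(W, sparsity=0.8):
        """Return a copy of W with the smallest-|w| entries set to zero."""
        threshold = np.percentile(np.abs(W), sparsity * 100)
        return np.where(np.abs(W) >= threshold, W, 0.0)

    W = np.random.randn(1024, 1024)      # placeholder layer weights
    W_sparse = magnitude_prune(W, sparsity=0.8)
    print("nonzero fraction: %.3f" % (np.count_nonzero(W_sparse) / W.size))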
Convolutive network
- HOLD
- CNN + DNN feature fusion (sketched below)
- reproduce experiments --Yiye
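A minimal sketch of the feature-fusion idea, assuming fusion means concatenating the outputs of a convolutional branch and a fully connected branch before shared classification layers; all shapes and weights below are made up.

    # Minimal sketch of CNN + DNN feature fusion: run two feature
    # extractors in parallel and concatenate their outputs.
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def cnn_features(frames):          # placeholder convolutional branch
        kernel = np.random.randn(8, 3) * 0.1
        conv = np.stack([np.convolve(frames, k, mode="same") for k in kernel])
        return relu(conv).max(axis=1)  # max-pool over time -> 8-dim

    def dnn_features(frames):          # placeholder fully connected branch
        W = np.random.randn(16, frames.size) * 0.1
        return relu(W @ frames)        # 16-dim

    frames = np.random.randn(40)       # one pseudo feature vector
    fused = np.concatenate([cnn_features(frames), dnn_features(frames)])
    print(fused.shape)                 # (24,) -> fed to the joint network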
RNN-DAE (RNN-based deep auto-encoder)
- HOLD -Zhiyong
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
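The actual model is described in the linked CVSS request; as a rough illustration of the general idea, a recurrent denoising auto-encoder maps noisy features through a recurrent encoder and a linear decoder back toward the clean features. The sketch below is a forward pass only, with random placeholder weights.

    # Illustrative forward pass of a recurrent denoising auto-encoder.
    import numpy as np

    def rnn_layer(X, W_in, W_rec):
        h, out = np.zeros(W_rec.shape[0]), []
        for x in X:                    # X: (T, dim) sequence of frames
            h = np.tanh(W_in @ x + W_rec @ h)
            out.append(h)
        return np.array(out)

    T, dim, hid = 20, 40, 64
    clean = np.random.randn(T, dim)
    noisy = clean + 0.3 * np.random.randn(T, dim)

    enc = rnn_layer(noisy, np.random.randn(hid, dim) * 0.1,
                    np.random.randn(hid, hid) * 0.1)
    W_out = np.random.randn(dim, hid) * 0.1
    recon = enc @ W_out.T              # decode back to feature space
    print("reconstruction MSE: %.3f" % np.mean((recon - clean) ** 2))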
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report on HOLD. --Xiangyu Zeng, Shi Yin
- Paper for NCMMSC done
Neural network visualization
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=324
- Technical report done --Mian Wang.
Speaker ID
- DNN-based SID --Yiye
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=327
Ivector based ASR
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
- The smaller the i-vector dimension, the better the performance.
- Augmenting a hidden layer works better than augmenting the input layer (see the sketch below).
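A minimal sketch of the two augmentation variants compared above, assuming "augment" means concatenating the speaker i-vector with either the input features or a hidden layer's activations; all dimensions and weights are placeholders.

    # Minimal sketch of i-vector augmentation at the input vs. a hidden layer.
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    frame = np.random.randn(40)          # acoustic feature vector
    ivec = np.random.randn(50)           # speaker i-vector

    # variant 1: augment the input layer
    x_in = np.concatenate([frame, ivec])

    # variant 2: augment a hidden layer
    W1 = np.random.randn(512, 40) * 0.05
    h1 = relu(W1 @ frame)
    h1_aug = np.concatenate([h1, ivec])  # next layer consumes 512 + 50 dims
    print(x_in.shape, h1_aug.shape)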
Text Processing
LM development
Domain-specific LM
- LM2.X
- train a large LM using the 25w-dict (a 250k-word dictionary). (hanzhenglong/wxx)
- v2.0c: filter out useless words (next week)
- set up the test set for new words (hold)
- prepare the wiki data: entity list.
tag LM
- Tag LM (JT)
- error check
- similar word extension in FST
- repeat the experiment using the same data
RNN LM
- rnn
- the input and output are word embeddings, with some token information such as NER tags added (see the sketch after this list).
- map words to characters and train the LM.
- lstm+rnn
- check how the lstm-rnnlm code initializes and updates the learning rate (hold)
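A minimal sketch of the input construction described above, assuming the NER information enters as a one-hot tag vector concatenated with the word embedding before the recurrent layer; the vocabulary, tag set, and sizes are invented for illustration.

    # Minimal sketch: word embedding + one-hot NER tag as RNN LM input.
    import numpy as np

    vocab = {"<s>": 0, "beijing": 1, "is": 2, "nice": 3}
    ner_tags = {"O": 0, "LOC": 1, "PER": 2}
    emb_dim, hid = 8, 16

    E = np.random.randn(len(vocab), emb_dim) * 0.1   # word embeddings

    def input_vector(word, tag):
        onehot = np.zeros(len(ner_tags))
        onehot[ner_tags[tag]] = 1.0
        return np.concatenate([E[vocab[word]], onehot])

    W_in = np.random.randn(hid, emb_dim + len(ner_tags)) * 0.1
    W_rec = np.random.randn(hid, hid) * 0.1

    h = np.zeros(hid)
    for word, tag in [("<s>", "O"), ("beijing", "LOC"), ("is", "O")]:
        h = np.tanh(W_in @ input_vector(word, tag) + W_rec @ h)
    print(h.shape)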
Word2Vector
W2V based doc classification
- data preparation (hold; see the sketch below)
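A minimal sketch of one common w2v-based document classification setup: average the document's word vectors and train a linear classifier on the result. The embeddings and labels below are random placeholders; in practice the vectors would come from a trained word2vec model.

    # Minimal sketch: averaged word2vec vectors + linear classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    dim = 100
    word_vecs = {w: np.random.randn(dim) for w in ["good", "bad", "news", "sport"]}

    def doc_vector(words):
        vecs = [word_vecs[w] for w in words if w in word_vecs]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    docs = [["good", "news"], ["bad", "sport"]] * 25
    labels = [0, 1] * 25
    X = np.array([doc_vector(d) for d in docs])
    clf = LogisticRegression().fit(X, labels)
    print("train accuracy: %.2f" % clf.score(X, labels))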
Knowledge vector
- make a report on Monday
Translation
- v5.0 demo released
- trim the dictionary and use the new segmentation tool
Sparse NN in NLP
- prepare the ACL submission
- check the code to find the problem.
- increase the dimension
- use different test sets.
QA
improve fuzzy match
- add synonym similarity using the MERT-4 method (hold; see the sketch below)
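A minimal sketch of one way synonym similarity could enter fuzzy matching: interpolate an edit-distance score with a synonym-aware word-overlap score. The interpolation weights here are fixed, but they are the kind of parameters a MERT-style tuning step would optimize; the synonym table is a made-up placeholder.

    # Minimal sketch: fuzzy question match with a synonym component.
    import difflib

    SYNONYMS = {"buy": {"purchase"}, "cheap": {"inexpensive"}}

    def syn_overlap(q1, q2):
        w1, w2 = q1.split(), q2.split()
        hits = sum(1 for a in w1 for b in w2
                   if a == b or b in SYNONYMS.get(a, ()) or a in SYNONYMS.get(b, ()))
        return hits / max(len(w1), len(w2))

    def fuzzy_score(q1, q2, w_edit=0.6, w_syn=0.4):
        edit_sim = difflib.SequenceMatcher(None, q1, q2).ratio()
        return w_edit * edit_sim + w_syn * syn_overlap(q1, q2)

    print(fuzzy_score("where to buy a cheap phone",
                      "where to purchase an inexpensive phone"))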
online learning
- data is ready; prepare the ACL paper
- prepare the SogouQ data and test it with the current online learning method (see the sketch below)
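A minimal sketch of a generic online-learning loop, assuming per-example SGD updates to a logistic-regression model so that new query data can be folded in without retraining from scratch; the stream below is simulated with a hidden labeling rule.

    # Minimal sketch: online logistic-regression updates on a data stream.
    import numpy as np

    dim, lr = 20, 0.1
    w = np.zeros(dim)

    def sgd_step(w, x, y, lr):
        """One logistic-regression SGD update on a single example."""
        p = 1.0 / (1.0 + np.exp(-w @ x))
        return w + lr * (y - p) * x

    rng = np.random.RandomState(0)
    for _ in range(1000):                # simulated incoming stream
        x = rng.randn(dim)
        y = float(x[0] + x[1] > 0)       # hidden rule standing in for labels
        w = sgd_step(w, x, y, lr)
    print("learned weights on first two dims:", w[:2].round(2))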
framework
- extract the module
- extract the context module, search module, entity recognition module, and common module.
- define the inference in the different modules
- composite module
leftover problems
- the new intern will install SEMPRE