Difference between revisions of "ASR:2015-05-11"
From cslt Wiki
Latest revision as of 01:24, 18 May 2015 (Mon)
Speech Processing
AM development
Environment
- grid-15 often does not work
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- Test monophone on RNN using dark-knowledge --Chao Liu
- run using wsj,MPE --Chao Liu
- run bi-directon --Chao Liu
- modify code --Zhiyuan
Mic-Array
- Change the prediction from fbank to spectrum features
- investigate the alpha parameter in the time domain and the frequency domain
- ALPHA>=0, using data generated by reverber toolkit
- consider theta
- compute EER with kaldi
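Kaldi computes the equal error rate from lists of target and nontarget trial scores. For reference, a minimal NumPy sketch of the same computation via a threshold sweep (the function name and tie handling are illustrative, not Kaldi's implementation):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: the operating point where the false-reject rate
    (targets scored below threshold) equals the false-accept rate
    (nontargets scored above threshold)."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)
    labels = labels[order]
    # As the threshold sweeps upward, FRR rises while FAR falls.
    frr = np.cumsum(labels) / labels.sum()
    far = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum()
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2.0
```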
RNN-DAE (RNN-based Deep Auto-Encoder)
- HOLD --Zhiyong Zhang
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
Speaker ID
Ivector&Dvector based ASR
- hold --Tian Lan
- Cluster the speakers into speaker classes, then use the distance or the posterior probability as the metric
- Directly use the dark-knowledge strategy for i-vector training.
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?step=view_request&cvssid=340
- The smaller the i-vector dimension, the better the performance
- Augmenting the hidden layer is better than augmenting the input layer
- train on WSJ (test sets: dev93 + eval92)
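The "distance" metric above is not specified; a common choice for d-vector speaker scoring is average cosine similarity between a test vector and a speaker's enrollment vectors. A minimal sketch under that assumption (the function and variable names are illustrative, not the group's code):

```python
import numpy as np

def cosine_score(enroll_vecs, test_vec):
    """Average cosine similarity between one test d-vector and a speaker's
    enrollment d-vectors; a higher score suggests the same speaker."""
    enroll = enroll_vecs / np.linalg.norm(enroll_vecs, axis=1, keepdims=True)
    test = test_vec / np.linalg.norm(test_vec)
    # Dot products of unit vectors are cosine similarities; average them.
    return float((enroll @ test).mean())
```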
Dark knowledge
- Ensemble: use the 100h dataset to construct different structures -- Mengyuan
- adaptation for Chinglish under investigation -- Mengyuan Zhao
- Try to improve the Chinglish performance as much as possible
- unsupervised training with WSJ contributes to the Aurora4 model -- Xiangyu Zeng
- test large database with AMIDA
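"Dark knowledge" here refers to training a student network on the softened posteriors of a teacher network (Hinton et al.'s knowledge distillation). A minimal NumPy sketch of the softened-target loss; the temperature T and mixing weight alpha are illustrative hyper-parameters, not values from the experiments above:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives a flatter distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Mix cross-entropy against the teacher's softened posteriors
    (the 'dark knowledge') with ordinary hard-label cross-entropy."""
    soft_targets = softmax(teacher_logits, T)
    log_soft_student = np.log(softmax(student_logits, T))
    soft_ce = -(soft_targets * log_soft_student).sum()
    hard_ce = -np.log(softmax(student_logits)[hard_label])
    return alpha * soft_ce + (1 - alpha) * hard_ce
```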
bilingual recognition
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=359 --Zhiyuan Tang and Mengyuan
Text Processing
RNN LM
- character-level RNN LM (hold)
- lstm+rnn
- check the lstm-rnnlm code for how to initialize and update the learning rate (hold)
W2V based document classification
- write a technical report on document classification using CNN -- Yiqiao
- adapt the CNN to address the low-resource problem
Translation
- similar-pair method on English words using a translation model
- result: WER dropped from 70% to 50% on top-1
- change the AM model
Order representation
- modify the objective function
- sub-sampling method to handle low-frequency words
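The note's "sup-sampling" is presumably word2vec-style sub-sampling: frequent words are randomly discarded during training, so low-frequency words receive relatively more weight updates. A minimal sketch of the keep probability; the threshold t = 1e-4 is word2vec's default, assumed here rather than taken from the experiments:

```python
import numpy as np

def keep_prob(freq, t=1e-4):
    """Probability of keeping one occurrence of a word with unigram
    frequency `freq`; rare words (freq <= t) are always kept."""
    return np.minimum(1.0, np.sqrt(t / freq))
```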
binary vector
Stochastic ListNet
- using sampling method and test
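ListNet minimizes the cross-entropy between the top-one probabilities induced by the ground-truth relevance and by the model scores; the "sampling method" above plausibly approximates this loss on a random subset of each list. A minimal sketch under that assumption (the subset size k and all names are illustrative, not the group's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def listnet_top1_loss(scores, relevance):
    """ListNet: cross-entropy between the top-one probabilities of the
    ground-truth relevance and of the model scores."""
    p_true = softmax(relevance)
    p_model = softmax(scores)
    return -(p_true * np.log(p_model)).sum()

def stochastic_listnet_loss(scores, relevance, k=8):
    """Sampled variant: evaluate the top-one loss on a random subset of
    k items, reducing the cost on long lists."""
    idx = rng.choice(len(scores), size=min(k, len(scores)), replace=False)
    return listnet_top1_loss(scores[idx], relevance[idx])
```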
relation classifier
- tested the bidirectional RNN (B-RNN) and obtained a small improvement
plan to do
- combine LDA with a neural network