Difference between revisions of "ASR:2015-01-26"

From cslt Wiki
== Speech Processing ==

=== AM development ===

==== Environment ====
* The gpu760 card on grid-14 may be faulty; it is being repaired.
* grid-11 often shuts down automatically; its computation speed is too slow.
* The CPU fans on grid-2 and grid-10 have been replaced.
* Add one hard disk to the cuda.q machines.
 
==== Sparse DNN ====
* Details at http://liuc.cslt.org/pages/sparse.html
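The sparse-DNN experiments are based on weight-magnitude pruning (see the 20h runs under Dropout & Maxout & rectifier below). A minimal sketch of that technique, with illustrative names and numbers, not the project's actual code:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes.

    Ties at the threshold are pruned together, so the achieved sparsity can
    slightly exceed the requested one.
    """
    w = weights.copy()
    k = int(round(sparsity * w.size))
    if k == 0:
        return w
    # Threshold = k-th smallest absolute value over the whole matrix.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    w[np.abs(w) <= threshold] = 0.0
    return w

w = np.array([[0.05, -1.2, 0.3],
              [-0.01, 0.8, -0.4]])
sparse_w = prune_by_magnitude(w, 0.5)   # zeros out 0.05, 0.3 and -0.01
```

In a real setup the pruned weights would be masked (held at zero) during continued training.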
  
 
==== RNN AM ====
* Trying Microsoft's toolkit. (+)
* Details at http://liuc.cslt.org/pages/rnnam.html
  
==== Dropout & Maxout & rectifier ====
* Dropout
:* Need to solve the too-small learning-rate problem
:* 20h small-scale sparse DNN with rectifier. --Chao Liu
* MaxOut && P-norm (+)
:* 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning. --Mengyuan Zhao
:** Add one normalization layer after the p-norm layer
:** Add an L2-norm upper bound
:* hold
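The two fixes listed for the learning-rate problem — a normalization layer after the p-norm layer, and an L2-norm upper bound on the weights — can be sketched as follows. This is an illustrative numpy version in the style of p-norm networks; all function names, group sizes, and bounds are invented:

```python
import numpy as np

def pnorm_layer(x, p=2.0, group_size=2):
    """P-norm pooling: each output is the p-norm of a group of inputs."""
    groups = x.reshape(-1, group_size)
    return (np.abs(groups) ** p).sum(axis=1) ** (1.0 / p)

def renorm(y, target_rms=1.0, eps=1e-8):
    """Normalization layer after the p-norm layer: fixes the RMS of the
    activations so later layers see inputs at a stable scale."""
    rms = np.sqrt((y ** 2).mean()) + eps
    return y * (target_rms / rms)

def apply_l2_upper_bound(W, max_norm=2.0):
    """L2-norm upper bound (max-norm): after a gradient update, rescale any
    weight column whose L2 norm exceeds `max_norm`."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-8))
    return W * scale

y = pnorm_layer(np.array([3.0, 4.0, 1.0, 0.0]))      # -> [5.0, 1.0]
y_n = renorm(y)
W_c = apply_l2_upper_bound(np.array([[3.0], [4.0]]))  # column norm 5 -> 2
```

Both tricks bound the activation and weight scales, which is what lets a larger learning rate stay stable.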
  
 
==== Convolutive network ====
* Convolutive network (DAE)
:* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
:* To test real-environment echo. (+)
:* Technical report to draft: Mian Wang, Yiye Lin, Shi Yin, Mengyuan Zhao

==== DNN-DAE (Deep Auto-Encoder DNN) ====
* Technical report to draft: Xiangyu Zeng, Shi Yin, Mengyuan Zhao and Zhiyong Zhang
* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=318

==== RNN-DAE (RNN-based Deep Auto-Encoder) ====
* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
* HOLD
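The DAE idea behind the DNN-DAE/RNN-DAE items: train an auto-encoder to reconstruct clean features from noise-corrupted input, then use it as a front-end (or take its hidden layer as features). A toy numpy sketch with invented sizes and synthetic data, not the actual DNN/RNN models from the CVSS requests:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean "features" and additively noise-corrupted versions.
clean = rng.normal(size=(200, 8))
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# One-hidden-layer auto-encoder trained to map noisy -> clean.
n_in, n_hid = 8, 16
W1 = 0.1 * rng.normal(size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = 0.1 * rng.normal(size=(n_hid, n_in)); b2 = np.zeros(n_in)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for epoch in range(50):
    h, out = forward(noisy)
    err = out - clean                      # reconstruction error vs. clean target
    losses.append(float((err ** 2).mean()))
    # Plain backprop through the two layers.
    g_out = 2 * err / err.size
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    gW1 = noisy.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The denoising target (clean rather than noisy input) is what distinguishes a DAE from a plain auto-encoder.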
  
 
==== VAD ====
* Harmonics and Teager energy features
* DAE
* MPE training
* Technical report --Shi Yin
* Test the harmonic feature only
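The Teager energy feature above has a simple discrete form, psi[n] = x[n]^2 - x[n-1]*x[n+1]; for a pure tone A*cos(w*n) it equals the constant A^2*sin^2(w), which is what makes it useful for spotting voiced (harmonic) frames. A small sketch:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a unit-amplitude tone cos(w * n) the operator returns sin(w)^2 exactly.
n = np.arange(400)
w = 0.2
psi = teager_energy(np.cos(w * n))
```

A frame-level VAD feature would average psi over each frame and compare it against a noise floor; the frame logic is omitted here.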
  
 
==== Speech rate training ====
 
==== Domain specific LM ====
:* Mix the sougou2T-lm (KN discounting); continuing
:* Train a large LM using the 25w dict (hanzhenglong/wxx)
::* Find the problem in the ASR result
::* Add more data, including POI and document information
::* Add the v1.0 vocab and filter out useless words
::* Set up the test set
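LM mixing such as "mix the sougou2T-lm" is usually done by linearly interpolating the component models and choosing the weight that minimizes dev-set perplexity; the exact recipe is not stated here, so the sketch below only shows the interpolation and PPL computation, with illustrative probabilities and weight:

```python
import math

def interpolate(p_domain, p_general, lam):
    """Linearly interpolate two LM probabilities for the same word/context."""
    return lam * p_domain + (1.0 - lam) * p_general

def perplexity(word_probs):
    """PPL of a held-out text given its per-word probabilities."""
    entropy = -sum(math.log2(p) for p in word_probs) / len(word_probs)
    return 2.0 ** entropy

# Toy per-word probabilities from two models on a 2-word dev text.
mixed = [interpolate(0.30, 0.10, 0.6), interpolate(0.05, 0.20, 0.6)]
ppl = perplexity(mixed)
```

In practice the weight `lam` is swept (or EM-estimated) on held-out text, and the best-PPL value is kept.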
  
 
==== tag LM ====
* Tag LM
:* The tag probability should be tested with the added weight (hanzhenglong) and handed over to hanzhenglong ("this month")
:* Run a tag demo ('''this week''')
* Paper
:* Submit the paper this week.
* Similar-word extension in FST
:* Find similar words using word2vec; the word vectors are being trained.
:* Set the weight for each word.
:* Set up a proper test set.
:* Write a draft of the paper.
:* Result: 16.32 -> 10.23
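For the similar-word extension, candidates and their weights can come from cosine similarity between word2vec vectors, as the bullets above describe. A sketch with tiny hand-made vectors standing in for the trained model; names and dimensions are illustrative:

```python
import numpy as np

# Toy word vectors; the real setup uses vectors from word2vec training.
vecs = {
    "beijing":  np.array([0.9, 0.1, 0.0]),
    "shanghai": np.array([0.8, 0.2, 0.1]),
    "apple":    np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similar_words(word, topn=2):
    """Rank other words by cosine similarity; the score can set the FST arc weight."""
    scores = {w: cosine(vecs[word], v) for w, v in vecs.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]

best, score = similar_words("beijing")[0]
```

Each extension word would be added as an alternative FST arc, with a cost derived from `score`.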
 
 
==== Knowledge vector ====
* Knowledge vector
:* Make a proper test set.
:* Run on the big data.
:* Use text information and train word vectors together.
:* Prepare the paper.
:* Modify the objective function and the training process.
:* Try to train on the whole data set.
* Result
:* 0.745 -> 0.79, using YAGO for training.
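The report does not spell out the modified objective. One common choice for training knowledge vectors on triples such as YAGO's is a translation-style (TransE-like) objective, where a relation vector translates the head entity toward the tail; the toy sketch below uses that assumption, with made-up triples and no negative sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy (head, relation, tail) triples, YAGO-style.
entities = ["Beijing", "China", "Paris", "France"]
relations = ["capital_of"]
triples = [("Beijing", "capital_of", "China"),
           ("Paris", "capital_of", "France")]

dim = 8
E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

def score(h, r, t):
    """Translation score ||h + r - t||; smaller means more plausible."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# A few SGD steps pushing h + r toward t for the observed triples.
lr = 0.1
for _ in range(200):
    for h, r, t in triples:
        g = 2 * (E[h] + R[r] - E[t])   # gradient of the squared distance
        E[h] -= lr * g; R[r] -= lr * g; E[t] += lr * g

good = score("Beijing", "capital_of", "China")   # observed triple
bad = score("Beijing", "capital_of", "France")   # unobserved triple
```

A real run adds negative sampling and a margin loss so that unobserved triples are explicitly pushed away.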
 
 
==== Character to word ====
* Character-to-word conversion (hold)
  
 
=== Sparse NN in NLP ===
* Review related papers
* Write a technical report
  
 

Latest revision as of 02:16, 30 January 2015 (Fri)




Confidence

  • Reproduce the experiments on the Fisher dataset.
  • Use the Fisher DNN model to decode the all-wsj dataset.
  • Prepare scoring for the puqiang data.
  • HOLD

Neural network visualization

Speaker ID

Language ID

Voice Conversion

  • Yiye is reading materials
  • HOLD


Text Processing

LM development

Domain specific LM

  • LM2.1


RNN LM

  • rnn
  • Test the WER of the RNNLM on Chinese data from jietong-data.
  • Generate an n-gram model from the RNNLM and test the PPL with different-sized texts.
  • lstm+rnn
  • Check how the lstm-rnnlm code initializes and updates the learning rate. (hold)
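"Generate an n-gram model from the RNNLM" typically means sampling a large text from the RNNLM and estimating an n-gram model on it. A sketch of the counting and PPL-testing half, with a tiny hand-written corpus standing in for RNNLM-generated text and add-one smoothing chosen purely for simplicity:

```python
import math
from collections import Counter

# Stand-in for text sampled from the RNNLM.
corpus = "a b a b a c a b".split()

# Estimate a bigram model from the generated text (add-one smoothing
# over a closed vocabulary).
vocab = sorted(set(corpus))
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def p(w2, w1):
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

def ppl(words):
    logp = sum(math.log2(p(w2, w1)) for w1, w2 in zip(words, words[1:]))
    return 2.0 ** (-logp / (len(words) - 1))

test_ppl = ppl("a b a c".split())
```

Testing PPL "with different size txt" then amounts to regenerating larger samples and repeating the estimate.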

Word2Vector

W2V based doc classification

  • Data preparation.
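A common baseline for W2V-based doc classification is to average word vectors into a document vector and classify by nearest class centroid. A toy sketch with hand-made 2-D vectors standing in for trained word2vec output; all words, labels, and dimensions are illustrative:

```python
import numpy as np

# Toy word vectors; the real setup uses vectors from word2vec training.
vecs = {"good": np.array([1.0, 0.0]), "great": np.array([0.9, 0.1]),
        "bad": np.array([0.0, 1.0]), "awful": np.array([0.1, 0.9])}

def doc_vec(words):
    """Represent a document as the average of its word vectors."""
    return np.mean([vecs[w] for w in words if w in vecs], axis=0)

# One labeled document per class is enough for a nearest-centroid sketch.
centroids = {"pos": doc_vec(["good", "great"]),
             "neg": doc_vec(["bad", "awful"])}

def classify(words):
    d = doc_vec(words)
    return min(centroids, key=lambda c: np.linalg.norm(d - centroids[c]))

label = classify(["great", "good", "awful"])
```

With real data the centroid step would be replaced by a trained classifier over the averaged vectors.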



Translation

  • v5.0 demo released.
  • Cut down the dictionary and use the new segmentation tool.


QA

improve fuzzy match

  • Add synonym similarity using the MERT-4 method. (hold)

improve lucene search

  • Add more features to improve search.
  • POS, NER, tf, idf, etc.
  • Extract more lexical, syntactic, and semantic features to improve re-ranking performance.
  • Use sentence vectors.
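One concrete instance of the tf/idf features listed above: the tf-idf cosine between the question and each candidate answer, usable as a single re-ranking feature alongside the POS/NER ones. Toy documents and an illustrative idf weighting:

```python
import math
from collections import Counter

docs = ["what is the capital of france",      # the question
        "paris is the capital of france",     # candidate 1
        "apples are a kind of fruit"]         # candidate 2
tokenized = [d.split() for d in docs]

def idf(term):
    """Smoothed inverse document frequency over the toy collection."""
    df = sum(term in d for d in tokenized)
    return math.log(len(tokenized) / (1 + df)) + 1.0

def tfidf(tokens):
    tf = Counter(tokens)
    return {t: c * idf(t) for t, c in tf.items()}

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

q = tfidf(tokenized[0])
scores = [cosine(q, tfidf(d)) for d in tokenized[1:]]
```

In the re-ranker, `scores` would be one column of the feature matrix fed to the ranking model.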

context framework

  • Code for organization.
  • Change to a knowledge graph.

query normalization

  • Use NER to normalize words.
  • The new intern will install SEMPRE.