Difference between revisions of "ASR:2015-03-02"

From cslt Wiki

(3 intermediate revisions by 2 users not shown)
  
 
==Speech Processing==

===AM development===

==== Environment ====
* grid-11 often shuts down automatically; its computation speed is too slow.
* buy a new 800 W power supply -- Xuewei
 
==== RNN AM ====
* details at http://liuc.cslt.org/pages/rnnam.html
* triphone one-state-based RNN?
 
==== Mic-Array ====
* the technical report is done.
* reproduce the environment for the Interspeech experiments.
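The report itself is not reproduced here; as background for the mic-array work, a common front end is delay-and-sum beamforming. A minimal sketch, assuming per-channel integer sample delays have already been estimated (e.g. by cross-correlation), not the report's actual method:

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Delay-and-sum beamforming: align each channel by its estimated
    integer sample delay, then average across channels.

    signals: (n_channels, n_samples) array
    delays:  per-channel delay in samples relative to the reference mic
    """
    aligned = np.stack([np.roll(ch, -d) for ch, d in zip(signals, delays)])
    return aligned.mean(axis=0)

# Toy check: the same waveform arriving with different delays is realigned,
# so the beamformer output matches the reference channel.
t = np.arange(160)
clean = np.sin(2 * np.pi * t / 16)
mics = np.stack([np.roll(clean, d) for d in (0, 3, 7)])
enhanced = delay_and_sum(mics, [0, 3, 7])
```

With matched delays the channels add coherently while uncorrelated noise would average down, which is the point of the technique.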
  
 
==== Dropout & Maxout & rectifier ====
* HOLD
* Need to solve the too-small learning-rate problem.
* 20h small-scale sparse DNN with rectifier. -- Chao Liu
* 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning. -- Mengyuan Zhao
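The pruning rule named above can be sketched in isolation (the actual DNN training recipe is not shown; this is only the weight-magnitude criterion applied to one weight matrix):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out (approximately) the smallest-magnitude `sparsity` fraction
    of the entries of weight matrix `w`; ties at the threshold are also pruned."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([[0.10, -0.50],
              [2.00,  0.05]])
pruned = magnitude_prune(w, 0.5)  # drops the two smallest-magnitude weights
```

The sparse matrix keeps the large weights untouched, which is why magnitude pruning usually degrades accuracy only mildly at moderate sparsity.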
 
  
 
==== Convolutive network ====
* Convolutive network (DAE)
:* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
:* Technical report writing: Mian Wang, Yiye Lin, Shi Yin, Mengyuan Zhao
:* reproduce experiments -- Yiye
  
 
==== DNN-DAE (Deep Auto-Encoder DNN) ====
* HOLD
* Technical report to draft: Xiangyu Zeng, Shi Yin, Mengyuan Zhao, and Zhiyong Zhang
* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=318
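For readers unfamiliar with the DAE idea behind this line of work: a denoising autoencoder is trained to map a corrupted input back to its clean version. A minimal single-hidden-layer NumPy sketch (the dimensions, additive-noise corruption, and plain SGD here are illustrative assumptions, not the report's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 8, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_in, n_hid)); b2 = np.zeros(n_in)

def reconstruct(v):
    h = np.tanh(W1 @ v + b1)   # encoder
    return W2 @ h + b2, h      # linear decoder

x = rng.normal(size=n_in)      # a "clean" feature vector
lr = 0.05
for _ in range(300):
    x_noisy = x + rng.normal(0.0, 0.1, n_in)  # corrupt the input
    y, h = reconstruct(x_noisy)
    err = y - x                # MSE gradient: train to recover the CLEAN target
    dh = (W2.T @ err) * (1.0 - h * h)         # backprop through tanh
    W2 -= lr * np.outer(err, h); b2 -= lr * err
    W1 -= lr * np.outer(dh, x_noisy); b1 -= lr * dh

final_mse = np.mean((reconstruct(x)[0] - x) ** 2)
```

In the DNN-DAE setting the encoder output (or the reconstruction) serves as a noise-robust feature for the downstream acoustic model.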
  
 
==== RNN-DAE (Deep Auto-Encoder RNN) ====
* HOLD
* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
  
 
==== VAD ====
* DAE
:* HOLD
* Technical report done. -- Shi Yin
  
 
==== Speech rate training ====
:* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
:* Technical report to draft: Xiangyu Zeng, Shi Yin
:* Prepare for NCMSSC
  
 
==== Confidence ====
* HOLD
* Reproduce the experiments on the Fisher dataset.
* Use the Fisher DNN model to decode the all-WSJ dataset.
* prepare scoring for the puqiang data
 
  
 
==== Neural network visualization ====
* http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=324
* Technical report writing: Mian Wang.
  
 
===Speaker ID===
 
===Sparse NN in NLP===
* write a technical report (by Wednesday) and give a presentation.
* prepare the ACL submission.
  
 
 
==== context framework ====
* code for the demo
:* switch to a knowledge graph, and learn the D2R tool and JENA [[媒体文件:政府组织机构图谱--汇联.pdf| 政府组织推演]] [[媒体文件:员工信息推演_知识图谱.pdf|员工信息实例推演]]
 
==== query normalization ====
* use NER to normalize query words
* the new intern will install SEMPRE
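NER-based normalization can be pictured as replacing recognized entity mentions with a canonical slot before matching. A toy dictionary-lookup sketch (the entity list is invented for illustration; a real system would use a trained NER tagger):

```python
# Map surface mentions to normalized slots. Invented examples only.
ENTITIES = {
    "beijing": "<LOCATION>",
    "tsinghua university": "<ORGANIZATION>",
}

def normalize_query(query):
    """Lowercase the query and replace known entity mentions with type slots."""
    q = query.lower()
    # Replace longer mentions first so shorter substrings don't clobber them.
    for mention in sorted(ENTITIES, key=len, reverse=True):
        q = q.replace(mention, ENTITIES[mention])
    return q

normalized = normalize_query("weather in Beijing")
```

Matching on the normalized form lets one QA pattern cover every location instead of one pattern per city.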

Latest revision as of 01:16, 9 March 2015


==Text Processing==

===LM development===

====Domain specific LM====
* LM2.X
:* mix the sougou2T-lm, KN discount (done)
:* train a large LM using the 25w dict. (hanzhenglong/wxx)
:* v2.0a: adjust the weights; a smaller weight on the transcription is better (done)
:* v2.0b: add the v1.0 vocab (done)
:* v2.0c: filter out the useless words (next week)
* set up the test set for new words (hold)
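The LM mixing step above is typically linear interpolation of the component models. A toy sketch over unigram distributions (the weight 0.7 and the tiny vocabularies are invented for illustration):

```python
def interpolate(p_domain, p_general, lam):
    """Linear interpolation: p(w) = lam * p_domain(w) + (1 - lam) * p_general(w)."""
    vocab = set(p_domain) | set(p_general)
    return {w: lam * p_domain.get(w, 0.0) + (1 - lam) * p_general.get(w, 0.0)
            for w in vocab}

# Invented toy distributions standing in for the domain LM and the sougou2T LM.
p_domain = {"hello": 0.6, "kaldi": 0.4}
p_general = {"hello": 0.9, "world": 0.1}
mixed = interpolate(p_domain, p_general, 0.7)
```

Because both components sum to one, the mixture is again a proper distribution; in practice the weight is tuned on held-out domain text.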

====tag LM====
* Tag LM
:* add 3-class tags and test
:* similar-word extension in FST
:* improve the keyword weight in G; the results are good on keyword recognition
:* ready to deal with mixed English-Chinese input
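In a tag (class-based) LM, an entity word is scored through its tag: the n-gram model predicts the tag, and a within-class distribution predicts the word. A toy sketch with invented probabilities:

```python
# p(w_i | history) = p(tag | history) * p(w_i | tag):
# new words of a known class (e.g. person names) get nonzero probability
# without retraining the n-gram model. All numbers below are invented.
p_tag_given_hist = {"<PERSON>": 0.2, "said": 0.1}
p_word_given_tag = {"<PERSON>": {"zhangsan": 0.5, "lisi": 0.5}}

def tag_lm_prob(word, tag):
    """Score a word through its class: tag n-gram prob times in-class prob."""
    return p_tag_given_hist[tag] * p_word_given_tag[tag][word]

p = tag_lm_prob("lisi", "<PERSON>")
```

The FST view of the same idea replaces the tag arc in G with a small sub-FST enumerating the class members, which is what the similar-word extension above manipulates.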

====RNN LM====
* rnn
:* test the WER of the RNNLM on Chinese data from jietong-data
:* generate an n-gram model from the RNNLM and test the PPL with different-size texts
* lstm+rnn
:* check the LSTM-RNNLM code for how to initialize and update the learning rate (hold)
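For the PPL tests above, perplexity is the exponentiated average negative log-probability per word. A minimal sketch with a stand-in model (a uniform unigram over a 10-word vocabulary, just to make the formula concrete):

```python
import math

def perplexity(word_probs):
    """PPL = exp(-(1/N) * sum(log p(w_i))) over all N scored words."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

# A uniform model over a 10-word vocabulary scoring a 4-word sentence
# assigns p = 0.1 to every word, so the perplexity is exactly 10.
ppl = perplexity([0.1, 0.1, 0.1, 0.1])
```

Comparing this number for the RNNLM-derived n-gram against the baseline n-gram on the same test text is the experiment the bullet describes.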

===Word2Vector===

====W2V based doc classification====
* data preparation.
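A common baseline for W2V-based doc classification is to average word vectors into a document vector and classify by the nearest class centroid. A toy sketch with made-up 2-D vectors (real ones would come from a trained word2vec model):

```python
import numpy as np

# Made-up 2-D "word vectors" for illustration only.
vecs = {"stock": np.array([1.0, 0.0]), "market": np.array([0.9, 0.1]),
        "goal":  np.array([0.0, 1.0]), "match":  np.array([0.1, 0.9])}

def doc_vector(words):
    """Average the word vectors to get one fixed-size document vector."""
    return np.mean([vecs[w] for w in words], axis=0)

centroids = {"finance": doc_vector(["stock", "market"]),
             "sports":  doc_vector(["goal", "match"])}

def classify(words):
    """Assign the class whose centroid is nearest to the document vector."""
    d = doc_vector(words)
    return min(centroids, key=lambda c: np.linalg.norm(d - centroids[c]))

label = classify(["market", "stock"])
```

The averaging step is where the data preparation above matters: segmentation and vocabulary coverage decide which words contribute a vector at all.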

====Knowledge vector====
* paper is done, submitted to ACL

====Character to word====
* character-to-word conversion (hold)

====Word vector online learning====
* prepare the ACL submission

===Translation===
* v5.0 demo released
* cut the dict and use the new segmentation tool


===QA===

====improve fuzzy match====
* add synonym similarity using the MERT-4 method (hold)

====improve Lucene search====
* committed to Rong Liu for check-in
* online learning to rank
:* the tool is ready and set up for testing

====online learning====
* a first simple version of the online-learning part of QA
