Difference between revisions of "ASR Status Report 2017-1-3"


Revision as of 06:25, 3 January 2017 (Tue)

Date: 2016.12.26
Jingyi Lin

Last Week:
  • Learn how to build Dr. Wang's personal web page, and build it.
  • Prepare for the CSLT Annual Meeting.

This Week:
  • Finish Dr. Wang's personal web page.
  • Take photos of CSLT members.
Yanqing Wang

Last Week:
  • Implement the detection mechanism over a socket.
  • Find the best parameters to avoid over-fitting.
  • Add a two-class SVM to the program.
  • Make the GUI prettier and easier to use.
  • Improve the program's robustness.
  • Screenshots: data sender (dataSender.png), data analyzer (dataAnalyser.png).

This Week:
  • Write a document on the program.
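The socket-based detection mechanism above can be sketched roughly as follows. This is only an illustration: the newline-delimited text protocol, the function names, and the local TCP setup are assumptions, not the actual dataSender/dataAnalyser design.

```python
# Minimal sketch of a sender/analyser pair over a local TCP socket.
# The newline-delimited text protocol is an assumption for illustration;
# the real dataSender/dataAnalyser programs may use a different format.
import socket
import threading

def analyser(server_sock, results):
    """Accept one connection and read one newline-terminated sample."""
    conn, _ = server_sock.accept()
    with conn, conn.makefile("r") as f:
        results.append(f.readline().strip())

def send_sample(host, port, sample):
    """Connect to the analyser and send one sample line."""
    with socket.create_connection((host, port)) as s:
        s.sendall((sample + "\n").encode())

def run_once(sample):
    """Start an analyser on a free local port, send one sample, return it."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    results = []
    t = threading.Thread(target=analyser, args=(server, results))
    t.start()
    send_sample("127.0.0.1", port, sample)
    t.join()
    server.close()
    return results[0]
```

Binding to port 0 and reading the assigned port back avoids hard-coding an endpoint, which also sidesteps port collisions when testing locally.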
Hang Luo

Last Week:
  • Run joint training; write systematic scripts and documentation.

This Week:
  • Finish the joint-training documentation.
  • Summarize the joint-training experiment results.
  • Write a review of mixlingual (mixed-language) speech recognition.
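Joint (multi-task) training typically optimizes a weighted sum of per-task losses; the tiny helper below only illustrates that combination. The weights are illustrative values, not those used in the actual experiments.

```python
# Joint (multi-task) training combines per-task losses into one objective,
# most commonly as a weighted sum. Weights here are purely illustrative.
def joint_loss(task_losses, weights):
    """Weighted sum of per-task losses; one weight per task."""
    assert len(task_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, task_losses))
```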
Ying Shi

Last Week:
  • Crawl a corpus from the Internet (not yet verified whether the corpus is suitable).
  • Build the new LM (complete).
  • Train the new AM (complete).
  • Write part of the TRP.

This Week:
  • Finish the TRP.
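To make the "make a new LM" step concrete, here is a toy count-based bigram language model with add-one smoothing. This is only a sketch of the idea; the actual LM was presumably trained on the crawled corpus with a standard toolkit, not with this code.

```python
# Toy count-based bigram LM with add-one (Laplace) smoothing.
# Illustrative only; a real LM would be built with a standard toolkit.
from collections import Counter

def train_bigram(sentences):
    """Return (P(w | w_prev), vocab) estimated with add-one smoothing."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])                 # history counts
        bigrams.update(zip(toks[:-1], toks[1:]))   # adjacent pairs
    V = len(vocab)

    def prob(w_prev, w):
        return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

    return prob, vocab
```

Add-one smoothing keeps every probability non-zero, so unseen bigrams from the crawled text still get a small score instead of breaking the model.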
Yixiang Chen

Last Week:
  • Prepare the speech-data input (block-segmentation trick).
  • Complete the initial version of max-margin SRE.
  • Write TRP-20160012, "User Guide for the Kaldi i-vector-based Speaker Recognition System" (基于Kaldi i-vector的说话人识别系统使用说明).

This Week:
  • Prepare the thesis proposal.
  • Integrate CNN + max-margin.
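"Max-margin" training for SRE usually means pushing same-speaker trial scores above different-speaker scores by at least a margin, via a pairwise hinge loss. The sketch below illustrates only that loss; the function name, margin value, and score pairing are assumptions, not the actual CNN + max-margin system.

```python
# Pairwise hinge (max-margin) loss sketch for speaker recognition:
# same-speaker scores should beat different-speaker scores by `margin`.
# Purely illustrative; not the actual max-margin SRE implementation.
def max_margin_loss(pos_scores, neg_scores, margin=0.5):
    """Average hinge loss over all (positive, negative) score pairs."""
    total = sum(max(0.0, margin - (sp - sn))
                for sp in pos_scores for sn in neg_scores)
    return total / (len(pos_scores) * len(neg_scores))
```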
Lantian Li

Last Week:
  • Deep speaker embedding:
    • Prepare two datasets and build the i-vector baselines.
  • Write TRP-20160012, "User Guide for the Kaldi i-vector-based Speaker Recognition System" (基于Kaldi i vector的说话人识别系统使用说明).
  • Write the book chapter on robust SRE.
  • Set up the WeChat official account.

This Week:
  • Continue deep speaker embedding.
  • Continue writing the book.
  • Replay detection for the INTERSPEECH challenge.
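The i-vector baselines mentioned above are typically scored with cosine similarity between two fixed-length speaker vectors. The sketch below shows that standard scoring step; the vectors in the test are toy values, not real i-vectors.

```python
# Cosine scoring between two fixed-length speaker vectors (i-vectors or
# deep embeddings), the usual baseline trial score in SRE systems.
import math

def cosine_score(a, b):
    """Cosine similarity in [-1, 1]; higher suggests the same speaker."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```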
Zhiyuan Tang

Last Week:
  • TRP "How to Config Kaldi nnet3 (in Chinese)", describing the usage of 34 components (not yet finished);
  • Outline of, and additional notes for, the TRP on "Multi-task Recurrent Model for True Multilingual Speech Recognition";
  • Generative models, part of the Deep Learning chapter.

This Week:
  • Finish, check, and submit the above three writings.