Difference between revisions of "2019-01-23"

From cslt Wiki
 
(9 intermediate revisions by 6 users not shown)

Latest revision as of 02:14, 24 January 2019 (Thursday)

People | Last Week | This Week | Task Tracking (Deadline)
Yibo Liu
  Last week:
  • Started to reconstruct the vivi code with a better structure.
  This week:
  • In particular, build proper models for planning and post-processing.
Xiuqi Jiang
  Last week:
  • Designed a better code structure for further experiments.
  • Improved vivi2.0 and made some adjustments to the .sh script.
  This week:
  • Build the code under the new structure.
Jiayao Wu
  Last week:
  • Ran experiments on node sparseness and updated the results on cvss.
  • Re-labeled some data.
  This week:
  • Keep doing experiments on pruning.
  • Get familiar with PyTorch.
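A minimal sketch of magnitude pruning, the usual starting point for pruning experiments like those mentioned above. The weight values and the 50% sparsity target here are illustrative, not from the report:

```python
def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction zeroed out."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight; everything
    # at or below it is pruned (ties at the threshold are pruned too).
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

print(magnitude_prune([0.9, -0.05, 0.4, -0.8, 0.1, 0.02], sparsity=0.5))
# → [0.9, 0.0, 0.4, -0.8, 0.0, 0.0]
```

In a real setup the same mask is applied layer by layer to the network's weight tensors, followed by fine-tuning to recover accuracy.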
Zhaodi Qi
  Last week:
  • Reduced the LID model and tested the results.
  • Tested test sets from different channels.
  • Wrote a model based on ASR (TDNN-F) and LID (TDNN), similar to PTN, to address channel inconsistency.
  This week:
  • Complete the ASR-LID model.
Jiawei Yu
  Last week:
  • Wrote a TensorFlow learning document (not yet completed).
  • Read some papers on attention and found some attention code on GitHub.
  This week:
  • Try to run the attention code and figure out how it works.
Yunqi Cai
  Last week:
  • Figured out how the BERT model creates the pretraining data and does the pretraining.
  • Tried to use BERT for error correction of text sentences.
  • Re-labeled some ASR data.
  • Tested the vivi2.0 model.
  This week:
  • Construct a text-sentence error-correction model.
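A hypothetical sketch of the BERT-style error-correction idea mentioned above: mask each position in turn and let a masked language model propose a higher-probability replacement. `toy_mlm` and its tiny bigram table are stand-ins for a real pretrained masked LM such as BERT:

```python
# Toy stand-in for a masked LM: scores candidates given the previous word.
TOY_LM = {
    "i": {"like": 0.6, "lick": 0.01, "love": 0.3},
    "like": {"cats": 0.5, "cat": 0.2},
}

def toy_mlm(context, candidates):
    """Return the most probable candidate given the context (stand-in for BERT)."""
    scores = TOY_LM.get(context, {})
    return max(candidates, key=lambda w: scores.get(w, 0.0))

def correct(tokens, vocab, model=toy_mlm):
    """Mask each position in turn and keep the model's best fill-in."""
    out = list(tokens)
    for i in range(1, len(out)):
        candidates = set(vocab) | {out[i]}  # the original token may survive
        out[i] = model(out[i - 1], candidates)
    return out

print(correct(["i", "lick", "cats"], ["like", "love", "cats"]))
# → ['i', 'like', 'cats']
```

A real implementation would score candidates with BERT's full bidirectional context rather than a single preceding word.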
Dan He
  Last week:
  • Ran experiments comparing test time and updated the results on cvss.
  • Read the experiment code carefully.
  This week:
  • Directly decompose the trained parameters and put them into the network for retraining.
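One common way to "decompose trained parameters and put them back into the network", sketched here under assumed shapes and rank (not from the report): replace a trained weight matrix W with the low-rank product of two factors from a truncated SVD, then retrain the factors:

```python
import numpy as np

def low_rank_factors(W, rank):
    """Truncated SVD: W (m x n) -> A (m x rank), B (rank x n) with W ≈ A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Illustrative weight matrix with rank at most 4, so a rank-4 truncation
# reconstructs it exactly (up to numerical precision).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4)) @ rng.standard_normal((4, 16))
A, B = low_rank_factors(W, rank=4)
print(np.allclose(A @ B, W))  # → True
```

In a network, the single m×n layer is replaced by two smaller layers (m×r and r×n), cutting parameters and test-time cost when r is small, after which the factors are fine-tuned.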
Yang Zhang
  Last week:
  • Re-modified the nginx configuration and changed the server networking structure.
  • Started learning about VAEs and ran a test (https://github.com/hwalsuklee/tensorflow-mnist-VAE) on the wolf server.
  This week:
  • Continue to learn about and test VAEs.
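The core trick that VAE implementations such as the tensorflow-mnist-VAE example rely on is reparameterization: instead of sampling z ~ N(mu, sigma²) directly, sample eps ~ N(0, 1) and compute z = mu + sigma·eps, so gradients can flow through mu and sigma. A minimal framework-free sketch (function name and arguments are illustrative):

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    """Sample z = mu + sigma * eps, with sigma = exp(0.5 * log_var)."""
    if eps is None:
        eps = random.gauss(0.0, 1.0)  # eps ~ N(0, 1)
    sigma = math.exp(0.5 * log_var)
    return mu + sigma * eps

# With eps fixed to 0 the sample collapses to the mean:
print(reparameterize(1.5, 0.0, eps=0.0))  # → 1.5
```

Encoders in practice output `mu` and `log_var` per latent dimension; parameterizing the log-variance keeps sigma positive without constraints.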
Wenwei Dong