Asr-language-processing-research-s2s-generation

Main Idea

People

Yang Feng, Andi Zhang

Time Table

Week | Work Plan | Work Done
2016/11/07-2016/11/13
  • successfully run the code of "Neural machine translation by jointly learning to align and translate" on GPU
  • start working on model_step_1: linear trans->cosine->linear trans->softmax; start coding if time permits (see the first sketch after this table)
2016/11/14-2016/11/20
  • code model_step_1
  • run & test the code
2016/11/21-2016/11/27
  • continue working on model_step_1
  • start working on model_step_2: lstm->mn->softmax (see the second sketch after this table)
2016/11/28-2016/12/04
  • code, debug & run model_step_2
  • start working on model_step_3: joint training
2016/12/05-2016/12/11
  • code, debug & run model_step_3
2016/12/12-2016/12/18
  • find ways to speed up the model if it is slow
2016/12/19-2016/12/25
  • select memory to optimize the results
2016/12/26-2016/12/31
  • produce the final results
  • check for any possible faults
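
The step notation in the plan is terse, so here is a minimal NumPy sketch of model_step_1 (linear trans->cosine->linear trans->softmax). It assumes the cosine stage scores the linearly transformed input against a set of memory vectors; every name and size below (W1, M, W2, the toy dimensions) is an illustrative assumption, not the project's actual code.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def model_step_1(x, W1, M, W2):
    """x: (d_in,) input; W1: (d_in, d_hid); M: (n_mem, d_hid) memory; W2: (n_mem, n_out)."""
    h = x @ W1                                              # first linear transform
    # cosine similarity between the transformed input and every memory vector (assumed reading of "cosine")
    sim = (M @ h) / (np.linalg.norm(M, axis=1) * np.linalg.norm(h) + 1e-8)
    logits = sim @ W2                                       # second linear transform
    return softmax(logits)                                  # output distribution

rng = np.random.default_rng(0)
d_in, d_hid, n_mem, n_out = 8, 16, 10, 5                    # toy sizes
p = model_step_1(rng.standard_normal(d_in),
                 rng.standard_normal((d_in, d_hid)),
                 rng.standard_normal((n_mem, d_hid)),
                 rng.standard_normal((n_mem, n_out)))
print(p.shape, p.sum())                                     # (5,) and a total of 1.0

The cosine normalization keeps the matching scores bounded in [-1, 1], so the second linear layer only has to learn how to weight memory matches, not their scale.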
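
In the same spirit, here is a minimal sketch of model_step_2 (lstm->mn->softmax), assuming "mn" refers to a memory-network-style read, i.e. attention over a set of memory slots keyed by the LSTM state. The single-step LSTM, the additive combination of state and memory read, and all names and shapes are assumptions for illustration only.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates are stacked as [i, f, o, g] in the weight matrices."""
    z = x @ W + h @ U + b
    d = h.shape[-1]
    i, f, o = sigmoid(z[:d]), sigmoid(z[d:2*d]), sigmoid(z[2*d:3*d])
    g = np.tanh(z[3*d:])
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

def model_step_2(x, h, c, lstm_params, M, W_out):
    """x: (d_in,); h, c: (d_h,); M: (n_mem, d_h) memory slots; W_out: (d_h, n_out)."""
    h, c = lstm_step(x, h, c, *lstm_params)
    attn = softmax(M @ h)                                   # attention weights over memory slots
    read = attn @ M                                         # memory read: weighted sum of slots
    return softmax((h + read) @ W_out), h, c                # output distribution plus new state

rng = np.random.default_rng(0)
d_in, d_h, n_mem, n_out = 8, 16, 10, 5                      # toy sizes
lstm_params = (rng.standard_normal((d_in, 4 * d_h)),        # W
               rng.standard_normal((d_h, 4 * d_h)),         # U
               np.zeros(4 * d_h))                           # b
p, h, c = model_step_2(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h),
                       lstm_params, rng.standard_normal((n_mem, d_h)),
                       rng.standard_normal((d_h, n_out)))
print(p.shape, p.sum())                                     # (5,) and a total of 1.0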

Progress