NLP Status Report 2017-4-17

From cslt Wiki
{| class="wikitable"
!Date !! People !! Last Week !! This Week
|-
| rowspan="6"|2017/4/5
 
|Yang Feng ||
* Got the sampled 1M (100w) good data and ran Moses (BLEU: 30.6)
* Reimplemented the ACL idea (added some optimization to the previous code) and checked the performance in the following incremental steps: 1. use s_i-1 as the memory query; 2. use s_i-1+c_i as the memory query; 3. use y as the memory states for attention; 4. use y + smt_attentions * h as the memory states for attention
* Ran experiments for the above steps, but the loss was inf; I am looking into the cause
 
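The query choices in steps 1 and 2 above can be sketched with plain dot-product attention over the memory. A minimal NumPy illustration, assuming toy shapes and random values — `s_prev`, `c_i`, `keys`, and `values` are hypothetical stand-ins, not the real model's tensors:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def memory_attention(query, memory_keys, memory_values):
    """Dot-product attention: `query` attends over `memory_keys`
    and returns a weighted sum of `memory_values`."""
    scores = memory_keys @ query     # (num_slots,)
    weights = softmax(scores)        # attention distribution over slots
    return weights @ memory_values   # (value_dim,)

# Illustrative shapes: decoder state s_{i-1}, context c_i, and N memory slots.
dim, n_slots = 8, 5
rng = np.random.default_rng(0)
s_prev = rng.normal(size=dim)            # s_{i-1}
c_i = rng.normal(size=dim)               # attention context c_i
keys = rng.normal(size=(n_slots, dim))
values = rng.normal(size=(n_slots, dim))

# Step 1: use s_{i-1} alone as the memory query
out1 = memory_attention(s_prev, keys, values)
# Step 2: use s_{i-1} + c_i as the memory query
out2 = memory_attention(s_prev + c_i, keys, values)
print(out1.shape, out2.shape)  # (8,) (8,)
```

Steps 3 and 4 only change what `memory_values` holds (y, or y plus the SMT-attention-weighted encoder states h), so the same attention routine applies.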
||
 
*do experiments and write the paper
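On the inf loss noted in this row: a common cause is log(0) in the cross-entropy when a probability underflows to zero. A minimal NumPy sketch of a finiteness check, assuming nothing about the actual training code — `check_finite` and the clipping threshold are illustrative:

```python
import numpy as np

def check_finite(name, arr):
    """Raise if an intermediate tensor contains inf/nan; an inf loss
    usually traces back to log(0) or an overflowed exp."""
    arr = np.asarray(arr, dtype=float)
    if not np.isfinite(arr).all():
        bad = arr.size - np.count_nonzero(np.isfinite(arr))
        raise FloatingPointError(f"{name}: {bad} non-finite values")
    return arr

# Typical culprit: cross-entropy on a probability that underflowed to 0.
probs = np.array([0.9, 1e-12, 0.0])  # last entry would give log(0) = -inf
safe = np.clip(probs, 1e-10, 1.0)    # clipping avoids the inf
loss = -np.log(check_finite("clipped probs", safe)).mean()
print(np.isfinite(loss))  # True
```

Instrumenting each of the four steps with such a check helps localize which intermediate first goes non-finite.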
 
 
|-
 
 
|Jiyuan Zhang ||
 
 
* run the ppg model using different datasets
* check the emnlp paper
 
 
||  
 
* improve the performance of qx's model
 
 
|-
 
 
|Andi Zhang ||
 
* revise the original OOV model so that it can automatically detect OOV words and translate them
* deal first with the situation where the source word is OOV but the target word is not
* it didn't predict correctly yet
 
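The automatic OOV detection described above can be sketched as a vocabulary lookup that marks unknown tokens and records their positions for separate handling. A minimal illustration — `find_oov` and the toy vocabulary are hypothetical, not the actual model's:

```python
def find_oov(tokens, vocab, unk="<unk>"):
    """Mark tokens outside `vocab` as OOV and return the marked sequence
    plus the positions of the OOV words for separate handling."""
    marked, oov_positions = [], []
    for i, tok in enumerate(tokens):
        if tok in vocab:
            marked.append(tok)
        else:
            marked.append(unk)
            oov_positions.append(i)
    return marked, oov_positions

# Hypothetical toy vocabulary.
vocab = {"the", "cat", "sat"}
tokens = "the cat sat on the mat".split()
marked, oov = find_oov(tokens, vocab)
print(marked)  # ['the', 'cat', 'sat', '<unk>', 'the', '<unk>']
print(oov)     # [3, 5]
```

The recorded positions are what lets a later step translate the OOV source words individually instead of emitting `<unk>`.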
||
 
* make the model work as intended
* deal with the situation where both the source and target words are OOV, then the other cases
 
 
|-
 
 
|Shiyue Zhang ||  
 
* got a reasonable baseline on the big zh-en data
 
 
||
 
* implement the memory model on this baseline and test it on the big data
 
 
|-
 
 
|Peilun Xiao ||
 
|}
Revision as of 02:45, 3 May 2017
