Difference between revisions of "NLP Status Report 2017-4-10"

Line 25:

|-
|Shiyue Zhang ||
* got a reasonable baseline on big zhen data
+ * working on a paper for EMNLP
||
* implement mem model on this baseline, and test on big data
+ * working on a paper for EMNLP
|-
|Peilun Xiao ||

Latest revision as of 02:52, 3 May 2017

Date People Last Week This Week
2017/4/5
Yang Feng
  • Got the sampled 100w (1M) good data and ran Moses (BLEU: 30.6)
  • Reimplemented the idea from the ACL paper (with some optimizations added to the previous code) and checked the performance in the following incremental steps: 1. use s_{i-1} as the memory query; 2. use s_{i-1} + c_i as the memory query; 3. use y as the memory states for attention; 4. use y + smt_attentions * h as the memory states for attention. (A brief sketch of these variants follows this block.)
  • Ran experiments for the above steps, but the loss was inf; I am looking into the cause
  • Do experiments and write the paper
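The four numbered steps describe attention over a translation-memory component. Below is a minimal NumPy sketch of those query/memory-state variants, written as an illustration only: the function name memory_attention, the projections W_q and W_k, and the dot-product scoring are assumptions, since the report only gives the symbols s_{i-1}, c_i, y, h and smt_attentions.

<pre>
# Minimal NumPy sketch of the four query / memory-state variants listed above.
# All names and shapes here are illustrative assumptions, not the actual code.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)      # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(query, mem_states, W_q, W_k):
    """Dot-product attention of a decoder query over a set of memory states."""
    q = query @ W_q                               # project the query, shape (d,)
    k = mem_states @ W_k                          # project the memory states, shape (M, d)
    weights = softmax(k @ q)                      # attention weights over M memory entries
    return weights @ mem_states                   # memory context vector, shape (d,)

d, M = 8, 5                                       # toy dimensions
rng = np.random.default_rng(0)
s_prev = rng.normal(size=d)                       # s_{i-1}: previous decoder state
c = rng.normal(size=d)                            # c_i: source context vector
y = rng.normal(size=(M, d))                       # y: memory target-side states
h = rng.normal(size=(M, d))                       # h: memory source-side annotations
smt_attn = softmax(rng.normal(size=(M, M)))       # smt_attentions: SMT alignment weights
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))

m1 = memory_attention(s_prev, y, W_q, W_k)        # step 1: query = s_{i-1}
m2 = memory_attention(s_prev + c, y, W_q, W_k)    # step 2: query = s_{i-1} + c_i
# step 3 keeps y as the memory states (as in steps 1-2);
# step 4 replaces them with y + smt_attentions * h:
m4 = memory_attention(s_prev + c, y + smt_attn @ h, W_q, W_k)
print(m1.shape, m2.shape, m4.shape)               # (8,) (8,) (8,)
</pre>

On the inf loss: one common cause (not necessarily the one here) is taking the log of a probability that has underflowed to zero; computing log-probabilities directly with a log-sum-exp (log-softmax) rather than log(softmax(x)) avoids it.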
Jiyuan Zhang
  • Convert the paper to the EMNLP style
  • Contact the ppg's author to get the code
  • Improve the performance of qx's model
Andi Zhang
  • Revise the original OOV model so that it can automatically detect OOV words and translate them (see the sketch after this list)
  • Deal first with the case where the source word is OOV but the target word is not
  • It did not predict correctly
  • Make the model work as we intended
  • Deal with the case where both the source word and the target word are OOV, then with the other cases
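The report does not say how the OOV model detects and translates OOV words, so the sketch below only illustrates one standard scheme: indexed UNK placeholders plus a post-processing step that either translates the original word with a lexicon or copies it through. mark_oov, restore_oov, vocab and lexicon are hypothetical names, not code from the actual model.

<pre>
# Minimal sketch of a common OOV-handling scheme, for illustration only.
# vocab, lexicon and the UNK_k convention are hypothetical, not from the report.

def mark_oov(tokens, vocab):
    """Replace out-of-vocabulary tokens with indexed UNK_k markers, remembering the originals."""
    marked, originals = [], {}
    for tok in tokens:
        if tok in vocab:
            marked.append(tok)
        else:
            key = f"UNK_{len(originals)}"
            originals[key] = tok
            marked.append(key)
    return marked, originals

def restore_oov(output_tokens, originals, lexicon):
    """Map UNK_k markers in the system output to a lexicon translation, or copy the source word."""
    restored = []
    for tok in output_tokens:
        if tok in originals:
            src = originals[tok]
            restored.append(lexicon.get(src, src))   # translate if the lexicon knows it, else copy
        else:
            restored.append(tok)
    return restored

# Toy usage: "Tsinghua" is OOV on the source side but has a lexicon translation.
vocab = {"we", "visited", "yesterday"}
lexicon = {"Tsinghua": "清华"}
marked, originals = mark_oov(["we", "visited", "Tsinghua", "yesterday"], vocab)
# ... run the NMT system on `marked`; pretend its output contains UNK_0 ...
hyp = ["昨天", "我们", "参观", "了", "UNK_0"]
print(restore_oov(hyp, originals, lexicon))          # ['昨天', '我们', '参观', '了', '清华']
</pre>

In this sketch, the case where the source word is OOV but a translation is available corresponds to the second bullet above, while the source-and-target-OOV case falls back to copying the source word through.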
Shiyue Zhang
  • Working on a paper for EMNLP
  • Working on a paper for EMNLP
Peilun Xiao