Difference between revisions of "NLP Status Report 2017-4-5"

From cslt Wiki
 
(4 intermediate revisions by 2 users not shown)
Latest revision as of 02:16, 5 April 2017

{|
!Date !! People !! Last Week !! This Week
|-
| rowspan="5"|2017/4/5
|Yang Feng ||
*Got the sampled 100w (1M-sentence) good data and ran Moses (BLEU: 30.6)
*Reimplemented the ACL idea (adding some optimizations to the previous code) and checked its performance in the following gradual steps: 1. use s_{i-1} as the memory query; 2. use s_{i-1}+c_i as the memory query; 3. use y as the memory states for attention; 4. use y + smt_attentions * h as the memory states for attention
*Ran experiments for the above steps, but the loss was inf; looking into the cause
||
*Do experiments and write the paper
|-
|Jiyuan Zhang ||
*Did keyword expansion on qx's model
*Fixed some bugs
*Read two papers
||
*Improve the performance of qx's model
|-
|Andi Zhang ||
*Revised the original OOV model so that it automatically detects OOV words and translates them
*Handled first the case where the source word is OOV but the target word is not
*The model did not yet predict correctly
||
*Make the model work as intended
*Handle the case where both the source and target words are OOV, then the remaining cases
|-
|Shiyue Zhang ||
*Got a reasonable baseline on the big zh-en data
||
*Implement the memory model on top of this baseline and test it on the big data
|-
|Peilun Xiao ||
||
|}
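The four memory-query variants in Yang Feng's report can be sketched with a toy dot-product attention. This is a minimal illustration, not the report's code: the names `s_prev`, `c_i`, `y`, the dimensions, and the random data are all placeholders.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_attention(query, memory_states):
    """Dot-product attention: score each memory state against the query,
    normalize, and return the weighted sum of memory states.
    query: (d,), memory_states: (T, d) -> context: (d,)"""
    scores = memory_states @ query          # (T,)
    weights = softmax(scores)               # attention distribution
    return weights @ memory_states          # (d,)

rng = np.random.default_rng(0)
d, T = 4, 3
s_prev = rng.normal(size=d)                 # stands in for s_{i-1}
c_i = rng.normal(size=d)                    # stands in for the source context
y = rng.normal(size=(T, d))                 # target-side memory states

# Variants 1/2 change the query; variants 3/4 change the memory states
# (here y alone; y + smt_attentions * h would be built analogously).
ctx1 = memory_attention(s_prev, y)          # query = s_{i-1}
ctx2 = memory_attention(s_prev + c_i, y)    # query = s_{i-1} + c_i
```

Each variant yields a context vector of the same shape, so they can be swapped into the decoder without other changes.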
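On the inf loss mentioned above: a common cause is log(0) in the cross-entropy when a predicted probability underflows. A hedged sketch of the usual guards (the epsilon value and helper names are typical choices, not taken from the report):

```python
import numpy as np

def check_finite(name, value):
    """Raise as soon as a training statistic becomes inf or nan, so the
    offending step can be inspected instead of the run silently diverging."""
    if not np.all(np.isfinite(value)):
        raise FloatingPointError(f"{name} became non-finite: {value!r}")
    return value

def safe_cross_entropy(p, eps=1e-8):
    # clip probabilities away from 0 so -log(p) cannot produce inf
    return -np.log(np.clip(p, eps, 1.0))

loss = safe_cross_entropy(np.float64(0.0))   # finite thanks to clipping
check_finite("loss", loss)
```

Running `check_finite` on the per-batch loss (and optionally on gradients) narrows an inf down to the first step that produces it.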
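The automatic OOV detection in Andi Zhang's model reduces, at its core, to flagging tokens absent from a fixed vocabulary. A minimal sketch with a made-up vocabulary and sentence (the actual model is an NMT system and is not reproduced here):

```python
# hypothetical source vocabulary for illustration only
src_vocab = {"the", "cat", "sat", "on", "mat"}

def find_oov(tokens, vocab):
    """Return the positions of tokens missing from the vocabulary --
    the words the OOV model has to detect and translate specially."""
    return [i for i, tok in enumerate(tokens) if tok not in vocab]

positions = find_oov(["the", "cat", "zapped", "mat"], src_vocab)  # -> [2]
```

The flagged positions are then handled by the OOV translation path, while in-vocabulary tokens go through the normal model.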