Difference between revisions of "NLP Status Report 2016-11-14"

From cslt Wiki

Latest revision as of 00:57, 15 November 2016

{| class="wikitable"
!Date !! People !! Last Week !! This Week
|-
| rowspan="4"|2016/10/31
|Yang Feng ||
* added new features to rnng+mn, including automatically detecting wrong sentences, swapping memories more frequently, and filtering memory units to speed up
* ran experiments for rnng+mn [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f8/Progress_of_RNNG_with_memory_network.pdf report]]
* read the code of sequence-to-sequence with TensorFlow
* recruited interns
* Huilan work summary
||
* optimize rnng+MN
* discuss the code with Jiyuan
* work with Andy on NMT
* intern interviews
* Huilan work
|-
|Jiyuan Zhang ||
* checked the previous encoder-memory code
* completed the decoder-memory code; experiments running
||
* continue to modify the memory model
|-
|Andi Zhang ||
* ran NMT (cs-en) on GPU, but BLEU is low, possibly because the corpus is small
* ran NMT on the paraphrase data set
* wrote the MemN2N document
||
* run NMT (fr-en) to reach a BLEU score comparable to the one reported in the paper
* run paraphrase experiments for validation
|-
|Shiyue Zhang ||
* tried rnng on GPU
* read Feng's code
* modified the model [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]
||
* try MKL
* modify the model
|}
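Several of the items above compare systems by BLEU score. As a minimal sketch of what that metric computes (sentence-level, uniform n-gram weights, single reference; real evaluations in this group would use a corpus-level toolkit script, and the function name `bleu` here is just for illustration):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hypothesis, n)
        ref_counts = ngrams(reference, n)
        # Clipped matches: each hypothesis n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum((hyp_counts & ref_counts).values())
        total = max(sum(hyp_counts.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision drives the geometric mean to 0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

For example, `bleu("the cat sat on the mat".split(), "the cat sat on the mat".split())` gives 1.0, while dropping a word yields a score strictly between 0 and 1.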