NLP Status Report 2016-11-14


Latest revision as of 00:57, 15 November 2016

{| class="wikitable"
! Date !! People !! Last Week !! This Week
|-
| 2016/10/31 || Yang Feng ||
* added new features to rnng+mn, including automatically detecting wrong sentences, swapping memories more frequently, and filtering memory units to speed up
* ran experiments for rnng+mn [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f8/Progress_of_RNNG_with_memory_network.pdf report]
* read the code of sequence-to-sequence with TensorFlow
* recruited interns
* Huilan work summary
||
* optimize rnng+mn
* discuss the code with Jiyuan
* work with Andi on NMT
* intern interviews
* Huilan work
|-
| || Jiyuan Zhang ||
* checked the previous encoder-memory code
* completed the decoder-memory code; experiments are running
||
* continue to modify the memory model
|-
| || Andi Zhang ||
* ran NMT (cs-en) on GPU, but the BLEU score is low, possibly due to the small corpus
* ran NMT on the paraphrase data set
* wrote the MemN2N document
||
* run NMT (fr-en) to reproduce the BLEU score reported in the paper
* run paraphrase experiments for validation
|-
| || Shiyue Zhang ||
* tried rnng on GPU
* read Feng's code
* modified the model [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]
||
* try MKL
* modify the model
|}
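Several entries above judge NMT runs by their BLEU score. As a reference for what is being computed, here is a minimal pure-Python BLEU sketch (an illustration only; which scorer the group actually used, e.g. multi-bleu.perl or NLTK, is not stated in the report):

```python
# Minimal sentence-level BLEU with uniform n-gram weights and a brevity
# penalty. Pure stdlib, for illustration -- not the group's evaluation script.
import math
from collections import Counter

def bleu(reference, hypothesis, max_n=4):
    """Modified n-gram precision (n = 1..max_n), geometric mean, brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        # Clipped n-gram matches: each hypothesis n-gram counts at most
        # as often as it appears in the reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed: any empty n-gram overlap zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * geo_mean

ref = "the cat sat on the mat".split()
hyp = "the cat sat on the mat".split()
print(bleu(ref, hyp))  # identical sentences score 1.0
```

Published NMT results usually report corpus-level BLEU with smoothing, so sentence-level numbers from a sketch like this are not directly comparable to the paper scores mentioned above.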