NLP Status Report 2016-11-28

 
{| class="wikitable"
!Date !! People !! Last Week !! This Week
 
|-
| rowspan="5"|2016/11/28
 
|Yang Feng ||
*[[rnng+MN:]] got the result of the k-means method; it is slightly worse;
*fixed the bug;
*analyzed the memory units, changed the similarity calculation, and reran.
*[[S2S+MN:]] read the code and discussed the implementation details with Andi;
*checked the WikiAnswers data and found the answers are usually much longer than the questions;
*read 12 QA-related papers in the proceedings of ACL16 and EMNLP16 but haven't found a proper dataset yet.
*[[Huilan's work:]] got a version with better results, focusing on syntactic transformation.
 
||
*[[rnng+MN:]] get the result with the new similarity calculation.
*[[S2S+MN:]] revise the TensorFlow code to make it equivalent to the Theano version.
*[[poetry:]] review Jiyuan's code.
*[[Huilan's work:]] continue adding syntactic information.
 
|-
|Jiyuan Zhang ||
*read three related papers for ideas: <br/> [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3e/1410.5401v2.pdf Neural Turing Machines]<br/>[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/54/1508.06576v1.pdf A Neural Algorithm of Artistic Style]<br/>[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/83/Dedaff23ad393c48fe7b7989542318a02dc0a06e.pdf Generating Long and Diverse Responses with Neural Conversation Models]
*polished the TRP [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c4/Memory-atten-model-public.pdf]
 
||
*improve the poem model
 
|-
|Andi Zhang ||
*dealt with the zh2en data set and ran it on the NTM
*made a small breakthrough on the code
||
*get the encoder outputs to form the memory
*continue coding seq2seq with MemN2N
 
|-
|Shiyue Zhang ||
*found a bug in my code and fixed it.
*tried memory with a gate and found a big problem with the memory.
*reran previous models; the results are not better than the baseline. [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]
*reran the original model with the same seed and got exactly the same result.
*published a TRP [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Publication-trp]
 
||
*try to solve the memory problem
 
|-
|Guli ||
*made no progress during the first two days of the week.
*modified the code and ran NMT on the fr-en data set
*modified the code and ran NMT on the ch-uy data set
*writing a survey on Chinese-Uyghur MT
 
||
|}
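Yang Feng's entries mention clustering memory units with k-means and changing the similarity calculation. As a minimal pure-Python sketch of what such a scoring step could look like (cosine similarity is an assumed choice here; the report does not show the actual formula or code):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two vectors: one plausible way to score
    # a query against a memory unit (an assumption, not the report's code).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(query, memory_units):
    # Return the index of the memory unit most similar to the query.
    scores = [cosine_similarity(query, m) for m in memory_units]
    return max(range(len(scores)), key=scores.__getitem__)
```

Swapping in a different `cosine_similarity` body is all that "changed the calculation of similarity and reran" requires at this level of the sketch.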
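Andi's plan of using the encoder outputs to form the memory for seq2seq with MemN2N can be sketched as a single memory hop. This is a hedged illustration with assumed shapes and plain dot-product addressing, not the project's implementation:

```python
import math

def memn2n_read(query, memory_in, memory_out):
    # Single-hop MemN2N read: score the query against each input memory
    # slot (here, an encoder output vector), softmax the scores, and
    # return the attention-weighted sum of the output memory slots.
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory_in]
    mx = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(s - mx) for s in scores]
    z = sum(weights)
    probs = [w / z for w in weights]
    dim = len(memory_out[0])
    return [sum(p * mem[d] for p, mem in zip(probs, memory_out))
            for d in range(dim)]
```

In the full model this read vector would be combined with the decoder state; stacking several such hops gives the multi-layer MemN2N of Sukhbaatar et al.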
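Shiyue's check of rerunning the original model with the same seed and getting exactly the same result is the standard reproducibility sanity test. The idea in miniature, with the stdlib `random` module standing in for the framework's RNG:

```python
import random

def stochastic_run(seed, n=5):
    # Fixing the RNG seed before a run makes every random draw, and
    # therefore the whole result, exactly reproducible across reruns.
    random.seed(seed)
    return [random.random() for _ in range(n)]
```

Two runs with the same seed return identical values, so any difference between reruns must come from the code change under test rather than random initialization.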

Latest revision as of 09:22, 29 November 2016
