NLP Status Report 2016-12-05

From cslt Wiki
{| class="wikitable"
|-
! Date !! People !! Last Week !! This Week
|-
| rowspan="5"|2016/12/05
|Yang Feng ||
*[[S2S+MN:]] surveyed another 6 papers (18 papers in total) for data-set selection;
*read the tensorflow code in depth;
*wrote the manual for the tensorflow version of s2s [ongoing];
*[[PoemGen]]: discussed Jiyuan's implementation;
*read Jiyuan's code;
*[[Huilan's work:]] got the result of integrating syntactic information.
*[[Other]]: intern interview.
||
*[[rnng+MN:]] make more attempts.
*[[S2S+MN:]] complete the manual;
*add MN;
*[[poetry:]] experiment design;
*[[Huilan's work:]] come up with a system to submit.
|-
|Jiyuan Zhang ||
*restructured the code
*found the cause of the cost randomness
*modified the memory weight and ran experiments: [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Yanqing-weight%282.0%29.pdf romantic style] [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e4/Biansaishi-weight%282.0%29.pdf frontier style] [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/ec/Atten-model.pdf no style]
*read a paper [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1e/1610.09889v1.pdf]
*briefly explained my code to Professor Feng
*discussed with Liantian how to implement his idea in tensorflow
||
*improve the poem model
|-
|Andi Zhang ||
*turned to focus on the tensorflow code
*finished the code that outputs the encoder outputs, but some format problems remain
||
*finish the remaining part mentioned above
|-
|Shiyue Zhang ||
*drew t-SNE pictures [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]]
*tried a trained gate to switch between rnng and mem, which got a slightly better result.
*reran the original model with the same seed and got exactly the same result.
*published a TRP [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Publication-trp]
||
|-
|Guli ||
*preparing more data sets
*working on adding a translation module to the code
*writing a survey about Chinese-Uyghur MT
||
*prepare more data
*prepare for the thesis report
|}
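Several rows above concern adding memory networks (MN) to seq2seq and rnng models. As a rough illustration only — the project's actual rnng+MN / S2S+MN code is not shown here, and the `address_memory` name and dot-product scoring below are assumptions — a memory is typically addressed by scoring a query vector against each slot and normalizing the scores with a softmax:

```python
import numpy as np

def address_memory(query, memory):
    """Score a query against each memory slot and return softmax-normalized
    attention weights (a common MN formulation; the real code may differ)."""
    scores = memory @ query                          # dot-product similarity per slot
    scores -= scores.max()                           # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over slots
    return weights

rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))   # 4 memory slots of dimension 8
query = rng.normal(size=8)
w = address_memory(query, memory)  # one positive weight per slot, summing to 1
```

The resulting weights can then be used to read a convex combination of the memory slots, which is where variations in the similarity calculation (dot product, cosine, learned bilinear forms) change behavior.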

Latest revision as of 07:53, 5 December 2016 (Monday)

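Shiyue's note that rerunning the original model with the same seed reproduced the result exactly comes down to fixing every random-number generator before training. A minimal sketch of the idea, using plain numpy as a stand-in (the actual model and its framework are not reproduced here):

```python
import numpy as np

def run_training_step(seed):
    """Stand-in for one training run: with the generator seeded,
    the same seed always yields the same numbers, hence the same result."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=5)        # pretend parameter initialization
    loss = float((weights ** 2).sum())  # pretend objective value
    return loss

first = run_training_step(1234)
second = run_training_step(1234)
print(first == second)                  # True: identical seed, identical result
```

In a real experiment every source of randomness (framework, numpy, Python's `random`, data shuffling) must be seeded for the run to be exactly repeatable.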