NLP Status Report 2016-12-05

Date: 2016/12/05

Yang Feng
  Last week:
  • rnng+MN: got the result of the k-means method; it is slightly worse;
  • fixed the bug;
  • analyzed the memory units, changed the similarity calculation, and reran (see the sketch after this list);
  • S2S+MN: read the code and discussed the implementation details with Andy;
  • checked the Wikianswers data and found that the answers are usually much longer than the questions;
  • read 12 QA-related papers in the ACL16 and EMNLP16 proceedings, but haven't found a proper dataset yet;
  • Huilan's work: got a version with better results, focusing on syntactic transformation.
  This week:
  • rnng+MN: get the result with the new similarity calculation;
  • S2S+MN: revise the TensorFlow code to make it equivalent to the Theano version;
  • poetry: review Jiyuan's code;
  • Huilan's work: continue adding syntactic information.
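The report does not say which similarity calculation was adopted, so the following is only a minimal numpy sketch of one common choice: cosine similarity between a query vector and the memory units, followed by a softmax-weighted read. All function names and shapes here are assumptions for illustration, not the actual rnng+MN code.

<source lang="python">
import numpy as np

def cosine_similarity(query, memory):
    """Score a query vector against each memory unit.

    query:  (d,) hidden state used to address the memory
    memory: (n, d) matrix of n memory units
    Returns (n,) similarity scores in [-1, 1].
    """
    q = query / (np.linalg.norm(query) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    return m @ q

def read_memory(query, memory):
    """Soft read: softmax over similarities, then a weighted sum of units."""
    scores = cosine_similarity(query, memory)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory
</source>

Normalizing both sides makes the scores insensitive to vector magnitude, which is one reason cosine similarity is often swapped in when raw dot-product scores behave poorly.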
Jiyuan Zhang
  Last week:
  • restructured the code;
  • found the cause of the cost randomness;
  • modified the memory weight and ran experiments (see the sketch after this list): [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Yanqing-weight%282.0%29.pdf romantic style] [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Yanqing-weight%282.0%29.pdf frontier style]
  • read a paper;
  • briefly explained my code to Ms. Feng;
  • discussed with Liantian how to implement his idea in TensorFlow.
  This week:
  • improve the poem model.
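The linked report's filename suggests a memory weight of 2.0 was among the settings tried. Purely as a rough illustration of the knob being tuned, here is one way a scalar memory weight could enter a poem model's decoder update; <code>combine_with_memory</code> and everything in it are hypothetical, not Jiyuan's code.

<source lang="python">
import numpy as np

def combine_with_memory(decoder_state, memory_read, mem_weight=2.0):
    """Blend the decoder hidden state with a vector read from memory.

    mem_weight scales the memory contribution before mixing; varying
    this scalar (e.g. 2.0, per the linked experiment) changes how
    strongly the memory steers generation.
    """
    mixed = decoder_state + mem_weight * memory_read
    return np.tanh(mixed)  # squash back into the state's usual range
</source>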
Andi Zhang
  Last week:
  • dealt with the zh2en data set and ran it on the NTM;
  • made a small breakthrough on the code.
  This week:
  • get the output of the encoder to form the memory (see the sketch after this list);
  • continue the coding work on seq2seq with MemN2N.
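A common way to form a memory from encoder outputs, in the MemN2N spirit, is to stack the per-timestep encoder states and address them with a query vector. This numpy sketch makes assumed shape and naming choices and is not the actual seq2seq+MemN2N implementation mentioned above.

<source lang="python">
import numpy as np

def form_memory(encoder_outputs):
    """Stack T per-timestep encoder states of size d into a (T, d) memory."""
    return np.stack(encoder_outputs)

def memn2n_read(memory, query, W_in, W_out):
    """One MemN2N-style hop over the encoder memory.

    W_in, W_out: (d, d) input/output embedding matrices (learned in a
    real model; random below just to make the sketch runnable).
    """
    keys = memory @ W_in              # input memory representation
    values = memory @ W_out           # output memory representation
    scores = keys @ query             # (T,) match scores
    p = np.exp(scores - scores.max())
    p /= p.sum()                      # attention over memory slots
    return p @ values + query         # read vector plus residual query

# Toy usage with random tensors:
np.random.seed(0)
T, d = 5, 8
mem = form_memory([np.random.randn(d) for _ in range(T)])
out = memn2n_read(mem, np.random.randn(d),
                  np.random.randn(d, d), np.random.randn(d, d))
</source>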
Shiyue Zhang
  Last week:
  • found a bug in my code and fixed it;
  • tried memory with a gate and found a major problem with the memory (see the sketch after this list);
  • reran previous models; the results are not better than the baseline. [report]
  • reran the original model with the same seed and got exactly the same result;
  • published a TRP. [1]
  This week:
  • try to solve the memory problem.
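One plausible form of "memory with a gate", sketched in numpy: a sigmoid gate computed from the hidden state and the memory read decides how much memory content enters the state. The gate parameterization is an assumption, not Shiyue's model; note that a gate saturating near zero silently ignores the memory, which is one possible failure mode for such designs.

<source lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_memory_update(hidden, memory_read, W_g, b_g):
    """Gate the memory read before mixing it into the hidden state.

    hidden, memory_read: (d,) vectors
    W_g: (d, 2d) gate weights, b_g: (d,) gate bias (learned in a real model)
    The gate g in (0, 1) interpolates between keeping the hidden state
    and injecting the memory read.
    """
    g = sigmoid(W_g @ np.concatenate([hidden, memory_read]) + b_g)
    return (1.0 - g) * hidden + g * memory_read
</source>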
Guli
  Last week:
  • made no real progress during the first two days of the week;
  • modified the code and ran NMT on the fr-en data set.
  This week:
  • modify the code and run NMT on the ch-uy data set;
  • write a survey on Chinese-Uyghur MT.