Difference between revisions of "NLP Status Report 2016-11-21"

From cslt Wiki
Revision as of 01:09, 21 November 2016 (Mon)

Date: 2016/11/21

Yang Feng
  Last week:
  • rnng+mn
    • ran experiments of rnng+mn (report: http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f8/Progress_of_RNNG_with_memory_network.pdf)
    • used top-k for memory; still training
  • wrote a proposal for sequence-to-sequence+mn
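The "top-k for memory" step above can be read as attending over only the k best-matching memory slots instead of the whole memory. A minimal sketch of that idea, assuming dot-product scoring and a softmax read; `topk_memory_read` and all shapes are illustrative, not from the report:

```python
import numpy as np

def topk_memory_read(query, memory, k=5):
    # Score every memory slot against the query (dot product),
    # keep only the k highest-scoring slots, and return a
    # softmax-weighted read over just those slots.
    scores = memory @ query                      # shape: (n_slots,)
    top = np.argsort(scores)[-k:]                # indices of the k best slots
    w = np.exp(scores[top] - scores[top].max())  # numerically stable softmax
    w /= w.sum()
    return w @ memory[top]                       # read vector, shape: (dim,)

rng = np.random.default_rng(0)
memory = rng.standard_normal((100, 16))          # 100 slots of dimension 16
query = rng.standard_normal(16)
read = topk_memory_read(query, memory, k=5)
```

Restricting the softmax to k slots keeps the read sparse and cheap when the memory grows with the training data.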
Jiyuan Zhang
Andi Zhang
  Last week:
  • prepared a new data set for paraphrase, removing repetitions and most of the noise
  • ran NMT on the fr-en data set and the new paraphrase set
  • read through the source code to find ways to modify it
  • helped Guli run NMT on our server
  This week:
  • decide whether or not to drop Theano
  • start working on the code
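The data-set cleanup described above (removing repetitions and noisy pairs) can be sketched as a simple filter over (source, target) paraphrase pairs; `clean_paraphrase_pairs` and its filtering rules are illustrative guesses, not the actual criteria used:

```python
def clean_paraphrase_pairs(pairs):
    # Drop exact duplicate pairs ("repetition") and trivially noisy
    # ones (empty side, or source identical to target).
    seen = set()
    cleaned = []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        key = (src.lower(), tgt.lower())
        if key in seen:                          # repetition: seen before
            continue
        if not src or not tgt or src.lower() == tgt.lower():
            continue                             # noise: empty or identical
        seen.add(key)
        cleaned.append((src, tgt))
    return cleaned

pairs = [("a cat", "a feline"), ("a cat", "a feline"),
         ("hi", "hi"), ("", "x")]
result = clean_paraphrase_pairs(pairs)  # → [('a cat', 'a feline')]
```

A real pipeline would likely add length-ratio and language-ID filters on top of this.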
Shiyue Zhang
  Last week:
  • ran rnng with MKL successfully, which doubles or triples the speed
  • reran the original model and got the final result of 92.32
  • reran the wrong memory models; still running
  • implemented the dynamic memory model and got 92.54, which is 0.22 better than the baseline
  This week:
  • try another memory structure
  • try more models and summarize the results
  • publish the technical reports
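Running a build linked against MKL usually just needs the MKL/OpenMP thread pools pinned before launching training. A sketch with illustrative thread counts; the report does not say how MKL was actually configured:

```shell
# Pin the MKL and OpenMP thread pools; 4 is illustrative, tune per machine.
export MKL_NUM_THREADS=4
export OMP_NUM_THREADS=4
# Then launch the rnng training command as usual.
```

Leaving these unset lets MKL grab all cores, which can oversubscribe the machine when several experiments share it.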
Guli