NLP Status Report 2016-11-21

Latest revision as of 01:41, 21 November 2016 (Monday)

Date: 2016/11/21

Yang Feng
Last week:
  • rnng+mn: 1) ran experiments of rnng+mn [report: http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f8/Progress_of_RNNG_with_memory_network.pdf]; 2) used top-k selection for the memory, still under training (a top-k attention sketch follows this block)
  • sequence-to-sequence + mn: 1) wrote the proposal; 2) discussed the details with Andy
  • intern interview
  • Huilan's work
This week:
  • rnng+mn: 1) get the top-k result; 2) try a bigger memory
  • sequence-to-sequence + mn: coding work
  • Huilan's work: try syntax-based TM
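The report does not spell out how the top-k memory in rnng+mn is implemented, so the following is only a minimal sketch: it assumes the memory is a set of slot vectors scored against a query (state) vector, with attention renormalised over the k best-scoring slots. All names, shapes, and the dot-product scoring are illustrative assumptions, not the group's code.

  import numpy as np

  def softmax(x):
      # numerically stable softmax over the last axis
      x = x - np.max(x, axis=-1, keepdims=True)
      e = np.exp(x)
      return e / np.sum(e, axis=-1, keepdims=True)

  def topk_memory_read(query, memory, k):
      """Attend over only the k highest-scoring memory slots.

      query  : (d,)   current parser/decoder state (assumed)
      memory : (n, d) memory slot vectors (assumed)
      k      : number of slots kept for attention
      Returns the read vector (d,) and the indices of the chosen slots.
      """
      scores = memory @ query            # dot-product relevance, shape (n,)
      top = np.argsort(scores)[-k:]      # indices of the k best slots
      weights = softmax(scores[top])     # renormalise over the kept slots only
      read = weights @ memory[top]       # weighted sum of the kept slots
      return read, top

  # toy usage: 8 slots of dimension 4, keep the top 3
  rng = np.random.default_rng(0)
  read, chosen = topk_memory_read(rng.normal(size=4), rng.normal(size=(8, 4)), k=3)
  print(chosen, read.shape)

Keeping only the top-k slots makes the read sparse and cheaper as the memory grows, which is a common motivation for this kind of scheme; the planned "bigger memory" experiment mainly changes n in this sketch.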

Jiyuan Zhang
Last week:
  • ran the decoder-memory model, but the effect is not obvious
  • changed the way the memory and attention models are bound; the model can now generate different styles of poetry
  • cleaned up my code
  • wrote a tech report about poemGen [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Atten-memory-poetry.pdf]
  • submitted two databases, for poemGen and musicGen
This week:
  • explore a variety of ways to bind the memory and attention models (two illustrative binding variants are sketched below)
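The report does not describe the binding ways themselves; as a purely illustrative sketch, here are two common forms such a binding could take, concatenation through a projection and a learned gate. Matrix names and shapes are assumptions, not the poemGen code.

  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def bind_concat(attn_ctx, mem_read, W):
      # Variant 1: concatenate the attention context and the memory read,
      # then project back to the model dimension; W has shape (d, 2d).
      return np.tanh(W @ np.concatenate([attn_ctx, mem_read]))

  def bind_gate(attn_ctx, mem_read, w_g):
      # Variant 2: a scalar gate decides how much of the memory read is
      # mixed into the attention context; w_g has shape (2d,).
      g = sigmoid(w_g @ np.concatenate([attn_ctx, mem_read]))
      return g * attn_ctx + (1.0 - g) * mem_read

  # toy usage with model dimension d = 4
  rng = np.random.default_rng(1)
  d = 4
  attn_ctx, mem_read = rng.normal(size=d), rng.normal(size=d)
  print(bind_concat(attn_ctx, mem_read, rng.normal(size=(d, 2 * d))))
  print(bind_gate(attn_ctx, mem_read, rng.normal(size=2 * d)))

The gated variant lets the model turn the memory contribution up or down per step, which is one plausible way a change of binding could shift the generated style.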
Andi Zhang
Last week:
  • prepared a new data set for paraphrase; removed repetitions and most of the noise (a simple filtering sketch follows this block)
  • ran NMT on the fr-en data set and the new paraphrase set
  • read through the source code to find ways to modify it
  • helped Guli run NMT on our server
This week:
  • decide whether or not to drop Theano
  • start to work on the code
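How the repetitions and noise were removed is not described in the report; the sketch below shows one simple filtering pass over an assumed tab-separated "source<TAB>target" file, dropping malformed lines, empty or identical sides, over-long pairs, and exact duplicates. The file layout and the length threshold are assumptions.

  def clean_paraphrase_file(in_path, out_path, max_len=100):
      """Drop duplicate and noisy pairs from a tab-separated paraphrase corpus."""
      seen = set()
      kept = 0
      with open(in_path, encoding="utf-8") as fin, \
           open(out_path, "w", encoding="utf-8") as fout:
          for line in fin:
              parts = line.rstrip("\n").split("\t")
              if len(parts) != 2:
                  continue                      # malformed line
              src, tgt = (p.strip() for p in parts)
              if not src or not tgt:
                  continue                      # one side is empty
              if src == tgt:
                  continue                      # trivial "paraphrase"
              if len(src.split()) > max_len or len(tgt.split()) > max_len:
                  continue                      # overly long, likely noise
              if (src, tgt) in seen:
                  continue                      # exact repetition
              seen.add((src, tgt))
              fout.write(src + "\t" + tgt + "\n")
              kept += 1
      return kept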
Shiyue Zhang
Last week:
  • ran rnng on MKL successfully, which can double or triple the speed; revised the RNNG User Guide
  • reran the original model and got the final result 92.32
  • reran the wrong memory models, still running
  • implemented the dynamic memory model and got the result 92.54, which is 0.22 better than the baseline
  • tried another structure of memory [report: http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf]
This week:
  • try more different models and summarize the results (an illustrative dynamic-memory sketch follows this block)
  • publish the technical reports
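The report gives the dynamic memory model's score (92.54 vs. the 92.32 baseline) but not its structure. One plausible reading of "dynamic" is a memory that grows during parsing, with each completed constituent's vector written in and read back by attention; the sketch below illustrates only that idea. Every name, shape, and the write rule here is an assumption, not the actual RNNG+mm design.

  import numpy as np

  def softmax(x):
      x = x - np.max(x)
      e = np.exp(x)
      return e / e.sum()

  class DynamicMemory:
      """Grow-as-you-parse memory: write vectors in as they are produced,
      read them back with dot-product attention (illustrative only)."""

      def __init__(self, dim):
          self.dim = dim
          self.slots = []                 # starts empty and grows dynamically

      def write(self, vec):
          self.slots.append(np.asarray(vec, dtype=float))

      def read(self, query):
          if not self.slots:
              return np.zeros(self.dim)   # nothing stored yet
          mem = np.stack(self.slots)      # (n, dim)
          weights = softmax(mem @ query)  # attention over everything written so far
          return weights @ mem

  # toy usage: write three vectors, then read with a query
  rng = np.random.default_rng(2)
  mem = DynamicMemory(dim=4)
  for _ in range(3):
      mem.write(rng.normal(size=4))
  print(mem.read(rng.normal(size=4)))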
Guli
Last week:
  • read the paper "NMT by jointly learning to align and translate" (its additive attention is sketched at the end of this report)
  • read the code of the paper and ran NMT (cs-en) on the GPU with Andi's help
  • learned more about Python
  • prepared data for the Ontology Library
This week:
  • continue to prepare the data
  • follow Yang's instructions
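For reference while reading "NMT by jointly learning to align and translate" (Bahdanau et al.), the sketch below reproduces the paper's additive attention: alignment scores e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j), softmax-normalised into weights, and a context vector formed as the weighted sum of the encoder annotations. The formula is the paper's; the variable shapes and toy data below are illustrative.

  import numpy as np

  def softmax(x):
      x = x - np.max(x)
      e = np.exp(x)
      return e / e.sum()

  def additive_attention(s_prev, H, W_a, U_a, v_a):
      """Bahdanau-style additive attention.

      s_prev : (d_dec,)       previous decoder state s_{i-1}
      H      : (T, d_enc)     encoder annotations h_1 .. h_T
      W_a    : (d_att, d_dec), U_a : (d_att, d_enc), v_a : (d_att,)
      Returns the context vector c_i and the alignment weights alpha_i.
      """
      scores = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in H])
      alpha = softmax(scores)            # alignment weights over source positions
      context = alpha @ H                # weighted sum of encoder annotations
      return context, alpha

  # toy usage: 5 source positions, encoder dim 6, decoder dim 4, attention dim 3
  rng = np.random.default_rng(3)
  T, d_enc, d_dec, d_att = 5, 6, 4, 3
  context, alpha = additive_attention(
      rng.normal(size=d_dec), rng.normal(size=(T, d_enc)),
      rng.normal(size=(d_att, d_dec)), rng.normal(size=(d_att, d_enc)),
      rng.normal(size=d_att))
  print(alpha.round(3), context.shape)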