NLP Status Report 2017-1-3


Latest revision as of 07:02, 3 January 2017

{|
!Date !! People !! Last Week !! This Week
|-
| rowspan="6"|2017/1/3
|Yang Feng ||
*[[nmt+mn:]] tried to improve the NMT baseline;
*met with problems on the baseline; ruled out output order and file format as factors and traced the cause to the learning rate;
*read Andy's code;
*wrote the code for BLEU evaluation (see the BLEU sketch after the table);
*managed to fix the nmt+mn code;
*ran experiments [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/50/Nmt_mn_report.pdf report]]
||
*[[nmt+mn:]] do further experiments.
|-
|Jiyuan Zhang ||
*improved the speed of the prediction process
*ran experiments:<br/>
two-style experiments of top1_memory_model<br/>
overfitting experiments of top1_memory_model<br/>
two-style experiments of average_memory_model<br/>
overfitting experiments of average_memory_model
||
*improve the poem model
|-
|Andi Zhang ||
*handed in the previous code to Mrs. Feng
*helped Jiyuan gather poems about tianyuan
||
*help Jiyuan with his work
*gather more poems
|-
|Shiyue Zhang ||
*tried to improve the rnng+mm model, but still failed [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9f/RNNG%2Bmm_experiment_report.pdf report]]
*stopped rnng work to help Teacher Feng with NMT
*ran the Theano NMT code successfully, and found a problem with testing
*read 3 papers [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/92/DEEP_BIAFFINE_ATTENTION_FOR_NEURAL_DEPENDENCY_PARSING.pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/aa/Simple_and_Accurate_Dependency_Parsing_Using_Bidirectional_LSTM_Feature_Representations.pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fb/Bi-directional_Attention_with_Agreement_for_Dependency_Parsing.pdf]]
*trying the joint training, which ran into an optimization problem
||
*run and test the Theano NMT model
*try to modify the TensorFlow NMT model, then run and test it
|-
|Guli ||
*ran NMT with monolingual data
*BLEU computation
*learned about TensorFlow
||
*improve my paper
*analyze experiment results
|-
|Peilun Xiao ||
*learned the tf-idf algorithm
*coded the tf-idf algorithm in Python, but found it did not work well
*tried to use a small dataset to test the program
||
*use sklearn's tf-idf to test the dataset (see the tf-idf sketch after the table)
|}
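
Two rows above mention BLEU scoring (Yang Feng's evaluation code and Guli's BLEU computation). As a point of reference only, here is a minimal corpus-level BLEU sketch using NLTK; it is not the group's actual script, and the file names and whitespace tokenization are assumptions made for illustration.

<syntaxhighlight lang="python">
# Minimal corpus-level BLEU sketch (illustrative; the file names below are hypothetical).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def read_tokenized(path):
    """Read one sentence per line and split it on whitespace."""
    with open(path, encoding="utf-8") as f:
        return [line.strip().split() for line in f]

# NMT output, one tokenized sentence per line.
hypotheses = read_tokenized("nmt_output.tok")
# BLEU allows several references per sentence; here we assume a single reference file.
references = [[ref] for ref in read_tokenized("reference.tok")]

# Smoothing keeps sentences with no higher-order n-gram matches from zeroing the score.
score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print("Corpus BLEU: {:.2f}".format(100 * score))
</syntaxhighlight>

For comparison with published NMT numbers, the Moses multi-bleu.perl script is the more common choice; the NLTK call above is only the shortest self-contained way to show the computation.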
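
Peilun Xiao's row mentions hand-coding tf-idf in Python and then checking it against scikit-learn. Below is a minimal sketch of the scikit-learn side; the toy documents are invented for illustration and the snippet is not his actual program.

<syntaxhighlight lang="python">
# Minimal tf-idf sketch with scikit-learn (toy documents; not the real dataset or code).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "neural machine translation with a memory network",
    "memory networks for machine translation",
    "poem generation with a memory model",
]

vectorizer = TfidfVectorizer()           # defaults: lowercasing, word tokenization, l2-normalized rows
tfidf = vectorizer.fit_transform(docs)   # sparse matrix: one row per document, one column per vocabulary term

print(tfidf.shape)                       # (3, vocabulary size)
# Terms that occur in every document (e.g. "memory") get a low idf and hence a low weight,
# while terms confined to a single document (e.g. "poem") are weighted up.
</syntaxhighlight>

Checking a hand-rolled implementation against this output quickly exposes the usual mismatches: scikit-learn smooths the idf term and l2-normalizes each row, which differs from the plain textbook formula.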