NLP Status Report 2017-1-3
From cslt Wiki
 
Revision as of 05:46, 3 January 2017

Date: 2016/12/26

Yang Feng
  Last Week:
  • s2smn: tried to improve the NMT baseline;
  • read Andy's code;
  • wrote the code for the BLEU test;
  • finished the code of nmt+mn;
  • ran experiments.
  This Week:
  • s2smn: do further experiments.
  • rnng+mn: try to find the problem.
Jiyuan Zhang
  Last Week:
  • integrated tone_model into attention_model to replace the manual rule, but the effect wasn't good
  • replaced the all_pz rule with half_pz
  • took classical Chinese as input and generated a poem [1]
  This Week:
  • improve the poem model
Andi Zhang
  Last Week:
  • coded to output the encoder outputs and the corresponding source & target sentences (ids in the dictionaries)
  • coded a script for BLEU scoring, which tests the five checkpoints automatically created by the training process and saves the one with the best performance
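The checkpoint-selection script above boils down to scoring each checkpoint's output with BLEU and keeping the best one. A minimal sketch of that loop, assuming a self-contained BLEU (clipped n-gram precision with a brevity penalty) and made-up checkpoint names rather than the actual script:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions times a brevity penalty."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        prec = overlap / total
        if prec == 0:  # smooth zero precisions to avoid log(0)
            prec = 1e-9
        log_prec_sum += math.log(prec) / max_n
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_prec_sum)

# pick the checkpoint whose output scores highest against the reference
reference = "the cat sat on the mat".split()
checkpoint_outputs = {  # hypothetical checkpoint names and outputs
    "ckpt-1": "the cat sat on the mat".split(),
    "ckpt-2": "a cat is on a mat".split(),
}
best = max(checkpoint_outputs, key=lambda k: bleu(checkpoint_outputs[k], reference))
```

In the real script the candidates would be each checkpoint's decoded test set, scored with corpus-level BLEU rather than a single sentence.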
Shiyue Zhang
  Last Week:
  • tried to add true action info when training the gate, which got better results than without true actions, but still not very good
  • tried different scale vectors, and found that setting >= -5000 works well
  • tried changing cosine similarity to just the inner product, and the inner product works better than cosine
  • [report]
  • read 3 papers [[2]] [[3]] [[4]]
  • tried the joint training, which ran into an optimization problem
  This Week:
  • try the joint training
  • read more papers and write a summary
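The cosine-versus-inner-product comparison above differs only by length normalization: cosine divides the inner product by the two vector norms, so it measures direction alone, while the raw inner product also grows with vector magnitude. A tiny illustration with made-up vectors:

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # cosine similarity = inner product divided by the product of the norms
    return inner(u, v) / (math.sqrt(inner(u, u)) * math.sqrt(inner(v, v)))

u = [3.0, 4.0]
v = [6.0, 8.0]  # same direction as u, twice the length
print(cosine(u, v))  # 1.0: identical direction, magnitude ignored
print(inner(u, v))   # 50.0: also scales with the vectors' lengths
```

Dropping the normalization lets the score carry magnitude information, which may be why the plain inner product behaved differently when training the gate.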
Guli
  Last Week:
  • finished the first draft of the survey
  This Week:
  • voice tagging
  • morpheme-based NMT
  • improve NMT with monolingual data
Peilun Xiao
  Last Week:
  • learned the tf-idf algorithm
  • coded the tf-idf algorithm in Python, but found it didn't work well
  • tried to use a small dataset to test the program
  This Week:
  • use sklearn's tf-idf to test the dataset
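The tf-idf computation mentioned above can be sketched in a few lines. This follows the standard tf × idf definition with a smoothed idf (the same smoothing sklearn uses by default); the toy corpus is made up and this is not Peilun's actual script:

```python
import math
from collections import Counter

def tfidf(docs):
    """Return a list of {term: tf-idf weight} dicts, one per tokenized document."""
    n = len(docs)
    # document frequency: how many documents contain each term
    df = Counter(t for doc in docs for t in set(doc))
    # smoothed idf: log((1 + n) / (1 + df)) + 1
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return out

docs = [["nmt", "model", "model"], ["nmt", "memory"]]
weights = tfidf(docs)
# "model" is frequent in doc 0 and rare in the corpus, so it outweighs "nmt"
```

sklearn's TfidfVectorizer additionally L2-normalizes each document vector, which is one common reason a hand-rolled version and sklearn's disagree on small datasets.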