NLP Status Report 2016-12-26


Date: 2016/12/26

Yang Feng
  Last week:
  • s2smn: read six papers to fix the details of our model
  • wrote the proposal for the lexical memory and discussed the details with Teacher Wang
  • finished coding the variant that adds attention only to the decoder; now debugging (a generic sketch of decoder-side attention follows below)
  This week:
  • refine the Moses manual [manual]
  • prepare the dictionary for memory loading
  • Huilan: documentation
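
The s2smn code itself is not in this report; the following is only a generic numpy sketch of what "adding attention to the decoder" usually means (Luong-style dot-product attention over the encoder outputs, computed at each decoder step).

    # Generic Luong-style dot-product attention computed on the decoder side;
    # purely illustrative, not the actual s2smn code.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def decoder_attention(encoder_outputs, decoder_state):
        """encoder_outputs: (T, d); decoder_state: (d,)."""
        scores = encoder_outputs @ decoder_state   # (T,) alignment scores
        weights = softmax(scores)                  # attention distribution over source positions
        context = weights @ encoder_outputs        # (d,) context vector
        # in a real decoder the context is combined with the decoder state
        # (e.g. concatenated and fed through a layer) before predicting the next word
        return context, weights

    # toy usage: 5 source positions, hidden size 8
    H = np.random.randn(5, 8)
    s = np.random.randn(8)
    context, attn = decoder_attention(H, s)
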
Jiyuan Zhang
  Last week:
  • integrated the tone_model into the attention_model to replace the manual rule, but the effect wasn't good (a sketch of the kind of manual tone rule involved follows below)
  • replaced the all_pz rule with the half_pz rule
  • took a classical Chinese text as input and generated a poem from it [1]
  This week:
  • improve the poem model
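
The tone_model itself is not shown in the report; the hypothetical snippet below only illustrates the kind of manual ping/ze (tone) rule it is intended to replace: masking out wrong-tone characters at tone-constrained positions during decoding. All names here are made up.

    # Hypothetical illustration of a manual ping/ze rule applied at decoding time:
    # at tone-constrained positions, characters of the wrong tone class are masked
    # out of the next-character distribution.
    import numpy as np

    def apply_tone_rule(logits, vocab_tones, required_tone):
        """logits: (V,) scores over the character vocabulary;
        vocab_tones: (V,) with 0 = ping, 1 = ze for each character;
        required_tone: 0, 1, or None for an unconstrained position."""
        if required_tone is None:
            return logits
        masked = logits.copy()
        masked[vocab_tones != required_tone] = -np.inf   # forbid wrong-tone characters
        return masked

    # toy usage: 6-character vocabulary, current position requires a ze (1) character
    logits = np.random.randn(6)
    tones = np.array([0, 1, 0, 1, 1, 0])
    next_char_id = int(np.argmax(apply_tone_rule(logits, tones, required_tone=1)))
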
Andi Zhang
  Last week:
  • coded the model to output the encoder outputs and the corresponding source and target sentences (ids in the dictionaries)
  • coded a script for BLEU scoring, which tests the five checkpoints automatically saved during training and keeps the one with the best performance (see the sketch below)
  This week:
  • extract the encoder outputs
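
A hedged sketch of the checkpoint-selection idea behind the BLEU script (not the actual script): decode a dev set with each saved checkpoint, score it with corpus BLEU, and keep the best one. The decode_fn callable is a placeholder for the toolkit's real decoding call.

    # Sketch of checkpoint selection by dev-set BLEU; decode_fn stands in for the
    # NMT toolkit's decoding call and is supplied by the caller.
    from nltk.translate.bleu_score import corpus_bleu

    def select_best_checkpoint(ckpt_paths, decode_fn, src_sentences, ref_sentences):
        """decode_fn(ckpt_path, src_sentences) -> list of translated strings."""
        references = [[r.split()] for r in ref_sentences]   # one reference per source sentence
        best_bleu, best_ckpt = -1.0, None
        for ckpt in ckpt_paths:
            hypotheses = [h.split() for h in decode_fn(ckpt, src_sentences)]
            bleu = corpus_bleu(references, hypotheses)       # corpus-level BLEU
            if bleu > best_bleu:
                best_bleu, best_ckpt = bleu, ckpt
        return best_ckpt, best_bleu

    # toy usage with a dummy "decoder" that just echoes the source
    src = ["the cat sat on the mat", "a dog ran in the park"]
    ref = ["the cat sat on the mat", "a dog ran in the park"]
    best, score = select_best_checkpoint(["ckpt-1", "ckpt-2"], lambda c, s: list(s), src, ref)
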
Shiyue Zhang
  Last week:
  • tried adding the true action info when training the gate, which gave better results than without true actions, but still not very good
  • tried different scale vectors and found that a setting >= -5000 works well
  • tried changing the cosine score to a plain inner product; the inner product works better than cosine (see the sketch below)
  • [report]
  • read 3 papers [[2]] [[3]] [[4]]
  • working on the joint training, which ran into an optimization problem
  This week:
  • try the joint training
  • read more papers and write a summary
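
A minimal sketch of the two similarity scores being compared (cosine vs. plain inner product); the actual gate/memory model is not shown in the report.

    # The two scores being compared: cosine normalizes away vector length,
    # a plain inner product keeps the magnitude information.
    import numpy as np

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    def inner_product(a, b):
        return float(a @ b)

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([2.0, 0.5, 1.0])
    print(cosine(a, b), inner_product(a, b))   # same direction info, different scaling
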
Guli
  Last week:
  • finished the first draft of the survey
  • voice tagging
  This week:
  • morpheme-based NMT
  • improve NMT with monolingual data

Peilun Xiao
  Last week:
  • learned the tf-idf algorithm
  • coded the tf-idf algorithm in Python, but found it did not work well
  • tried a small dataset to test the program
  This week:
  • use sklearn's tf-idf implementation to test the dataset (see the sketch below)
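
A minimal sketch of testing a small dataset with scikit-learn's TfidfVectorizer, as a cross-check against the hand-written implementation; the example documents are made up.

    # Tf-idf on a tiny made-up dataset with scikit-learn, as a cross-check
    # against a hand-written implementation.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs are pets",
    ]
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)          # sparse (n_docs, n_terms) matrix
    print(vectorizer.get_feature_names_out())       # vocabulary in column order
    print(tfidf.toarray().round(2))                 # per-document tf-idf weights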