Difference between revisions of "NLP Status Report 2017-7-10"

From cslt Wiki
Line 9:

|-
|Aodong LI ||
Previous revision (last week):
* Tried seq2seq with and without an attention model on the style-transfer (cross-domain) task, but this did not work due to overfitting
  seq2seq with attention model: Chinese-to-English
  vanilla seq2seq model: English-to-English (unsupervised)
* Read two style-controlled papers in the generative-model field
* Trained the seq2seq with style code model
Current revision (last week):
* Tried a seq2seq with style code model, but it did not work.
* Coding an attention-based seq2seq NMT in shallow fusion with a language model.
||
Previous revision (this week):
* Understand the model and mechanism described in the two related papers
* Figure out new ways to do the style-transfer task
Current revision (this week):
* Complete the coding and try it out.
* Find more monolingual corpora and upgrade the model.

|-
|Shiyue Zhang ||

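The "seq2seq with style code" model mentioned above is commonly implemented by prepending a reserved style token to the source sentence, in the spirit of the target-language tokens used in multilingual NMT. A minimal sketch under that assumption (the token names and the `add_style_code` helper are illustrative, not the report's actual code):

```python
# Sketch of style conditioning via a "style code": prepend a reserved
# token to the tokenized source so the encoder can learn style-dependent
# representations. Token names (<formal>, <informal>) are assumptions.

def add_style_code(tokens, style):
    """Prepend a style-code token to a tokenized source sentence."""
    return ["<{}>".format(style)] + tokens

src = ["how", "are", "you"]
print(add_style_code(src, "formal"))  # ['<formal>', 'how', 'are', 'you']
```

The same trained model can then be steered at decoding time simply by switching the style token on the input.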
Revision as of 04:53, 10 July 2017 (Mon)

Date People Last Week This Week
2017/7/3 Jiyuan Zhang
  Last week:
  • Reproduced the couplet model using Moses
  This week:
  • Continue to modify the couplet model
Aodong LI
  Last week:
  • Tried a seq2seq with style code model, but it did not work.
  • Coding an attention-based seq2seq NMT in shallow fusion with a language model.
  This week:
  • Complete the coding and try it out.
  • Find more monolingual corpora and upgrade the model.
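Shallow fusion, as referenced in the row above, typically combines the NMT decoder's next-token scores with an external language model at each decoding step: score(y) = log p_NMT(y|x) + β·log p_LM(y). A minimal sketch under that assumption (toy distributions; the β value is hypothetical):

```python
import math

def shallow_fusion_step(nmt_logprobs, lm_logprobs, beta=0.1):
    """Fuse next-token scores: log p_nmt(y|x) + beta * log p_lm(y).

    Tokens the LM assigns no mass get -inf, so they are never chosen."""
    return {tok: lp + beta * lm_logprobs.get(tok, -math.inf)
            for tok, lp in nmt_logprobs.items()}

# Toy next-token log-probability distributions over a 3-word vocabulary.
nmt = {"cat": math.log(0.5), "dog": math.log(0.3), "car": math.log(0.2)}
lm  = {"cat": math.log(0.2), "dog": math.log(0.7), "car": math.log(0.1)}

fused = shallow_fusion_step(nmt, lm, beta=0.5)
best = max(fused, key=fused.get)  # LM evidence flips the choice to "dog"
```

At β = 0 this reduces to plain NMT decoding; in practice β is tuned on a development set, and the fused score replaces the NMT score inside beam search.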
Shiyue Zhang
Shipan Ren
  Last week:
  • Read and ran the ViVi_NMT code
  • Read the TensorFlow API documentation
  • Debugged ViVi_NMT and upgraded the code to TensorFlow 1.0
  • Found that the new version saves time, has lower complexity, and achieves better BLEU than before
  This week:
  • Test the two code versions on small and large Chinese-English datasets respectively
  • Test the two code versions on the WMT 2014 English-German and WMT 2014 English-French parallel datasets respectively
  • Record the experimental results
  • Read papers and try to improve the BLEU score
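BLEU, used above to compare the two code versions, is the geometric mean of modified n-gram precisions multiplied by a brevity penalty. A minimal sentence-level sketch (real evaluations use corpus-level BLEU up to 4-grams, usually with smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Minimal sentence-level BLEU: geometric mean of modified n-gram
    precisions up to max_n, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())      # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

cand = "the cat sat on the mat".split()
ref  = "the cat is on the mat".split()
score = bleu(cand, ref)  # p1 = 5/6, p2 = 3/5, BP = 1
```

The brevity penalty keeps a system from scoring well with very short outputs, since precision alone would favor them.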