Difference between revisions of "NLP Status Report 2017-7-3"

From cslt Wiki

Latest revision as of 04:07, 3 July 2017 (Monday)

Date: 2017/7/3

Jiyuan Zhang

Aodong LI
Last week:
  • Tried seq2seq, with and without attention, on the style-transfer (cross-domain) task, but it did not work due to overfitting:
 seq2seq with attention: Chinese-to-English
 vanilla seq2seq: English-to-English (unsupervised)
  • Read two papers on style-controlled generation from the generative-model literature
  • Trained a seq2seq model with a style code
This week:
  • Understand the models and mechanisms described in the two related papers
  • Work out new approaches to the style-transfer task
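The attention mechanism mentioned above can be sketched as minimal dot-product (Luong-style) attention in plain Python. This is an illustrative sketch only; the function name, toy dimensions, and data are assumptions, not the actual seq2seq/ViVi code used in these experiments.

```python
import math

def dot_product_attention(decoder_state, encoder_states):
    """Minimal dot-product attention sketch.

    decoder_state:  list of floats, the current decoder hidden state
    encoder_states: list of hidden-state vectors, one per source token
    Returns (weights, context): softmax attention weights over the
    source positions and their weighted sum (the context vector).
    """
    # Alignment scores: dot product of the decoder state with each encoder state
    scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]
    # Softmax over the scores (subtract the max for numerical stability)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Context vector: attention-weighted sum of encoder states
    dim = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

# Toy example: 3 source positions, hidden size 2
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec = [1.0, 0.0]
w, c = dot_product_attention(dec, enc)
```

The decoder would concatenate this context vector with its own state before predicting the next target word; the attention weights give a soft alignment over the source sentence.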
Shiyue Zhang
Shipan Ren
Last week:
  • Read and ran the ViVi_NMT code
  • Read the TensorFlow API documentation
  • Debugged ViVi_NMT and upgraded the code to TensorFlow 1.0
  • Found that the new version trains faster, has lower complexity, and achieves a better BLEU score than before
This week:
  • Test the two versions of the code on small and large Chinese-English datasets respectively
  • Test the two versions on the WMT 2014 English-German and WMT 2014 English-French parallel datasets respectively
  • Record the experimental results
  • Read papers and try to improve the BLEU score
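Since the goal above is to improve BLEU, the metric's core idea can be sketched as sentence-level clipped n-gram precision with a brevity penalty. This is a simplified toy illustration (up to bigrams, single reference), not the official multi-reference corpus BLEU used for reporting WMT results.

```python
import math
from collections import Counter

def simple_bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions up to max_n, times a brevity penalty.
    Takes token lists; a toy sketch, not a standard implementation."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # Clipped counts: each candidate n-gram is credited at most
        # as many times as it appears in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
perfect = simple_bleu(ref, ref)                    # identical output scores 1.0
worse = simple_bleu("the cat on mat".split(), ref) # shorter, partial match
```

Real evaluations should use a standard corpus-level implementation so scores are comparable across papers.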