Difference between revisions of "NLP Status Report 2017-7-3"
From cslt Wiki
{| class="wikitable"
!Date !! People !! Last Week !! This Week
|-
| rowspan="6"|2017/7/3
|Jiyuan Zhang ||
||
|-
|Aodong LI ||
* Tried seq2seq with and without attention for the style-transfer (cross-domain) task, but it did not work due to overfitting
** seq2seq with attention: Chinese-to-English
** vanilla seq2seq: English-to-English (unsupervised)
* Read two papers on style-controlled generation
* Trained a seq2seq model with a style code
||
* Understand the models and mechanisms described in the two related papers
* Figure out new ways to approach the style-transfer task
|-
|Shiyue Zhang ||
||
|-
|Shipan Ren ||
* Read and ran the ViVi_NMT code
* Read the TensorFlow API
* Debugged ViVi_NMT and upgraded the code to TensorFlow 1.0
* Found that the new version saves training time, has lower complexity, and achieves a better BLEU score than before
||
* Test both code versions on small and large Chinese-English datasets
* Test both code versions on the WMT 2014 English-German and the WMT 2014 English-French parallel datasets
* Record the experimental results
* Read papers and try to improve the BLEU score
|-
|}
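The report does not say how the "style code" is injected into the seq2seq model. One common approach, used here purely as an illustrative assumption, is the control-token trick from multilingual NMT: prepend a special token to the source sequence so the encoder conditions generation on the desired style. A minimal sketch (the token format `<2style>` is hypothetical):

```python
def add_style_code(src_tokens, style):
    """Prepend a style-control token to the source sequence.

    The encoder sees the token as an ordinary vocabulary item, so the
    model can learn to associate it with the target style. This mirrors
    the target-language token trick in multilingual NMT; the actual
    mechanism used in the report may differ.
    """
    return [f"<2{style}>"] + list(src_tokens)

# Example: condition a Chinese source sentence on a "poem" style.
styled = add_style_code(["今天", "天气", "很好"], "poem")
```

The appeal of this design is that it requires no architectural change: only the training data is modified, and the same encoder-decoder handles all styles.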
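The BLEU comparisons above can be illustrated with a minimal sentence-level BLEU sketch (single reference, modified n-gram precisions up to 4-grams, with brevity penalty). This is an assumption for illustration only; the report does not state which implementation was used, and real NMT evaluations typically rely on a standard script such as multi-bleu.perl:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference:
    geometric mean of modified n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalize candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))
    return bp * math.exp(log_avg)
```

A perfect match scores 1.0, and any sentence sharing no n-grams with the reference scores 0.0, which makes the metric easy to sanity-check on toy inputs.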
Latest revision as of 04:07, 3 July 2017 (Mon)