Difference between revisions of "NLP Status Report 2017-7-3"
From cslt Wiki
Latest revision as of 04:07, 3 July 2017

| Date | People | Last Week | This Week |
|---|---|---|---|
| 2017/7/3 | Jiyuan Zhang | | |
| | Aodong LI | * Tried seq2seq with and without attention for the style transfer (cross-domain) task, but it did not work due to overfitting: seq2seq with attention for Chinese-to-English, and a vanilla seq2seq for English-to-English (unsupervised)<br>* Read two style-controlled papers in the generative-model field<br>* Trained a seq2seq with style code model (a minimal sketch follows the table) | * Understand the model and mechanism described in the two related papers<br>* Figure out new ways to do the style transfer task |
| | Shiyue Zhang | | |
| | Shipan Ren | | |
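The "seq2seq with style code" item presumably refers to conditioning the decoder on a learned style embedding. The sketch below is only an illustration of that general idea under that assumption, not the model from the report; the layer sizes, style labels, and toy inputs are all hypothetical.

```python
# Hypothetical sketch: a GRU seq2seq whose decoder input is concatenated with a
# learned "style code" embedding. Not the report's actual model; all names and
# hyper-parameters here are made up for illustration.
import torch
import torch.nn as nn

class StyleSeq2Seq(nn.Module):
    def __init__(self, vocab_size, n_styles, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_size, emb_dim)
        self.tgt_emb = nn.Embedding(vocab_size, emb_dim)
        # One embedding per style; it is concatenated to every decoder input step.
        self.style_emb = nn.Embedding(n_styles, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim * 2, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt_in, style):
        # src: (B, S) source token ids; tgt_in: (B, T) shifted target ids;
        # style: (B,) integer style codes.
        _, h = self.encoder(self.src_emb(src))       # final encoder state (1, B, H)
        s = self.style_emb(style).unsqueeze(1)       # (B, 1, E)
        s = s.expand(-1, tgt_in.size(1), -1)         # repeat over decoder time steps
        dec_in = torch.cat([self.tgt_emb(tgt_in), s], dim=-1)
        dec_out, _ = self.decoder(dec_in, h)         # decoder initialized with encoder state
        return self.out(dec_out)                     # (B, T, vocab) logits

# Toy usage with random ids, just to show the shapes; a real run would use
# parallel text paired with style labels (e.g. two target styles).
model = StyleSeq2Seq(vocab_size=1000, n_styles=2)
src = torch.randint(0, 1000, (4, 7))
tgt_in = torch.randint(0, 1000, (4, 9))
style = torch.tensor([0, 1, 0, 1])
logits = model(src, tgt_in, style)
print(logits.shape)  # torch.Size([4, 9, 1000])
```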