Difference between revisions of "Schedule"
From cslt Wiki
(→Daily Report)
Line 438:
* the best result of model with 40 batch size and with add(attn_1, attn_2) is 30.52
|-
− | | rowspan="1"|2017/06/23 | + | | rowspan="1"|2017/06/05 |
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/06
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/07
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/08
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/09
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/12
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/13
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/14
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ |-
+ | rowspan="1"|2017/06/15
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ * Read a paper about MT involving grammar
+ |-
+ | rowspan="1"|2017/06/16
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Prepare for APSIPA paper
+ * Read a paper about MT involving grammar
+ |-
+ | rowspan="1"|2017/06/19
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Completed the APSIPA paper
+ * Took on a new task in style translation
+ |-
+ | rowspan="1"|2017/06/20
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Tried synonym substitution
+ |-
+ | rowspan="1"|2017/06/21
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Tried a post-edit via synonym substitution, but it didn't work
+ |-
+ | rowspan="1"|2017/06/22
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Trained a GRU language model to determine similar words
+ |-
+ | rowspan="2"|2017/06/23
|Shipan Ren || 10:00 || 21:00 || 11 ||
* read a neural machine translation paper
* read and ran the tf_translate code
|-
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Trained a GRU language model to determine similar words
+ * This didn't work because the semantics are not captured
+ |-
− | rowspan="1"|2017/06/26
+ | rowspan="2"|2017/06/26
|Shipan Ren || 10:00 || 21:00 || 11 ||
* read paper: LSTM Neural Networks for Language Modeling
* read and ran the ViVi_NMT code
|-
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Tried to figure out new ways to change the text style
+ |-
− | rowspan="1"|2017/06/27
+ | rowspan="2"|2017/06/27
|Shipan Ren || 10:00 || 20:00 || 10 ||
* read the TensorFlow API
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0
|-
− | | rowspan=" | + | |Aodong Li || 10:00 || 19:00 || 8 || |
+ | * Trained seq2seq model to solve this problem | ||
+ | * Semantics are stored in fixed-length vectors by a encoder and a decoder generate sequences on this vector | ||
+ | |- | ||
+ | | rowspan="2"|2017/06/28 | ||
|Shipan Ren || 10:00 || 19:00 || 9 ||
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on the server)
* installed TensorFlow 0.1 and TensorFlow 1.0 on my PC and debugged ViVi_NMT
|-
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Cross-domain seq2seq models, both without and with attention, didn't work because of overfitting
+ |-
− | rowspan="1"|2017/06/29
+ | rowspan="2"|2017/06/29
|Shipan Ren || 10:00 || 20:00 || 10 ||
* read the TensorFlow API
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on the server)
|-
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Read style transfer papers
+ |-
− | rowspan="1"|2017/06/30
+ | rowspan="2"|2017/06/30
|Shipan Ren || 10:00 || 24:00 || 14 ||
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on the server)
* accomplished this task
* found the new version saves more time and achieves lower complexity and better BLEU than before
+ |-
+ |Aodong Li || 10:00 || 19:00 || 8 ||
+ * Read style transfer papers
|-
|}
Revision as of 13:50, 2 July 2017
NLP Schedule
Members
Current Members
- Yang Feng (冯洋)
- Jiyuan Zhang (张记袁)
- Aodong Li (李傲冬)
- Andi Zhang (张安迪)
- Shiyue Zhang (张诗悦)
- Li Gu (古丽)
- Peilun Xiao (肖培伦)
- Shipan Ren (任师攀)
Former Members
- Chao Xing (邢超) : FreeNeb
- Rong Liu (刘荣) : Youku
- Xiaoxi Wang (王晓曦) : Turing Robot
- Xi Ma (马习) : graduate student at Tsinghua University
- Tianyi Luo (骆天一) : PhD candidate at University of California, Santa Cruz
- Qixin Wang (王琪鑫) : MA candidate at University of California
- DongXu Zhang (张东旭) : --
- Yiqiao Pan (潘一桥) : MA candidate at University of Sydney
- Shiyao Li (李诗瑶) : BUPT
- Aiting Liu (刘艾婷) : BUPT
Work Progress
Daily Report
Date | Person | Start | Leave | Hours | Status
---|---|---|---|---|---
2017/04/02 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/02 | Peilun Xiao | | | |
2017/04/03 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/03 | Peilun Xiao | | | |
2017/04/04 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/04 | Peilun Xiao | | | |
2017/04/05 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/05 | Peilun Xiao | | | |
2017/04/06 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/06 | Peilun Xiao | | | |
2017/04/07 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/07 | Peilun Xiao | | | |
2017/04/08 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/08 | Peilun Xiao | | | |
2017/04/09 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/09 | Peilun Xiao | | | |
2017/04/10 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/10 | Peilun Xiao | | | |
2017/04/11 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/11 | Peilun Xiao | | | |
2017/04/12 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/12 | Peilun Xiao | | | |
2017/04/13 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/13 | Peilun Xiao | | | |
2017/04/14 | Andy Zhang | 9:30 | 18:30 | 8 |
2017/04/14 | Peilun Xiao | | | |
2017/04/15 | Andy Zhang | 9:00 | 15:00 | 6 |
2017/04/15 | Peilun Xiao | | | |
2017/04/18 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/19 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/20 | Aodong Li | 12:00 | 20:00 | 8 |
2017/04/21 | Aodong Li | 12:00 | 20:00 | 8 |
2017/04/24 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/25 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/26 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/27 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/28 | Aodong Li | 11:00 | 20:00 | 8 |
2017/04/30 | Aodong Li | 11:00 | 20:00 | 8 |
2017/05/01 | Aodong Li | 11:00 | 20:00 | 8 |
2017/05/02 | Aodong Li | 11:00 | 20:00 | 8 |
2017/05/06 | Aodong Li | 14:20 | 17:20 | 3 |
2017/05/07 | Aodong Li | 13:30 | 22:00 | 8 |
2017/05/08 | Aodong Li | 11:30 | 21:00 | 8 |
2017/05/09 | Aodong Li | 13:00 | 22:00 | 9 |
* small data; the 1st and 2nd translators use the same training data; the 2nd translator uses randomly initialized embeddings
* BASELINE: 43.87; best result of our model: 42.56
2017/05/10 | Shipan Ren | 9:00 | 20:00 | 11 |
2017/05/10 | Aodong Li | 13:30 | 22:00 | 8 |
* small data; the 1st and 2nd translators use different training data, with 22,000 and 22,017 sentences respectively; the 2nd translator uses randomly initialized embeddings
* BASELINE: 36.67 (36.67 is the model at 4750 updates, but we use the model at 3000 updates, to avoid overfitting, to generate the 2nd translator's training data; its BLEU is 34.96); best result of our model: 29.81
* This may suggest that using either the same training data as the 1st translator or different data won't influence the 2nd translator's performance; if anything, using the same data may be better, at least judging from these results. But the smaller training set compared with yesterday's model must also be taken into account. (A sketch of this cascade data flow follows below.)
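The 05/09 and 05/10 entries describe a cascade in which the 1st translator's output becomes the 2nd translator's training input. Below is a minimal sketch of that data flow; `translate_v1` and `build_second_stage_corpus` are hypothetical names standing in for the lab's trained TensorFlow models, not the actual tf_translate API.

```python
# Sketch of the cascade data flow: pair the 1st translator's machine output with
# the gold reference, so the 2nd translator can learn to repair meaning shift.

def translate_v1(src_sentence):
    """Hypothetical stand-in for the trained 1st translator (e.g. the checkpoint
    at 3000 updates, chosen above to limit overfitting)."""
    return src_sentence  # identity placeholder, not a real model

def build_second_stage_corpus(src_corpus, ref_corpus):
    pairs = []
    for src, ref in zip(src_corpus, ref_corpus):
        hyp = translate_v1(src)   # 1st-pass machine translation
        pairs.append((hyp, ref))  # the 2nd translator learns hyp -> ref
    return pairs

print(build_second_stage_corpus(["他 说", "你 好"], ["he said", "hello"]))
```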
2017/05/11 | Shipan Ren | 10:00 | 19:30 | 9.5 |
2017/05/11 | Aodong Li | 13:00 | 21:00 | 8 |
* small data; the 1st and 2nd translators use the same training data; the 2nd translator uses constant, untrainable embeddings imported from the 1st translator's decoder (a sketch of the frozen-embedding import follows below)
* BASELINE: 43.87; best result of our model: 43.48
* Experiments show that this kind of series or cascade model will definitely impair the final performance, due to information loss as information flows through the network from end to end. The decoder's smaller vocabulary compared with the encoder's demonstrates this (9000+ -> 6000+). The intention of this experiment was to find a mapping that solves meaning shift using the 2nd translator, but whether that mapping is learned is obscured by the smaller-vocabulary effect.
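A minimal sketch of importing the 1st translator's decoder embeddings as constant, untrainable parameters. PyTorch is used here for brevity (the project's actual code is TensorFlow), and the random `pretrained` matrix stands in for the learned table; sizes match the emb_size in the log.

```python
# Freeze an embedding table imported from the 1st translator's decoder.
import torch
import torch.nn as nn

vocab_size, emb_size = 6000, 310
pretrained = torch.randn(vocab_size, emb_size)  # stand-in for the learned table

emb = nn.Embedding.from_pretrained(pretrained, freeze=True)  # constant, untrainable
ids = torch.tensor([[3, 17, 42]])
print(emb(ids).shape, emb.weight.requires_grad)  # torch.Size([1, 3, 310]) False
```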
2017/05/12 | Aodong Li | 13:00 | 21:00 | 8 |
2017/05/13 | Shipan Ren | 10:00 | 19:00 | 9 |
2017/05/14 | Aodong Li | 10:00 | 20:00 | 9 |
* small data; the 2nd translator uses as training data the concat(Chinese, machine-translated English) and randomly initialized embeddings
* BASELINE: 43.87; best result of our model: 43.53
2017/05/15 | Shipan Ren | 9:30 | 19:00 | 9.5 |
2017/05/17 | Shipan Ren | 9:30 | 19:30 | 10 |
2017/05/17 | Aodong Li | 13:30 | 24:00 | 9 |
2017/05/18 | Shipan Ren | 10:00 | 19:00 | 9 |
2017/05/18 | Aodong Li | 12:30 | 21:00 | 8 |
2017/05/19 | Aodong Li | 12:30 | 20:30 | 8 |
2017/05/21 | Aodong Li | 10:30 | 18:30 | 8 |
* hidden_size = 700 (500 in prior runs), emb_size = 510 (310 in prior runs); small data; the 2nd translator uses as training data the concat(Chinese, machine-translated English) and randomly initialized embeddings
* BASELINE: 43.87; best result of our model: 45.21. But only one checkpoint outperforms the baseline; the other results are commonly under 43.1.
2017/05/22 | Aodong Li | 14:00 | 22:00 | 8 |
2017/05/23 | Aodong Li | 13:00 | 21:30 | 8 |
* hidden_size = 700, emb_size = 510, learning_rate = 0.0005 (0.001 in prior runs); small data; the 2nd translator uses as training data the concat(Chinese, machine-translated English) and randomly initialized embeddings
* BASELINE: 43.87; best result of our model: 42.19. Overfitting? Overall, the 2nd translator performs worse than the baseline.
* hidden_size = 500, emb_size = 310, learning_rate = 0.001; small data; double-decoder model with a joint loss, meaning the final loss = 1st decoder's loss + 2nd decoder's loss (see the sketch below)
* BASELINE: 43.87; best result of our model: 39.04. The 1st decoder's output is generally better than the 2nd decoder's. The reason may be that the second decoder only learns from the first decoder's hidden states, because their states are almost the same.
* The reason the double-decoder without the joint loss generalizes badly is that the gap between the teacher-forcing mechanism (training) and the beam-search mechanism (decoding) propagates and expands errors toward the output end, which breaks the model when decoding.
* Next: try to train the double-decoder model without the joint loss but with beam search on the 1st decoder.
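A minimal sketch of the joint loss described above, assuming both decoders predict the same target sequence. PyTorch is a stand-in for the project's TensorFlow code, and the shapes are illustrative.

```python
# Joint loss for the double-decoder model: final loss = 1st decoder's loss +
# 2nd decoder's loss, so gradients flow into both decoders during training.
import torch
import torch.nn.functional as F

def joint_loss(logits_1, logits_2, targets):
    # logits_*: (batch, steps, vocab); targets: (batch, steps)
    l1 = F.cross_entropy(logits_1.transpose(1, 2), targets)
    l2 = F.cross_entropy(logits_2.transpose(1, 2), targets)
    return l1 + l2

batch, steps, vocab = 2, 5, 100
loss = joint_loss(torch.randn(batch, steps, vocab),
                  torch.randn(batch, steps, vocab),
                  torch.randint(vocab, (batch, steps)))
print(loss.item())
```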
2017/05/24 | Aodong Li | 13:00 | 21:30 | 8 |
2017/05/24 | Shipan Ren | 10:00 | 20:00 | 10 |
2017/05/25 | Shipan Ren | 9:30 | 18:30 | 9 |
2017/05/25 | Aodong Li | 13:00 | 22:00 | 9 |
2017/05/27 | Shipan Ren | 9:30 | 18:30 | 9 |
2017/05/28 | Aodong Li | 15:00 | 22:00 | 7 |
* hidden_size = 500, emb_size = 310, learning_rate = 0.001; small data; the 2nd translator uses as training data both Chinese and machine-translated English; Chinese and English use different encoders and different attention, with final_attn = attn_1 + attn_2; the 2nd translator uses randomly initialized embeddings
* BASELINE: 43.87. When decoding: final_attn = attn_1 + attn_2, best result of our model: 43.50; final_attn = 2/3 attn_1 + 4/3 attn_2, best result: 41.22; final_attn = 4/3 attn_1 + 2/3 attn_2, best result: 43.58
2017/05/30 | Aodong Li | 15:00 | 21:00 | 6 |
* hidden_size = 500, emb_size = 310, learning_rate = 0.001; small data; the 2nd translator uses as training data both Chinese and machine-translated English; Chinese and English use different encoders and different attention, with final_attn = 2/3 attn_1 + 4/3 attn_2; the 2nd translator uses randomly initialized embeddings
* BASELINE: 43.87; best result of our model: 42.36
* With final_attn = 2/3 attn_1 + 4/3 attn_2 and the 2nd translator using constant initialized embeddings: BASELINE: 43.87; best result of our model: 45.32
* With final_attn = attn_1 + attn_2 and the 2nd translator using constant initialized embeddings: BASELINE: 43.87; best result of our model: 45.41, and it seems more stable
2017/05/31 | Shipan Ren | 10:00 | 19:30 | 9.5 |
2017/05/31 | Aodong Li | 12:00 | 20:30 | 8.5 |
* With final_attn = 4/3 attn_1 + 2/3 attn_2 and the 2nd translator using constant initialized embeddings: BASELINE: 43.87; best result of our model: 45.79 (a sketch of the weighted attention combination follows below)
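The 05/28 to 05/31 entries sweep weights over the two attention contexts, one from the Chinese encoder and one from the machine-English encoder. A NumPy sketch of the combination step; the 500-dimensional contexts match the hidden_size in the log, and the batch size is illustrative.

```python
# Weighted combination of the two attention contexts, as in
# final_attn = 4/3 * attn_1 + 2/3 * attn_2 (the best run above, 45.79 BLEU).
import numpy as np

def combine_attention(attn_1, attn_2, w1=1.0, w2=1.0):
    """attn_*: (batch, hidden) context vectors from the two encoders."""
    return w1 * attn_1 + w2 * attn_2

attn_zh = np.random.randn(16, 500)  # context over the Chinese source
attn_en = np.random.randn(16, 500)  # context over the machine-translated English
final_attn = combine_attention(attn_zh, attn_en, w1=4/3, w2=2/3)
print(final_attn.shape)  # (16, 500)
```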
2017/06/01 | Aodong Li | 13:00 | 24:00 | 11 |
2017/06/02 | Aodong Li | 13:00 | 22:00 | 9 |
2017/06/03 | Aodong Li | 13:00 | 21:00 | 8 |
2017/06/05 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/06 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/07 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/08 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/09 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/12 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/13 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/14 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
2017/06/15 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
* Read a paper about MT involving grammar
2017/06/16 | Aodong Li | 10:00 | 19:00 | 8 |
* Prepare for APSIPA paper
* Read a paper about MT involving grammar
2017/06/19 | Aodong Li | 10:00 | 19:00 | 8 |
* Completed the APSIPA paper
* Took on a new task in style translation
2017/06/20 | Aodong Li | 10:00 | 19:00 | 8 |
* Tried synonym substitution
2017/06/21 | Aodong Li | 10:00 | 19:00 | 8 |
* Tried a post-edit via synonym substitution, but it didn't work (see the sketch below)
2017/06/22 | Aodong Li | 10:00 | 19:00 | 8 |
* Trained a GRU language model to determine similar words
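The 06/20 and 06/21 entries tried synonym substitution as a post-edit on the translator's output. A minimal sketch of that idea with a hypothetical synonym dictionary; as the log reports, this simple approach did not work.

```python
# Post-edit by synonym substitution: replace each word with a dictionary synonym.
SYNONYMS = {"big": ["large", "huge"], "fast": ["quick", "rapid"]}  # hypothetical

def post_edit(sentence):
    out = []
    for word in sentence.split():
        out.append(SYNONYMS.get(word, [word])[0])  # first synonym, if any
    return " ".join(out)

print(post_edit("a big fast car"))  # -> "a large quick car"
```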
2017/06/23 | Shipan Ren | 10:00 | 21:00 | 11 |
* read a neural machine translation paper
* read and ran the tf_translate code
2017/06/23 | Aodong Li | 10:00 | 19:00 | 8 |
* Trained a GRU language model to determine similar words
* This didn't work because the semantics are not captured (see the scoring sketch below)
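The 06/22 and 06/23 entries rank substitution candidates with a GRU language model. A toy sketch of the ranking step, assuming a small, here untrained, PyTorch GRU LM and an illustrative vocabulary; the log's actual model and data are not shown.

```python
# Rank candidate substitutions by sentence log-probability under a GRU LM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRULM(nn.Module):
    def __init__(self, vocab, emb=32, hid=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, ids):              # ids: (1, steps)
        h, _ = self.gru(self.emb(ids))
        return self.out(h)               # (1, steps, vocab)

def sentence_logprob(model, ids):
    logits = model(ids[:, :-1])          # predict each next token
    logp = F.log_softmax(logits, dim=-1)
    return logp.gather(2, ids[:, 1:].unsqueeze(2)).sum().item()

vocab = {"<s>": 0, "he": 1, "is": 2, "big": 3, "large": 4, "huge": 5}
model = GRULM(len(vocab))                # untrained here; trained in the log
for cand in ("big", "large", "huge"):
    ids = torch.tensor([[vocab["<s>"], vocab["he"], vocab["is"], vocab[cand]]])
    print(cand, sentence_logprob(model, ids))
```

As the 06/23 note says, a pure LM score rewards fluency but does not capture semantics, which is why this ranking failed to preserve meaning.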
2017/06/26 | Shipan Ren | 10:00 | 21:00 | 11 |
* read paper: LSTM Neural Networks for Language Modeling
* read and ran the ViVi_NMT code
2017/06/26 | Aodong Li | 10:00 | 19:00 | 8 |
* Tried to figure out new ways to change the text style
2017/06/27 | Shipan Ren | 10:00 | 20:00 | 10 |
* read the TensorFlow API
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0
2017/06/27 | Aodong Li | 10:00 | 19:00 | 8 |
* Trained a seq2seq model to solve this problem
* Semantics are stored in a fixed-length vector by the encoder, and the decoder generates sequences from this vector (see the sketch below)
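A minimal sketch of the seq2seq idea in the 06/27 entry: the encoder compresses the source into a fixed-length vector, and the decoder generates conditioned only on that vector, with no attention. PyTorch again stands in for the project's TensorFlow code, and all sizes are toy values.

```python
# Fixed-length-vector seq2seq: encode to a single state, decode from it alone.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab, emb=32, hid=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.enc = nn.GRU(emb, hid, batch_first=True)
        self.dec = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, src, tgt):
        _, state = self.enc(self.emb(src))     # fixed-length sentence vector
        h, _ = self.dec(self.emb(tgt), state)  # decode from that vector alone
        return self.out(h)

model = Seq2Seq(vocab=100)
logits = model(torch.randint(100, (2, 7)), torch.randint(100, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 100])
```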
2017/06/28 | Shipan Ren | 10:00 | 19:00 | 9 |
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on the server)
* installed TensorFlow 0.1 and TensorFlow 1.0 on my PC and debugged ViVi_NMT
2017/06/28 | Aodong Li | 10:00 | 19:00 | 8 |
* Cross-domain seq2seq models, both without and with attention, didn't work because of overfitting
2017/06/29 | Shipan Ren | 10:00 | 20:00 | 10 |
* read the TensorFlow API
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on the server)
2017/06/29 | Aodong Li | 10:00 | 19:00 | 8 |
* Read style transfer papers
2017/06/30 | Shipan Ren | 10:00 | 24:00 | 14 |
* debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on the server)
* accomplished this task
* found the new version saves more time and achieves lower complexity and better BLEU than before (see the upgrade notes below)
2017/06/30 | Aodong Li | 10:00 | 19:00 | 8 |
* Read style transfer papers
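Shipan Ren's 06/27 to 06/30 entries cover upgrading ViVi_NMT from TensorFlow 0.x to 1.0. For reference, a few renames that such an upgrade typically involves; this is an illustrative list, and the log does not say which ones ViVi_NMT actually hit.

```python
# Common TensorFlow 0.x -> 1.0 API changes (comment-only reference):
# tf.mul(a, b)                  -> tf.multiply(a, b)
# tf.sub(a, b)                  -> tf.subtract(a, b)
# tf.neg(x)                     -> tf.negative(x)
# tf.concat(axis, values)       -> tf.concat(values, axis)   # argument order flipped
# tf.nn.rnn_cell.*              -> tf.contrib.rnn.*
# tf.nn.seq2seq.*               -> tf.contrib.legacy_seq2seq.*
# tf.initialize_all_variables() -> tf.global_variables_initializer()
```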
Time Off Table
Date | Yang Feng | Jiyuan Zhang
---|---|---