| Date | Person | start | leave | hours | status |
|------|--------|-------|-------|-------|--------|
| 2017/04/02 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/03 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/04 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/05 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/06 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/07 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/08 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/09 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/10 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/11 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/12 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/13 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/14 | Andy Zhang | 9:30 | 18:30 | 8 | |
| | Peilun Xiao | | | | |
| 2017/04/15 | Andy Zhang | 9:00 | 15:00 | 6 | |
| | Peilun Xiao | | | | |
| 2017/04/18 | Aodong Li | 11:00 | 20:00 | 8 | - Pick up new task in news generation and do literature review |
| 2017/04/19 | Aodong Li | 11:00 | 20:00 | 8 | |
| 2017/04/20 | Aodong Li | 12:00 | 20:00 | 8 | |
| 2017/04/21 | Aodong Li | 12:00 | 20:00 | 8 | |
| 2017/04/24 | Aodong Li | 11:00 | 20:00 | 8 | - Adjust literature review focus |
| 2017/04/25 | Aodong Li | 11:00 | 20:00 | 8 | |
| 2017/04/26 | Aodong Li | 11:00 | 20:00 | 8 | |
| 2017/04/27 | Aodong Li | 11:00 | 20:00 | 8 | - Try to reproduce sc-lstm work |
| 2017/04/28 | Aodong Li | 11:00 | 20:00 | 8 | - Transfer to new task in machine translation and do literature review |
| 2017/04/30 | Aodong Li | 11:00 | 20:00 | 8 | |
| 2017/05/01 | Aodong Li | 11:00 | 20:00 | 8 | |
| 2017/05/02 | Aodong Li | 11:00 | 20:00 | 8 | - Literature review and code review |
| 2017/05/06 | Aodong Li | 14:20 | 17:20 | 3 | |
| 2017/05/07 | Aodong Li | 13:30 | 22:00 | 8 | - Code review and experiment started, but version discrepancy encountered |
| 2017/05/08 | Aodong Li | 11:30 | 21:00 | 8 | - Code review and version discrepancy solved |
| 2017/05/09 | Aodong Li | 13:00 | 22:00 | 9 | - Code review and experiment<br>- Experiment details: small data; 1st and 2nd translators use the same training data; 2nd translator uses a randomly initialized embedding<br>- BASELINE: 43.87; best result of our model: 42.56 |
| 2017/05/10 | Shipan Ren | 9:00 | 20:00 | 11 | - Entry procedures<br>- Machine translation paper reading |
| 2017/05/10 | Aodong Li | 13:30 | 22:00 | 8 | - Experiment details: small data; 1st and 2nd translators use different training data, with 22000 and 22017 training examples respectively; 2nd translator uses a randomly initialized embedding<br>- BASELINE: 36.67 (36.67 is the model at 4750 updates, but to prevent overfitting we use the model at 3000 updates, whose BLEU is 34.96, to generate the 2nd translator's training data); best result of our model: 29.81<br>- This may suggest that whether the 2nd translator uses the same training data as the 1st translator or different data does not influence its performance; if anything, using the same data may be better, at least judging from these results. A caveat is that this training data is smaller than in yesterday's model.<br>- Code 2nd translator with constant embedding |
| 2017/05/11 | Shipan Ren | 10:00 | 19:30 | 9.5 | - Configure environment<br>- Run tf_translate code<br>- Read machine translation paper |
| 2017/05/11 | Aodong Li | 13:00 | 21:00 | 8 | - Experiment details: small data; 1st and 2nd translators use the same training data; 2nd translator uses a constant, untrainable embedding imported from the 1st translator's decoder (see the constant-embedding sketch after the table)<br>- BASELINE: 43.87; best result of our model: 43.48<br>- Experiments show that this kind of series or cascade model will impair the final performance because information is lost as it flows through the network end to end. The decoder's smaller vocabulary size compared with the encoder's (9000+ -> 6000+) demonstrates this.<br>- The intention of this experiment is to look for a mapping that resolves meaning shift using the 2nd translator, but whether the mapping is learned is obscured by the smaller-vocabulary phenomenon.<br>- Literature review on hierarchical machine translation |
| 2017/05/12 | Aodong Li | 13:00 | 21:00 | 8 | - Code double decoding model and read multilingual MT paper |
| 2017/05/13 | Shipan Ren | 10:00 | 19:00 | 9 | - Read machine translation paper<br>- Learned the LSTM model and the seq2seq model |
| 2017/05/14 | Aodong Li | 10:00 | 20:00 | 9 | - Code double decoding model and experiment<br>- Experiment details: small data; 2nd translator uses as training data concat(Chinese, machine-translated English); 2nd translator uses a randomly initialized embedding<br>- BASELINE: 43.87; best result of our model: 43.53<br>- NEXT: 2nd translator uses a trained constant embedding |
| 2017/05/15 | Shipan Ren | 9:30 | 19:00 | 9.5 | - Understand the difference between the LSTM model and the GRU model<br>- Read the implementation code of the seq2seq model |
| 2017/05/17 | Shipan Ren | 9:30 | 19:30 | 10 | - Read neural machine translation paper<br>- Read tf_translate code |
| | Aodong Li | 13:30 | 24:00 | 9 | - Code and debug double-decoder model<br>- Alter the 2017/05/14 model's size; will try it after NIPS |
| 2017/05/18 | Shipan Ren | 10:00 | 19:00 | 9 | - Read neural machine translation paper<br>- Read tf_translate code |
| | Aodong Li | 12:30 | 21:00 | 8 | - Train double-decoder model on the small data set but encountered decoding bugs |
| 2017/05/19 | Aodong Li | 12:30 | 20:30 | 8 | - Debug double-decoder model<br>- The model performs well on the development set but badly on the test data; I want to figure out the reason |
| 2017/05/21 | Aodong Li | 10:30 | 18:30 | 8 | - Experiment details: hidden_size = 700 (500 prior), emb_size = 510 (310 prior); small data; 2nd translator uses as training data concat(Chinese, machine-translated English); 2nd translator uses a randomly initialized embedding<br>- BASELINE: 43.87; best result of our model: 45.21, but only one checkpoint outperforms the baseline; the other results are commonly under 43.1<br>- Debug double-decoder model |
| 2017/05/22 | Aodong Li | 14:00 | 22:00 | 8 | - The double-decoder model without joint loss generalizes very badly<br>- I'm trying the double-decoder model with joint loss |
| 2017/05/23 | Aodong Li | 13:00 | 21:30 | 8 | - Experiment 1 details: hidden_size = 700, emb_size = 510, learning_rate = 0.0005 (0.001 prior); small data; 2nd translator uses as training data concat(Chinese, machine-translated English); 2nd translator uses a randomly initialized embedding<br>- BASELINE: 43.87; best result of our model: 42.19. Overfitting? Overall, the 2nd translator performs worse than the baseline<br>- Experiment 2 details: hidden_size = 500, emb_size = 310, learning_rate = 0.001; small data; double-decoder model with joint loss, i.e. final loss = 1st decoder's loss + 2nd decoder's loss (see the joint-loss sketch after the table)<br>- BASELINE: 43.87; best result of our model: 39.04<br>- The 1st decoder's output is generally better than the 2nd decoder's. The reason may be that the 2nd decoder only learns from the 1st decoder's hidden states, because their states are almost the same.<br>- The reason the double-decoder model without joint loss generalizes badly is that the gap between teacher forcing (training) and beam search (decoding) propagates and amplifies errors toward the output end, which destroys the model when decoding.<br>- Next: train the double-decoder model without joint loss but with beam search on the 1st decoder |
| 2017/05/24 | Aodong Li | 13:00 | 21:30 | 8 | - Code double-attention one-decoder model<br>- Code double-decoder model |
| 2017/05/24 | Shipan Ren | 10:00 | 20:00 | 10 | - Read neural machine translation paper<br>- Read tf_translate code |
| 2017/05/25 | Shipan Ren | 9:30 | 18:30 | 9 | - Write documentation for the tf_translate project<br>- Read neural machine translation paper<br>- Read tf_translate code |
| | Aodong Li | 13:00 | 22:00 | 9 | - Code and debug double-attention model |
| 2017/05/27 | Shipan Ren | 9:30 | 18:30 | 9 | - Read tf_translate code<br>- Write documentation for the tf_translate project |
| 2017/05/28 | Aodong Li | 15:00 | 22:00 | 7 | - Experiment details: hidden_size = 500, emb_size = 310, learning_rate = 0.001; small data; 2nd translator uses as training data both Chinese and machine-translated English; Chinese and English use different encoders and different attention, with final_attn = attn_1 + attn_2; 2nd translator uses a randomly initialized embedding (see the attention-combination sketch after the table)<br>- BASELINE: 43.87<br>- When decoding: final_attn = attn_1 + attn_2, best result of our model 43.50; final_attn = 2/3 attn_1 + 4/3 attn_2, best result 41.22; final_attn = 4/3 attn_1 + 2/3 attn_2, best result 43.58 |
| 2017/05/30 | Aodong Li | 15:00 | 21:00 | 6 | - Experiment 1 details: hidden_size = 500, emb_size = 310, learning_rate = 0.001; small data; 2nd translator uses as training data both Chinese and machine-translated English; Chinese and English use different encoders and different attention, with final_attn = 2/3 attn_1 + 4/3 attn_2; 2nd translator uses a randomly initialized embedding; BASELINE: 43.87; best result of our model: 42.36<br>- Experiment 2 details: final_attn = 2/3 attn_1 + 4/3 attn_2; 2nd translator uses a constant initialized embedding; BASELINE: 43.87; best result of our model: 45.32<br>- Experiment 3 details: final_attn = attn_1 + attn_2; 2nd translator uses a constant initialized embedding; BASELINE: 43.87; best result of our model: 45.41, and it seems more stable |
| 2017/05/31 | Shipan Ren | 10:00 | 19:30 | 9.5 | - Run and test tf_translate code<br>- Write documentation for the tf_translate project |
| | Aodong Li | 12:00 | 20:30 | 8.5 | - Experiment 1 details: final_attn = 4/3 attn_1 + 2/3 attn_2; 2nd translator uses a constant initialized embedding; BASELINE: 43.87; best result of our model: 45.79<br>- Making only the English word embedding at the encoder constant, and training all the other embeddings and parameters, achieves an even higher BLEU score of 45.98, and the results are stable<br>- The quality of the English embedding at the encoder plays a pivotal role in this model<br>- Preparation of big data |
| 2017/06/01 | Aodong Li | 13:00 | 24:00 | 11 | - Only make the English encoder's embedding constant: 45.98<br>- Only initialize the English encoder's embedding and then fine-tune it: 46.06<br>- Share the attention mechanism and then directly add the two attention outputs: 46.20<br>- Run double-attention model on large data |
| 2017/06/02 | Aodong Li | 13:00 | 22:00 | 9 | - Baseline BLEU on large data is 30.83 with a 30000-word output vocab<br>- Our best result is 31.53 with a 20000-word output vocab |
| 2017/06/03 | Aodong Li | 13:00 | 21:00 | 8 | - Train the model with batch size 40 and with concat(attn_1, attn_2)<br>- The best result of the model with batch size 40 and with add(attn_1, attn_2) is 30.52 |
| 2017/06/05 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/06 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/07 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/08 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/09 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/12 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/13 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/14 | Aodong Li | 10:00 | 19:00 | 8 | |
| 2017/06/15 | Aodong Li | 10:00 | 19:00 | 8 | - Prepare for APSIPA paper<br>- Read paper about MT involving grammar |
| 2017/06/16 | Aodong Li | 10:00 | 19:00 | 8 | - Prepare for APSIPA paper<br>- Read paper about MT involving grammar |
| 2017/06/19 | Aodong Li | 10:00 | 19:00 | 8 | - Completed APSIPA paper<br>- Took new task in style translation |
| 2017/06/20 | Aodong Li | 10:00 | 19:00 | 8 | - Tried synonym substitution |
| 2017/06/21 | Aodong Li | 10:00 | 19:00 | 8 | - Tried post-editing via synonym substitution, but this didn't work |
| 2017/06/22 | Aodong Li | 10:00 | 19:00 | 8 | - Trained a GRU language model to determine similar words |
| 2017/06/23 | Shipan Ren | 10:00 | 21:00 | 11 | - Read neural machine translation paper<br>- Read and run tf_translate code |
| | Aodong Li | 10:00 | 19:00 | 8 | - Trained a GRU language model to determine similar words<br>- This didn't work because semantics is not captured |
| 2017/06/26 | Shipan Ren | 10:00 | 21:00 | 11 | - Read paper: LSTM Neural Networks for Language Modeling<br>- Read and run ViVi_NMT code |
| | Aodong Li | 10:00 | 19:00 | 8 | - Tried to figure out new ways to change the text style |
| 2017/06/27 | Shipan Ren | 10:00 | 20:00 | 10 | - Read the TensorFlow API<br>- Debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 |
| | Aodong Li | 10:00 | 19:00 | 8 | - Trained a seq2seq model to solve this problem<br>- Semantics is stored in a fixed-length vector by the encoder, and the decoder generates sequences from this vector |
| 2017/06/28 | Shipan Ren | 10:00 | 19:00 | 9 | - Debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on server)<br>- Installed TensorFlow 0.1 and 1.0 on my PC and debugged ViVi_NMT |
| | Aodong Li | 10:00 | 19:00 | 8 | - Cross-domain seq2seq models, both without and with attention, didn't work because of overfitting |
| 2017/06/29 | Shipan Ren | 10:00 | 20:00 | 10 | - Read the TensorFlow API<br>- Debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on server) |
| | Aodong Li | 10:00 | 19:00 | 8 | - Read style transfer papers |
| 2017/06/30 | Shipan Ren | 10:00 | 24:00 | 14 | - Debugged ViVi_NMT and tried to upgrade the code to TensorFlow 1.0 (on server)<br>- Accomplished this task<br>- Found that the new version saves more time, has lower complexity, and achieves a better BLEU than before |
| | Aodong Li | 10:00 | 19:00 | 8 | - Read style transfer papers |
| 2017/07/03 | Shipan Ren | 9:00 | 21:00 | 12 | - Ran two versions of the code on small data sets (Chinese-English)<br>- Tested the checkpoints |
| 2017/07/04 | Shipan Ren | 9:00 | 21:00 | 12 | - Recorded experimental results<br>- Found that version 1.0 of the code saves more training time and has lower complexity, and the two versions achieve similar BLEU values<br>- Found that the BLEU is still good when the model is overfitting<br>- Reason: the test set and training set are similar in content and style in the small data set |
| 2017/07/05 | Shipan Ren | 9:00 | 21:00 | 12 | - Ran two versions of the code on big data sets (Chinese-English)<br>- Read NMT papers |
| 2017/07/06 | Shipan Ren | 9:00 | 21:00 | 12 | - An out-of-memory (OOM) error occurred when the version 0.1 code was trained on the large data set, but version 1.0 worked<br>- Reason: improper resource allocation by TensorFlow 0.1 exhausts memory<br>- After trying many times, version 0.1 worked |
| 2017/07/07 | Shipan Ren | 9:00 | 21:00 | 12 | - Tested the checkpoints and recorded experimental results<br>- The version 1.0 code saves 0.06 seconds per step compared with the version 0.1 code |
| 2017/07/08 | Shipan Ren | 9:00 | 21:00 | 12 | - Downloaded the WMT 2014 data set<br>- Used the English-French data set to run the code and found the translation quality is not good<br>- Reason: no data preprocessing was done |
| 2017/07/21 | Jiayu Guo | 10:00 | 23:00 | 13 | |
| 2017/07/25 | Jiayu Guo | 9:00 | 23:00 | 14 | |
| 2017/07/26 | Jiayu Guo | 10:00 | 24:00 | 14 | |
| 2017/07/27 | Jiayu Guo | 10:00 | 24:00 | 14 | |
| 2017/07/28 | Jiayu Guo | 9:00 | 24:00 | 15 | |
| 2017/07/31 | Jiayu Guo | 10:00 | 23:00 | 13 | - Split ancient-language text into single words |
| 2017/08/01 | Jiayu Guo | 10:00 | 23:00 | 13 | |
| 2017/08/02 | Jiayu Guo | 10:00 | 23:00 | 13 | |
| 2017/08/03 | Jiayu Guo | 10:00 | 23:00 | 13 | |
| 2017/08/04 | Jiayu Guo | 10:00 | 23:00 | 13 | |
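
The 2017/05/11 and 2017/05/30 to 2017/05/31 entries above mention importing the 1st translator's trained embedding into the 2nd translator as a constant (untrainable) matrix. The snippet below is a minimal TensorFlow 1.x sketch of that single idea; the variable names and the `first_decoder_embedding.npy` file are illustrative assumptions, not the project's actual tf_translate code.

```python
import numpy as np
import tensorflow as tf

# Embedding matrix trained by the 1st translator's decoder, exported
# beforehand to disk (hypothetical file name), shape [vocab_size, emb_size].
pretrained = np.load("first_decoder_embedding.npy")

# Import it as a constant embedding for the 2nd translator:
# trainable=False keeps the optimizer from ever updating it.
embedding = tf.get_variable(
    "encoder_embedding",
    shape=pretrained.shape,
    initializer=tf.constant_initializer(pretrained),
    trainable=False)

# Token ids of the machine-translated English input sentences.
input_ids = tf.placeholder(tf.int32, shape=[None, None], name="input_ids")
embedded_inputs = tf.nn.embedding_lookup(embedding, input_ids)
```

For the 2017/06/01 variant that only initializes the English encoder's embedding and then fine-tunes it (BLEU 46.06 in the log), the same sketch applies with `trainable=True`.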
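The 2017/05/23 entry defines the double-decoder joint loss as the sum of the two decoders' losses. Below is a minimal sketch of that objective, assuming each decoder emits per-step logits over the output vocabulary; the tensor names and sizes are illustrative placeholders rather than the actual model code.

```python
import tensorflow as tf

VOCAB_SIZE = 6000  # illustrative

# Per-step logits from the 1st and 2nd decoders and the shared target sequence.
logits_1 = tf.placeholder(tf.float32, [None, None, VOCAB_SIZE])  # [batch, time, vocab]
logits_2 = tf.placeholder(tf.float32, [None, None, VOCAB_SIZE])
targets = tf.placeholder(tf.int32, [None, None])                 # [batch, time]
target_mask = tf.placeholder(tf.float32, [None, None])           # 0.0 at padding steps

def sequence_loss(logits, labels, mask):
    """Masked average cross-entropy over one decoder's output sequence."""
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    return tf.reduce_sum(ce * mask) / tf.reduce_sum(mask)

# Joint training: final loss = 1st decoder's loss + 2nd decoder's loss.
# In the real model the logits come from the two decoders, and this sum is
# what the optimizer minimizes.
joint_loss = sequence_loss(logits_1, targets, target_mask) + \
             sequence_loss(logits_2, targets, target_mask)
```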
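The 2017/05/28 to 2017/06/01 entries compare ways of combining the two attention contexts (one over the Chinese source, one over the machine-translated English) while the 2nd translator decodes. The weights below are the ones reported in the log; everything else in the sketch (NumPy, vector size, names) is an illustrative assumption.

```python
import numpy as np

def combined_context(attn_1, attn_2, w1=1.0, w2=1.0):
    """final_attn = w1 * attn_1 + w2 * attn_2, applied at every decoding step."""
    return w1 * attn_1 + w2 * attn_2

# attn_1: context vector from the attention over the Chinese encoder,
# attn_2: context vector from the attention over the machine-translated-English encoder.
attn_1 = np.random.randn(500)  # hidden_size = 500 in the 2017/05/28 runs
attn_2 = np.random.randn(500)

# Weight settings tried in the log (BLEU with randomly initialized embeddings).
settings = {
    "attn_1 + attn_2": (1.0, 1.0),                   # 43.50
    "2/3 attn_1 + 4/3 attn_2": (2.0 / 3, 4.0 / 3),   # 41.22
    "4/3 attn_1 + 2/3 attn_2": (4.0 / 3, 2.0 / 3),   # 43.58
}
for name, (w1, w2) in settings.items():
    final_attn = combined_context(attn_1, attn_2, w1, w2)
    print(name, final_attn.shape)
```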

Time Off Table

| Date | Yang Feng | Jiyuan Zhang |
|------|-----------|--------------|

Past progress

- nlp-progress 2017/03
- nlp-progress 2017/02
- nlp-progress 2017/01
- nlp-progress 2016/12
- nlp-progress 2016/11
- nlp-progress 2016/10
- nlp-progress 2016/09
- nlp-progress 2016/08
- nlp-progress 2016/05-07
- nlp-progress 2016/04