{|
! Date !! People !! Last Week !! This Week
|-
| rowspan="6" | 2016/12/19
| Yang Feng ||
* s2smn: wrote the manual of s2s with TensorFlow [nmt-manual]
* wrote part of the code of mn
* wrote the manual of Moses [moses-manual]
* Huilan: fixed the problem of syntax-based translation
* sorted out the system and the corresponding documents
|
|-
| Jiyuan Zhang ||
* coded tone_model, but ran into some trouble
* ran the global_attention_model, which decodes four sentences; [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/d5/Four_local_atten.pdf four]- and [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/05/Five_local_attention.pdf five]-sentence results generated by the local_attention model (a toy global-vs-local attention sketch follows the table)
|
* improve the poem model
|-
| Andi Zhang ||
* wrote code to dump the encoder outputs together with the corresponding source & target sentences (as ids in the dictionaries)
* wrote a script for BLEU scoring, which tests the five checkpoints automatically created during training and saves the one with the best performance (a sketch of such a script follows the table)
|
* extract encoder outputs
|-
| Shiyue Zhang ||
* changed the one-hot vector to (0, -inf, -inf, ...) and reran the experiments, but no improvement showed (see the masking example after the table)
* tried a 1-dim gate, but it converged to the baseline
* tried training only the gate, but the best result came from treating every instance as "right"
* trying a model similar to attention
* [report]
|
* try adding the true action info when training the gate
* try differently scaled vectors
* try replacing the cosine similarity with a plain inner product (see the note after the table)
|-
| Guli ||
* read papers on transfer learning and on handling OOV words
* conducted comparative tests
* writing the survey
|
* complete the first draft of the survey
|-
| Peilun Xiao ||
* used LDA to generate 10- to 500-dimensional document vectors for the remaining datasets (see the LDA sketch after the table)
* wrote Python code for a new tf-idf-based algorithm
|
* debug the code
|}
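For reference, below is a toy sketch of the difference between global and local attention scoring in the sense of Luong et al. (2015), which the global_attention_model / local_attention_model names above suggest; all function and variable names are illustrative and not taken from the actual model code.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(h_t, enc_states):
    """Global attention: score the decoder state h_t against every encoder state."""
    scores = enc_states @ h_t          # dot-product alignment scores, shape (src_len,)
    return softmax(scores)             # weights spread over the whole source

def local_attention(h_t, enc_states, p_t, window=2):
    """Local attention: the same scores, re-weighted by a Gaussian window
    centered at source position p_t, so only nearby states get weight."""
    sigma = window / 2.0
    scores = enc_states @ h_t
    positions = np.arange(len(enc_states))
    gauss = np.exp(-((positions - p_t) ** 2) / (2 * sigma ** 2))
    weights = softmax(scores) * gauss
    return weights / weights.sum()

# toy run: 5 encoder states of dimension 4
enc = np.random.randn(5, 4)
h = np.random.randn(4)
print(global_attention(h, enc))
print(local_attention(h, enc, p_t=2))
</syntaxhighlight>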
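A minimal sketch of what the BLEU-based checkpoint selection described above could look like. decode_with_checkpoint is a hypothetical stub standing in for the actual TensorFlow decoding step, the checkpoint glob pattern is made up, and BLEU comes from NLTK's corpus_bleu rather than whatever scorer the real script uses.

<syntaxhighlight lang="python">
import glob
import shutil
from nltk.translate.bleu_score import corpus_bleu

def decode_with_checkpoint(ckpt_path, src_sentences):
    """Hypothetical stub: restore the model from ckpt_path and return one
    tokenized hypothesis per source sentence."""
    raise NotImplementedError

def select_best_checkpoint(src, refs, ckpt_pattern="train/model.ckpt-*"):
    """Score each saved checkpoint on (src, refs) and keep the best one.
    refs: one tokenized reference sentence per source sentence."""
    best_bleu, best_ckpt = -1.0, None
    for ckpt in sorted(glob.glob(ckpt_pattern)):
        hyps = decode_with_checkpoint(ckpt, src)
        bleu = corpus_bleu([[r] for r in refs], hyps)   # single reference per sentence
        if bleu > best_bleu:
            best_bleu, best_ckpt = bleu, ckpt
    shutil.copy(best_ckpt, "best.ckpt")   # simplification: TF checkpoints span several files
    return best_ckpt, best_bleu
</syntaxhighlight>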
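Our reading of the (0, -inf, -inf, ...) change above, shown on a toy logit vector: used as an additive mask before a softmax, it completely suppresses every position except the first, whereas adding a plain one-hot (1, 0, 0, ...) only shifts the logits. This illustrates the general trick, not code from the actual experiment.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])

one_hot = np.array([1.0, 0.0, 0.0])          # additive one-hot: only shifts the logits
mask    = np.array([0.0, -np.inf, -np.inf])  # the (0, -inf, -inf, ...) variant

print(softmax(logits + one_hot))  # other positions keep some probability mass
print(softmax(logits + mask))     # [1. 0. 0.]: other positions fully suppressed
</syntaxhighlight>

As for the planned switch from cosine to a plain inner product: cos(u, v) = u·v / (|u| |v|), so dropping the normalization lets vector magnitudes influence the score instead of direction alone.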
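A minimal sketch of the LDA-based dimensionality reduction for document classification described above, using scikit-learn. The 10-500 topic sweep follows the report; the vectorizer settings and the logistic-regression classifier are illustrative assumptions.

<syntaxhighlight lang="python">
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def lda_features(train_texts, test_texts, n_topics):
    """Turn raw texts into n_topics-dimensional document-topic vectors."""
    vec = CountVectorizer(max_features=20000, stop_words="english")
    X_train = vec.fit_transform(train_texts)
    X_test = vec.transform(test_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(X_train), lda.transform(X_test)

def sweep_topic_counts(train_texts, y_train, test_texts, y_test):
    """Compare classification accuracy as the document-vector size varies."""
    for k in (10, 50, 100, 200, 500):
        Z_train, Z_test = lda_features(train_texts, test_texts, k)
        clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
        print(k, accuracy_score(y_test, clf.predict(Z_test)))
</syntaxhighlight>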