NLP Status Report 2016-12-19
From cslt Wiki
{| class="wikitable"
!Date !! People !! Last Week !! This Week
|-
| rowspan="6"|2016/12/19
|Yang Feng ||
*[[s2smn:]] wrote the manual for s2s with TensorFlow [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/51/Nmt-tensorflow-mannua-yfeng.pdf nmt-manual]
*wrote part of the code of mn
*wrote the manual for Moses [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/92/Moses%E6%93%8D%E4%BD%9C%E6%89%8B%E5%86%8C--%E5%86%AF%E6%B4%8B.pdf moses-manual]
*[[Huilan:]] fixed the problem with syntax-based translation
*sorted out the system and the corresponding documents
||
*[[s2smn:]] finish the code for adding mn
*[[Huilan:]] handover
|-
|Jiyuan Zhang ||
*coded the tone_model, but ran into some trouble
*ran the global_attention_model, which decodes four sentences ([http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/d5/Four_local_atten.pdf four] [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/05/Five_local_attention.pdf five]) generated by the local_attention model
||
*improve the poem model
|-
|Andi Zhang ||
*wrote code to output the encoder outputs and the corresponding source & target sentences (ids in the dictionaries)
*wrote a script for BLEU scoring that tests the five checkpoints automatically created during training and saves the one with the best performance
||
*extract encoder outputs
|-
|Shiyue Zhang ||
* changed the one-hot vector to (0, -inf, -inf, ...) and retried the experiments, but no improvement showed
* tried a 1-dim gate, but it converged to the baseline
* tried training only the gate, but the best result was to take every instance as "right"
* trying a model similar to attention
* [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9f/RNNG%2Bmm_experiment_report.pdf report]
||
* try adding the true action info when training the gate
* try different scale vectors
* try changing cos to a plain inner product
|-
|Guli ||
*read papers on transfer learning and on handling OOV words
*conducted a comparative test
*writing a survey
||
* complete the first draft of the survey
|-
|Peilun Xiao ||
*used LDA to generate 10-500 dimensional document vectors for the remaining datasets
*wrote Python code for a new tf-idf-style weighting algorithm
||
*debug the code
|}
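Andi Zhang's checkpoint-selection script is only summarized above. As an illustration (not the actual code, and using a simplified sentence-level BLEU rather than the exact scorer), the idea of decoding a dev set with each saved checkpoint and keeping the best-scoring one can be sketched as:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counts of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with a brevity penalty; a tiny
    epsilon stands in for proper smoothing, which is enough for ranking."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * geo_mean

def best_checkpoint(checkpoints, decode_fn, source, reference):
    """Decode the dev source with each checkpoint; return the one whose
    output scores highest against the reference. decode_fn is a stand-in
    for loading a checkpoint and running the model."""
    scored = [(bleu(decode_fn(ckpt, source), reference), ckpt) for ckpt in checkpoints]
    return max(scored)[1]
```

In the real setup, `decode_fn` would restore a TensorFlow checkpoint and translate the dev set; here it is left abstract so the selection logic stands alone.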
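Peilun Xiao's new tf-idf algorithm is not described in the notes; as a reference point only, the standard tf-idf weighting that such a variant would modify can be written in a few self-contained lines:

```python
import math
from collections import Counter

def tfidf(docs):
    """Standard tf-idf over tokenized documents:
    tf = term count / document length, idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        counts, length = Counter(doc), len(doc)
        weights.append({t: (c / length) * math.log(n / df[t])
                        for t, c in counts.items()})
    return weights
```

A term occurring in every document gets idf = 0, so it is weighted out entirely; variants typically change the tf or idf term, but which change was made here is not recorded.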
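Shiyue Zhang's change of the one-hot vector to (0, -inf, -inf, ...) reads like an additive pre-softmax mask; assuming that interpretation (the notes do not spell it out), the effect is that the masked logits softmax to exactly the one-hot distribution:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# The (0, -inf, -inf, ...) vector, added to logits or used directly,
# forces all probability mass onto the first position.
mask = [0.0, float("-inf"), float("-inf")]
print(softmax(mask))  # -> [1.0, 0.0, 0.0]
```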
Latest revision as of 00:58, 26 December 2016