NLP Status Report 2016-11-21
{| class="wikitable"
!Date !! People !! Last Week !! This Week
|-
| rowspan="5"|2016/11/21
|Yang Feng ||
*rnng+mn
1) ran experiments of rnng+mn [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f8/Progress_of_RNNG_with_memory_network.pdf report];
2) used top-k selection for the memory, still training (a generic top-k read is sketched after the table)
*sequence-to-sequence + mn
1) wrote the proposal
2) discussed the details with Andy
*intern interview
*Huilan's work
||
*rnng+mn
1) get the top-k result; 2) try a bigger memory
*sequence-to-sequence + mn
1) coding work
*Huilan's work
1) try a syntax-based TM
|-
|Jiyuan Zhang ||
*ran the decoder-memory model, but the improvement is not obvious
*changed the way the memory and attention models are bound together; the model can now generate poetry in different styles (a generic gated binding is sketched after the table)
*cleaned up my code
*wrote a tech report on poemGen [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Atten-memory-poetry.pdf]
*submitted two databases, for poemGen and musicGen
||
*explore more ways of binding the memory and attention models
|-
|Andi Zhang ||
*prepared a new paraphrase data set, removing repetitions and most of the noise
*ran NMT on the fr-en data set and the new paraphrase set
*read through the source code to find ways to modify it
*helped Guli run NMT on our server
||
*decide whether or not to drop Theano
*start working on the code
|-
|Shiyue Zhang ||
* ran RNNG with MKL successfully, which doubles or triples the speed; revised the RNNG User Guide
* reran the original model and got a final result of 92.32
* reran the previously buggy memory models; still running
* implemented the dynamic memory model and got 92.54, which is 0.22 better than the baseline
* tried another memory structure [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]
||
* try more model variants and summarize the results
* publish the technical reports
|-
|Guli ||
* read the paper "Neural Machine Translation by Jointly Learning to Align and Translate"
* read the paper's code and, with Andi's help, ran NMT (cs-en) on GPU
* learned more about Python
* prepared data for the Ontology Library
||
* continue preparing the data
* follow Teacher Yang's instructions
|}
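
A note on the "top-k for the memory" items above: both the rnng+mn experiments and the memory models revolve around reading from a set of memory slots while keeping only the k best-matching ones. The sketch below is a generic illustration of such a top-k memory read, not the actual rnng+mn code; the dot-product scoring, the function name <code>topk_memory_read</code>, and all shapes are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - np.max(x))
    return e / e.sum()

def topk_memory_read(query, memory, k):
    """query: (d,); memory: (n_slots, d); returns a (d,) read vector."""
    scores = memory @ query            # match each slot against the query
    top = np.argsort(scores)[-k:]      # indices of the k best-matching slots
    weights = softmax(scores[top])     # attention restricted to those slots
    return weights @ memory[top]       # weighted sum of the selected slots

# toy usage: 8 memory slots of dimension 4, keep the 3 best matches
rng = np.random.default_rng(0)
memory = rng.standard_normal((8, 4))
query = rng.standard_normal(4)
print(topk_memory_read(query, memory, k=3))
</syntaxhighlight>

Restricting the softmax to the top-k slots keeps the read sparse, so enlarging the memory (as planned for this week) does not dilute the attention weights.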
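
Similarly, for the "binding of memory and attention models" in the poetry generator, one generic binding (again an assumption for illustration, not the poemGen implementation) is a learned scalar gate that interpolates the attention context with the memory read before the decoder predicts the next word:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_bind(context, mem_read, W_g, b_g):
    """context, mem_read: (d,); W_g: (2d,); returns a (d,) blended vector."""
    g = sigmoid(W_g @ np.concatenate([context, mem_read]) + b_g)  # scalar gate
    return g * context + (1.0 - g) * mem_read  # blend the two sources

# toy usage with d = 4
rng = np.random.default_rng(1)
d = 4
context = rng.standard_normal(d)
mem_read = rng.standard_normal(d)
W_g = rng.standard_normal(2 * d)
print(gated_bind(context, mem_read, W_g, b_g=0.0))
</syntaxhighlight>

Varying where and how such a gate is applied is one way different bindings could yield different poetry styles, which matches the experiments described above.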