Differences between revisions of "NLP Status Report 2016-11-21"
From cslt Wiki
Line 39:
|-
|Shiyue Zhang ||
− * ran RNNG on MKL successfully, which can double or triple the speed
+ * ran RNNG on MKL successfully, which can double or triple the speed; revised the RNNG User Guide
* reran the original model and got the final result 92.32
* reran the wrong-memory models, still running
* implemented the dynamic memory model and got the result 92.54, which is 0.22 better than the baseline
− * tried another structure of memory
+ * tried another structure of memory [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]]
||
* try more different models and summarize the results
Line 49:
|-
|Guli ||
+ * read the paper "NMT by jointly learning to align and translate"
+ * read the paper's code and ran NMT (cs-en) on GPU with Andi's help
+ * learned more about Python
+ * prepared data for the Ontology Library
||
+ * continue to prepare the data
+ * follow teacher Yang's instructions
|}
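The paper in Guli's reading item is Bahdanau et al.'s "Neural Machine Translation by Jointly Learning to Align and Translate", whose core idea is an additive attention over encoder states. A minimal NumPy sketch of that scoring step, not the actual NMT code used here; the weight matrices `Wa`, `Ua`, `va` follow the paper's notation, and all dimensions and values are toy assumptions:

```python
import numpy as np

def additive_attention(s_prev, H, Wa, Ua, va):
    """Additive (Bahdanau-style) attention: score each encoder state
    against the previous decoder state, softmax the scores, and return
    the weighted context vector."""
    # e_j = va^T tanh(Wa s_{i-1} + Ua h_j) for each encoder state h_j
    scores = np.tanh(s_prev @ Wa + H @ Ua) @ va   # shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the T positions
    context = weights @ H                         # shape (d,)
    return context, weights

# Toy setup: T=4 encoder states of size d=3, decoder state of size 3
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
s = rng.normal(size=(3,))
Wa = rng.normal(size=(3, 3))
Ua = rng.normal(size=(3, 3))
va = rng.normal(size=(3,))
ctx, w = additive_attention(s, H, Wa, Ua, va)
```

The attention weights form a distribution over source positions, which is what lets the decoder "align" to different source words at each step.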
Latest revision as of 01:41, 21 November 2016 (Mon)
Date | People | Last Week | This Week |
---|---|---|---|
2016/11/21 | Yang Feng | 1) ran experiments of rnng+mn [report]; 2) used top-k for memory, still under training; 3) wrote the proposal; 4) discussed the details with Andy | 1) get the result of top-k; 2) try bigger memory; 3) coding work; 4) try syntax-based TM |
| Jiyuan Zhang | | |
| Andi Zhang | | |
| Shiyue Zhang | ran RNNG on MKL successfully, which can double or triple the speed; revised the RNNG User Guide; reran the original model, final result 92.32; reran the wrong-memory models, still running; implemented the dynamic memory model, result 92.54 (0.22 better than the baseline); tried another memory structure [report] | try more different models and summarize the results |
| Guli | read the paper "NMT by jointly learning to align and translate"; read the paper's code and ran NMT (cs-en) on GPU with Andi's help; learned more about Python; prepared data for the Ontology Library | continue to prepare the data; follow teacher Yang's instructions |
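Yang Feng's "used top-k for memory" presumably means reading only the k highest-scoring memory entries instead of the whole memory. A generic NumPy sketch of that selection step; the dot-product similarity and the toy memory contents are my assumptions, not taken from the rnng+mn code:

```python
import numpy as np

def topk_memory(query, memory, k):
    """Return the indices and rows of the k memory entries most
    similar to the query (dot-product similarity), best first."""
    scores = memory @ query                 # (N,) similarity scores
    top = np.argsort(scores)[::-1][:k]      # indices of the k best entries
    return top, memory[top]

# Toy memory of N=5 entries with dimension d=3
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.5, 0.5, 0.0]])
query = np.array([1.0, 0.0, 0.0])
idx, rows = topk_memory(query, memory, k=2)  # picks the two entries closest to the query
```

Restricting the read to the top k entries keeps the memory lookup cheap and filters out low-relevance entries, which is the usual motivation for this trick.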