NLP Status Report 2016-11-21
From cslt Wiki
Latest revision as of 01:41, 21 November 2016 (Mon)
| Date | People | Last Week | This Week |
|---|---|---|---|
| 2016/11/21 | Yang Feng | 1) ran experiments of rnng+mn [report]; 2) used top-k for memory, under training (see the top-k sketch below)<br>1) wrote the proposal; 2) discussed the details with Andy | 1) get the result of top-k; 2) try bigger memory<br>1) coding work<br>1) try syntax-based TM |
| | Jiyuan Zhang | | |
| | Andi Zhang | | |
| | Shiyue Zhang | 1) ran RNNG on MKL successfully, which doubles or triples the speed, and revised the RNNG User Guide; 2) reran the original model and got the final result of 92.32; 3) reran the wrong-memory models, still running | |
| | Guli | 1) read the paper "NMT by jointly learning to align and translate" (see the attention sketch below); 2) read the paper's code and ran NMT (cs-en) on a GPU with Andi's help; 3) learned more about Python; 4) prepared data for the Ontology Library | 1) continue preparing the data; 2) follow teacher Yang's instructions |
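
Two of the items above refer to top-k reads over a memory for the rnng+mn experiments. As a rough illustration only, a minimal sketch of such a read is below; the function name, dot-product scoring, and shapes are assumptions made for this example, not the group's actual implementation.

```python
import numpy as np

def topk_memory_read(query, memory, k=5):
    """Read from a memory matrix using only the top-k most similar slots.

    query:  (d,)   query vector
    memory: (n, d) one row per memory slot
    """
    scores = memory @ query                  # dot-product similarity, shape (n,)
    topk = np.argpartition(scores, -k)[-k:]  # indices of the k highest scores
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()                             # softmax over the k kept slots only
    return w @ memory[topk]                  # (d,) weighted sum of top-k slots

# toy usage
rng = np.random.default_rng(0)
print(topk_memory_read(rng.standard_normal(16),
                       rng.standard_normal((100, 16)), k=5).shape)  # (16,)
```

Only the k selected slots contribute to the returned vector, which keeps the read sharp even when the memory is enlarged ("try bigger memory" above).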
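
The paper Guli read, "NMT by jointly learning to align and translate" (Bahdanau et al.), scores each encoder annotation h_j against the previous decoder state s_{i-1} with e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j) and takes a softmax-weighted context vector. A minimal NumPy sketch of that equation follows; the parameter names mirror the paper, but the shapes and usage here are illustrative assumptions, not the paper's reference code.

```python
import numpy as np

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """Additive (Bahdanau) attention: e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j).

    s_prev: (d,)    previous decoder state s_{i-1}
    H:      (T, 2d) encoder annotations h_1..h_T
    """
    e = np.tanh(s_prev @ W_a + H @ U_a) @ v_a  # alignment scores, shape (T,)
    a = np.exp(e - e.max())
    a /= a.sum()                               # alignment weights alpha_ij
    return a @ H                               # context vector c_i, shape (2d,)

# toy usage with hypothetical sizes
T, d, h = 7, 4, 8
rng = np.random.default_rng(0)
c = additive_attention(rng.standard_normal(d),
                       rng.standard_normal((T, 2 * d)),
                       W_a=rng.standard_normal((d, h)),
                       U_a=rng.standard_normal((2 * d, h)),
                       v_a=rng.standard_normal(h))
print(c.shape)  # (8,) == (2d,)
```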