"NLP Status Report 2016-11-21": Difference between revisions
From cslt Wiki
Line 24:
|Shiyue Zhang ||
* run rnng on MKL successfully, which can double or triple the speed.
− * rerun the original model and get the final result
+ * rerun the original model and get the final result 92.32
* rerun the wrong memory models, still running
− * implement the dynamic memory model and get the result which is 0.22 better than baseline
+ * implement the dynamic memory model and get the result 92.54 which is 0.22 better than baseline
* try another structure of memory
||
Revision as of 00:49, 21 November 2016 (Mon)
| Date | People | Last Week | This Week |
|---|---|---|---|
| 2016/11/21 | Yang Feng | | |
| | Jiyuan Zhang | | |
| | Andi Zhang | | |
| | Shiyue Zhang | * run rnng on MKL successfully, which can double or triple the speed<br>* rerun the original model and get the final result 92.32<br>* rerun the wrong memory models, still running<br>* implement the dynamic memory model and get the result 92.54 which is 0.22 better than baseline<br>* try another structure of memory | |
| | Guli | | |