“NLP Status Report 2016-11-21”: difference between revisions
Line 43:
  * rerun the wrong memory models, still running
  * implement the dynamic memory model and get the result 92.54 which is 0.22 better than baseline
− * try another structure of memory
+ * try another structure of memory [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]]
  * try more different models and summary the results
Revision as of 01:33, 21 November 2016
| Date | People | Last Week | This Week |
|---|---|---|---|
| 2016/11/21 | Yang Feng | 1) ran experiments of rnng+mn [report]; 2) used top-k for memory, under training<br>1) wrote the proposal; 2) discussed the details with Andy | 1) get the result of top-k; 2) try bigger memory<br>1) coding work<br>1) try syntax-based TM |
| | Jiyuan Zhang | | |
| | Andi Zhang | | |
| | Shiyue Zhang | | |
| | Guli | | |
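
The "used top-k for memory" item is not described further in this report. As a rough illustration only, the sketch below shows one common way a top-k memory read can work: score every memory slot against a query, keep the k highest-scoring slots, and return a softmax-weighted sum over just those slots. All names here (`topk_memory_read`, `memory`, `query`, `k`) are illustrative assumptions and are not taken from the rnng+mn code referenced above.

```python
# Hypothetical sketch of a top-k memory read (not the report's actual code).
import numpy as np

def topk_memory_read(memory: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    """Weighted sum of the k memory slots most similar to the query.

    memory: (num_slots, dim) matrix of memory vectors.
    query:  (dim,) query vector.
    """
    scores = memory @ query                 # dot-product score for every slot
    top_idx = np.argsort(scores)[-k:]       # indices of the k best-scoring slots
    top_scores = scores[top_idx]
    weights = np.exp(top_scores - top_scores.max())
    weights /= weights.sum()                # softmax restricted to the selected slots
    return weights @ memory[top_idx]        # (dim,) read vector

# Example usage: 100 memory slots of dimension 16, read with k=5.
mem = np.random.randn(100, 16)
q = np.random.randn(16)
read_vec = topk_memory_read(mem, q, k=5)
```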