{| class="wikitable"
! Date !! People !! Last Week !! This Week
|-
| rowspan="5" | 2016/11/21
| Yang Feng
|
* ran experiments on RNNG+MN ([http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f8/Progress_of_RNNG_with_memory_network.pdf report])
** used top-k selection for the memory; still training
* wrote a proposal for sequence-to-sequence+MN
|
|-
| Jiyuan Zhang
|
|
|-
| Andi Zhang
|
* prepared a new data set for paraphrase, removing repetitions and most of the noise
* ran NMT on the fr-en data set and the new paraphrase set
* read through the source code to find ways to modify it
* helped Guli with running NMT on our server
|
* decide whether to drop Theano
* start working on the code
|-
| Shiyue Zhang
|
* ran RNNG with MKL successfully, which doubles or triples the speed
* reran the original model and got the final result of 92.32
* reran the wrong memory models; still running
* implemented the dynamic memory model and got 92.54, which is 0.22 better than the baseline
* tried another memory structure
|
* try more models and summarize the results
* publish the technical reports
|-
| Guli
|
|
|}