Date |
People |
Last Week |
This Week
|
2016/12/05
|
Yang Feng |
- rnng+MN: got the result of the k-means method; the result is slightly worse;
- fixed the bug;
- analyzed the memory units, changed the similarity calculation, and reran.
- S2S+MN: read the code and discussed the implementation details with Andy;
- checked the Wikianswers data and found the answers are usually much longer than the questions;
- read 12 QA-related papers in the proceedings of ACL16 and EMNLP16 but haven't found a proper dataset yet.
- Huilan's work: got a version with better results, focusing on syntactic transformation.
|
- rnng+MN: get the result with the new similarity calculation.
- S2S+MN: revise the TensorFlow code to make it equivalent to the Theano version.
- poetry: review Jiyuan's code.
- Huilan's work: continue adding syntactic information.
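The similarity calculation for memory addressing mentioned above can be sketched as follows. This is a minimal NumPy illustration, assuming cosine similarity followed by a softmax over slots; `memory_weights` and the shapes are hypothetical, not the project's actual code:

```python
import numpy as np

def memory_weights(query, memory):
    """Address memory slots by cosine similarity, softmax-normalized.

    query: (d,) vector; memory: (n, d) matrix of n slots.
    (Hypothetical sketch; the project's real similarity may differ.)
    """
    # cosine similarity between the query and each memory slot
    sims = memory @ query / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8
    )
    # softmax over slots gives the read/attention weights
    e = np.exp(sims - sims.max())
    return e / e.sum()

memory = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query = np.array([1.0, 0.0])
w = memory_weights(query, memory)   # weights sum to 1; slot 0 matches best
```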
|
Jiyuan Zhang |
- restructured the code
- found the cause of the cost randomness
- modified the memory weights and ran an experiment
- read a paper
- briefly explained my code to Miss Feng
- discussed with Liantian how to use TensorFlow to implement his idea
|
- improve poem model
|
Andi Zhang |
- dealt with the zh2en data set and ran it on the NTM
- had a small breakthrough with the code
|
- get the output of the encoder to form the memory
- continue the coding of seq2seq with MemN2N
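Using the encoder outputs as memory amounts to one MemN2N-style hop: match the decoder state against each encoder time step and read a weighted sum. A minimal NumPy sketch, assuming dot-product matching; `memn2n_read` and the dimensions are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def memn2n_read(decoder_state, encoder_outputs):
    """One MemN2N-style hop over encoder outputs used as memory.

    decoder_state: (d,); encoder_outputs: (T, d), one slot per source step.
    (Hypothetical sketch of the idea, not the project's code.)
    """
    scores = encoder_outputs @ decoder_state      # dot-product match per slot
    p = np.exp(scores - scores.max())
    p /= p.sum()                                  # attention distribution
    return p @ encoder_outputs                    # weighted memory read

enc = np.random.randn(5, 8)   # T=5 source steps, d=8 hidden units
dec = np.random.randn(8)
read = memn2n_read(dec, enc)  # a (d,) vector read from memory
```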
|
Shiyue Zhang |
- found a bug in my code and fixed it.
- tried memory with a gate and found a big problem with the memory.
- reran previous models; the results are not better than the baseline. [report]
- reran the original model with the same seed and got exactly the same result.
- published a TRP [1]
|
- try to solve the memory problem
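The "memory with gate" tried last week typically means blending the RNN state with the memory read through a learned sigmoid gate. A minimal NumPy sketch, assuming a single scalar gate computed from the concatenated vectors; `gated_merge` and `w_g` are hypothetical names and shapes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_merge(state, memory_read, w_g):
    """Blend the RNN state with a memory read via a learned gate.

    g near 1 trusts the memory; g near 0 keeps the state.
    (Illustrative assumption, not the project's actual gating.)
    """
    g = sigmoid(w_g @ np.concatenate([state, memory_read]))
    return g * memory_read + (1.0 - g) * state

state = np.zeros(4)
mem = np.ones(4)
w_g = np.zeros(8)               # gate parameters (hypothetical shape 2d)
out = gated_merge(state, mem, w_g)   # g = 0.5 here, so an even blend
```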
|
Guli |
- was busy but got nothing done for the first two days of the week.
- modified the code and ran NMT on the fr-en data set
- modified the code and ran NMT on the ch-uy data set
- continued writing a survey of Chinese-Uyghur MT
|
|