| Date | People | Last Week | This Week |
| 2016/10/31 | Yang Feng |
- added new features to rnng+mn, including automatically detecting wrong sentences, swapping memories more frequently, and filtering memory units to speed up training
- ran experiments for rnng+mn [report]
- read the code of sequence-to-sequence with TensorFlow
- recruited interns
- wrote the Huilan work summary
|
- optimize rnng+mn
- discuss the code with Jiyuan
- work with Andy on NMT
- intern interviews
- Huilan work
|
| Jiyuan Zhang |
- checked the previous encoder-memory code
- completed the decoder-memory code; experiments are now running
|
- continue modifying the memory model
- read related papers
|
| Andi Zhang |
- ran NMT (cs-en) on GPU, but the BLEU score is low, possibly due to the small corpus
- ran NMT on the paraphrase data set
- wrote the MemN2N document
|
- run NMT (fr-en) to reproduce the BLEU score reported in the paper
- run the paraphrase experiments for validation
|
| Shiyue Zhang |
- tried rnng on GPU
- read Feng's code
- modified the model
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf report]
|
- try MKL
- modify the model
|