{| class="wikitable"
! Date !! People !! Last Week !! This Week
|-
| rowspan="5" | 2017/7/3
| Jiyuan Zhang
|
* Generated a stream according to a couplet
* Almost completed the task of filling in the blanks of a couplet
|
* Continue to refine the couplet model
|-
| Aodong LI
|
* Obtained 55,000+ English poems and 260,000+ lines after preprocessing
* Added phrase separators as the style indicator; every line has at least one separator
* The training loss did not decrease very much, only from 440 to 50
* Translation quality deteriorated when the language model was added
|
* Try a larger language model to decrease the training loss
* Try character-based MT for English-Chinese translation
|-
| Shiyue Zhang
|
|
|-
| Shipan Ren
|
* Looked for the performance (BLEU scores) of other models on the WMT2014 dataset in published papers, but did not find them
* Installed and built Moses on the server
|
* Train a statistical machine translation model with Moses and test it on the WMT2014 en-de and en-fr datasets
* Collate experimental results and compare our baseline model with Moses (a BLEU-collation sketch follows the table)
|-
| Jiayu Guo
|
* Processed the documents: so far, Shiji has been split into 24,000 sentence pairs and Zizhitongjian into 16,000 pairs
|
* Adjust the jieba source code to make its segmentation more accurate for classical Chinese (a related sketch follows the table)
* Read the model source code
|}
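The following is a minimal sketch (not part of the report itself) of how the planned baseline-vs-Moses comparison could be collated: corpus-level BLEU for both systems' outputs against the same tokenized references, computed with NLTK. All file names are hypothetical placeholders; in practice Moses' own multi-bleu.perl script could be used instead.

<pre>
# Hedged sketch: collate BLEU for two systems on the same test set.
# File names below are hypothetical placeholders, not the actual experiment files.
from nltk.translate.bleu_score import corpus_bleu

def read_tokenized(path):
    """Read one tokenized sentence per line and split it on whitespace."""
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f]

references    = read_tokenized("newstest2014.ref.de")       # hypothetical reference file
moses_hyps    = read_tokenized("moses.newstest2014.de")     # hypothetical Moses output
baseline_hyps = read_tokenized("baseline.newstest2014.de")  # hypothetical NMT baseline output

# corpus_bleu expects, for each sentence, a list of reference token lists
# (here there is a single reference per sentence).
ref_lists = [[r] for r in references]

print("Moses BLEU:    %.2f" % (100 * corpus_bleu(ref_lists, moses_hyps)))
print("Baseline BLEU: %.2f" % (100 * corpus_bleu(ref_lists, baseline_hyps)))
</pre>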
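Jiayu Guo's plan is to adjust the jieba source code for classical Chinese. As a lighter-weight starting point (an assumption, not the reported method), the sketch below biases jieba toward classical-Chinese vocabulary through its public user-dictionary API; the dictionary file name and example entries are hypothetical.

<pre>
# Hedged sketch: bias jieba toward classical-Chinese vocabulary with a user
# dictionary, before making deeper source-level changes.
import jieba

# "shiji_userdict.txt" is a hypothetical file with one entry per line in the
# format "word [freq] [pos]", e.g. proper names and classical function words
# collected from Shiji / Zizhitongjian.
jieba.load_userdict("shiji_userdict.txt")

# Individual entries can also be added programmatically.
jieba.add_word("项羽", freq=20000, tag="nr")

sentence = "项羽乃悉引兵渡河"
print(jieba.lcut(sentence))  # segmentation now respects the added vocabulary
</pre>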