Difference between revisions of “Xingchao work”
From cslt Wiki
Revision as of 06:57, 30 September 2014 (Tue)
Contents
Paper Recommendation
Pre-Trained Multi-View Word Embedding.[1]
Learning Word Representation Considering Proximity and Ambiguity.[2]
Continuous Distributed Representations of Words as Input of LSTM Network Language Model.[3]
WikiRelate! Computing Semantic Relatedness Using Wikipedia.[4]
Japanese-Spanish Thesaurus Construction Using English as a Pivot [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e8/Japanese-Spanish_Thesaurus_Construction.pdf]
Chaos Work
SSA Model
Build a two-dimensional SSA model.
Started: 2014-09-30
SEMPRE Research
Download the SEMPRE toolkit.
Started: 2014-09-30
Knowledge Vector
Pre-process the corpus.
Started: 2014-09-30
Moses translation model
Pre-process the corpus: remove sentences that contain rarely seen words.
Started: 2014-09-30
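The rare-word filtering step above can be sketched as follows. This is a minimal illustration, not the actual Moses preprocessing used here; the function name, whitespace tokenization, and frequency threshold are all assumptions.

```python
from collections import Counter

def filter_rare_sentences(sentences, min_count=2):
    """Drop any sentence containing a word that occurs fewer than
    min_count times in the whole corpus (threshold is an assumed
    example value; real pipelines tune it per corpus)."""
    counts = Counter(word for s in sentences for word in s.split())
    return [s for s in sentences
            if all(counts[w] >= min_count for w in s.split())]

corpus = [
    "the cat sat",
    "the cat sat here",
    "the zyzzyva sat here",  # "zyzzyva" occurs once -> sentence dropped
]
print(filter_rare_sentences(corpus))
# → ['the cat sat', 'the cat sat here']
```

In a parallel-corpus setting the same filter would be applied to aligned sentence pairs, dropping a pair when either side contains a rare word, so the alignment stays intact.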