=Vivi (薇薇): A Robot That Writes Poems=

Members: Dong Wang (王东), Qixin Wang (王琪鑫), Tianyi Luo (骆天一), Jiyuan Zhang (张纪袁), Yang Feng (冯洋)

==vivi 3.0 (ongoing)==

===Goals===

* Transfer modern sentences to poems
* Utilize extra knowledge to boost innovation
* Reinforcement learning to improve quality


==vivi 2.0==

===Basic approach===

* Implemented in TensorFlow
* Attention-based LSTM/GRU sequence-to-sequence model
* Words are sampled as input to generate the current sentence
* Memory augmentation (global and local); see the sketch after this list
* Local attention for theme (+)
* Local attention on previous generation, with couplet assignment (line number?) (+)
* N-best decoding (+)
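
The actual model is the TensorFlow implementation listed above; as a rough illustration of how attention-based decoding can be combined with memory augmentation, the NumPy sketch below runs a single decoding step in which the base sequence-to-sequence distribution is interpolated with a distribution read from an external memory of poem lines. The tensor shapes, the interpolation weight gamma, and all function names are illustrative assumptions, not the vivi 2.0 code.

<pre>
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    # Dot-product attention: weight the values by similarity of the keys to the query.
    weights = softmax(keys @ query)           # (T,)
    return weights @ values, weights          # context vector, attention weights

def decode_step(dec_state, enc_states, mem_keys, mem_probs, W_out, gamma=0.3, n_best=5):
    """One decoding step: the base attention-based S2S distribution is interpolated
    with a distribution read from an external memory (memory augmentation).
    dec_state (H,), enc_states (T,H), mem_keys (M,H), mem_probs (M,V), W_out (V,2H);
    gamma is an assumed interpolation weight."""
    context, _ = attend(dec_state, enc_states, enc_states)            # attention over encoder states
    p_model = softmax(W_out @ np.concatenate([dec_state, context]))   # base S2S distribution (V,)
    _, mem_weights = attend(dec_state, mem_keys, mem_keys)            # address the memory
    p_memory = mem_weights @ mem_probs                                # memory distribution (V,)
    p = (1.0 - gamma) * p_model + gamma * p_memory                    # interpolated output distribution
    return np.argsort(-p)[:n_best], p                                 # top candidates for N-best decoding
</pre>

Because the final distribution depends on what is stored in the memory, swapping in a memory built from lines of another style or genre biases generation toward that style, which is the idea behind the style and genre transfer feature described below; the top-scoring candidates returned at each step feed the N-best decoding.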

===Implementation details===

* Rhyme groups containing too few characters are removed (see the filtering sketch after this list)
* Characters seldom used as rhyme words are removed
* Low-frequency characters are removed
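
These filtering rules are only stated at a high level above; the sketch below shows one way such filtering could be applied, assuming a rhyme table mapping each rhyme group to its characters and treating the line-final character as the rhyming position. The thresholds are placeholders, not the values used in vivi 2.0.

<pre>
from collections import Counter

def filter_vocab(corpus_lines, rhyme_table,
                 min_group_size=5, min_rhyme_uses=3, min_char_freq=10):
    """corpus_lines: poem lines (strings); rhyme_table: rhyme group -> set of characters."""
    char_freq = Counter(ch for line in corpus_lines for ch in line)
    rhyme_use = Counter(line[-1] for line in corpus_lines if line)   # line-final = rhyming position

    # 1) drop rhyme groups that contain too few characters
    groups = {g: chars for g, chars in rhyme_table.items() if len(chars) >= min_group_size}
    # 2) drop characters that are seldom actually used as rhyme words
    groups = {g: {c for c in chars if rhyme_use[c] >= min_rhyme_uses}
              for g, chars in groups.items()}
    # 3) drop low-frequency characters from the vocabulary
    vocab = {c for c, n in char_freq.items() if n >= min_char_freq}
    return vocab, groups
</pre>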

===Features===

* Train a base model, then use the memory to realise fine-grained innovation
* The memory enables style and genre transfer (a sketch of building such a style memory follows this list)
* Local attention enables human-guided composition (+)
* Supports the parallel couplets (对仗) required in regulated verse (律诗)
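
As a toy illustration of the style and genre transfer idea (see the decoding sketch in the Basic approach section), the sketch below builds a memory bank from lines written in the target style. The one-hot next-character distributions and the embed function are simplifications assumed for illustration, not the project's actual memory construction.

<pre>
import numpy as np

def build_style_memory(style_lines, char_to_id, embed, vocab_size):
    """Turn lines written in the target style into memory slots: each slot's key is the
    embedding of a character, and its value is a one-hot distribution over the character
    that followed it in the style corpus."""
    keys, probs = [], []
    for line in style_lines:
        ids = [char_to_id[c] for c in line if c in char_to_id]
        for cur_id, nxt_id in zip(ids, ids[1:]):
            keys.append(embed(cur_id))
            p = np.zeros(vocab_size)
            p[nxt_id] = 1.0
            probs.append(p)
    return np.stack(keys), np.stack(probs)   # use as mem_keys / mem_probs in the sketch above
</pre>

Building the memory from, say, Song iambics rather than Tang quatrains changes what the memory read favours, which is the sense in which the memory supports style and genre transfer.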

===Test results===

===Papers===

* [https://arxiv.org/abs/1705.03773 Creative generation of poems]

==vivi 1.0==

===Basic approach===

* Implemented in Theano
* Attention-based LSTM/GRU sequence-to-sequence model
* The input is the first line of a poem and the output is all remaining lines (see the data-preparation sketch after this list)
* Word vectors are pretrained on a combined corpus of classical texts in several genres
* The user's input can be expanded at generation time
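
The bullets above fix the input/output convention of vivi 1.0 (first line in, remaining lines out); the sketch below turns a corpus of poems into sequence-to-sequence training pairs under that convention. The corpus format (each poem as a list of lines) and the character-level tokenisation are assumptions for illustration.

<pre>
def make_s2s_pairs(poems, line_sep="/"):
    """poems: list of poems, each given as a list of lines (strings).
    Returns (source, target) character sequences: the first line as the source,
    the remaining lines (joined by a separator) as the target."""
    pairs = []
    for lines in poems:
        if len(lines) < 2:
            continue
        src = list(lines[0])              # first line of the poem
        tgt = list(line_sep.join(lines[1:]))  # all remaining lines
        pairs.append((src, tgt))
    return pairs

# example: make_s2s_pairs([["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡"]])
</pre>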


===Test results===

===Papers===

* [https://arxiv.org/abs/1604.06274 Chinese Song Iambics Generation with Neural Attention-based Model, IJCAI 2016]
* [http://link.springer.com/chapter/10.1007/978-3-319-49685-6_4/fulltext.html Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test, Springer, LNCS, vol 10023, pp. 171-183]
* [https://arxiv.org/abs/1705.03773 Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, Andi Zhang, "Flexible and Creative Chinese Poetry Generation Using Neural Memory"]


===Articles===

[[Wangd-wiki-article-vvpoem|The Story of Vivi (薇薇的故事)]]