<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://index.cslt.org/mediawiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="zh-cn">
		<id>http://index.cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lty</id>
		<title>cslt Wiki - User contributions [zh-cn]</title>
		<link rel="self" type="application/atom+xml" href="http://index.cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lty"/>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E7%89%B9%E6%AE%8A:%E7%94%A8%E6%88%B7%E8%B4%A1%E7%8C%AE/Lty"/>
		<updated>2026-04-07T11:18:56Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.23.3</generator>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Vivi-poem-generation</id>
		<title>Vivi-poem-generation</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Vivi-poem-generation"/>
				<updated>2018-07-24T17:05:02Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Vivi: A Poetry-Writing Robot=&lt;br /&gt;
&lt;br /&gt;
Members: Dong Wang, Qixin Wang, Tianyi Luo, Jiyuan Zhang, Yang Feng&lt;br /&gt;
&lt;br /&gt;
==vivi 3.0 (ongoing)==&lt;br /&gt;
&lt;br /&gt;
===Goals===&lt;br /&gt;
&lt;br /&gt;
* Transform modern sentences into poems&lt;br /&gt;
* Leverage external knowledge to boost creativity&lt;br /&gt;
* Apply reinforcement learning to improve quality&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==vivi 2.0==&lt;br /&gt;
&lt;br /&gt;
===Basic approach===&lt;br /&gt;
&lt;br /&gt;
* Implemented in TensorFlow&lt;br /&gt;
* Attention-based LSTM/GRU sequence-to-sequence model&lt;br /&gt;
* Sampled words fed as input when generating the current sentence&lt;br /&gt;
* Memory augmentation (global and local)&lt;br /&gt;
* Local attention for theme (+)&lt;br /&gt;
* Local attention on the previous generation, with couplet assignment (line number?) (+)&lt;br /&gt;
* N-best decoding (+)&lt;br /&gt;
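The N-best decoding bullet above can be sketched as a small beam search. Everything in this sketch is illustrative: the step_fn interface and the toy scoring table are assumptions for demonstration, not the wiki's actual implementation.

```python
import heapq

def nbest_decode(step_fn, start, n=3, length=5):
    """Toy N-best (beam) search. step_fn(token) returns a list of
    (log_prob, next_token) pairs; this interface is hypothetical."""
    beams = [(0.0, [start])]
    for _ in range(length):
        candidates = []
        for score, seq in beams:
            for lp, tok in step_fn(seq[-1]):
                # extend each partial sequence and accumulate log-probability
                candidates.append((score + lp, seq + [tok]))
        # keep only the n highest-scoring partial sequences
        beams = heapq.nlargest(n, candidates, key=lambda c: c[0])
    return beams

# Tiny deterministic "model": each symbol proposes two successors.
def toy_step(tok):
    table = {"a": [(-0.1, "b"), (-2.0, "c")],
             "b": [(-0.5, "c"), (-0.7, "a")],
             "c": [(-0.3, "a"), (-1.0, "b")]}
    return table[tok]

print(nbest_decode(toy_step, "a", n=2, length=3))
```

Returning the full n-best list (rather than only the top hypothesis) lets a later stage rerank candidates, e.g. by rhyme or tonal constraints.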
&lt;br /&gt;
===Implementation details===&lt;br /&gt;
&lt;br /&gt;
* Rhyme groups containing few characters are removed&lt;br /&gt;
* Characters seldom used as rhyming words are removed&lt;br /&gt;
* Low-frequency characters are removed&lt;br /&gt;
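The last pruning rule (dropping low-frequency characters) amounts to a corpus frequency cut-off. A minimal sketch, assuming a made-up four-line corpus and an assumed threshold min_count:

```python
from collections import Counter

def build_vocab(poems, min_count=5):
    """Keep only characters whose corpus frequency reaches min_count.
    min_count is an assumed threshold, not taken from the wiki page."""
    counts = Counter(ch for poem in poems for ch in poem)
    return {ch for ch, c in counts.items() if c >= min_count}

# Toy corpus for illustration only.
corpus = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡"]
vocab = build_vocab(corpus, min_count=2)
print(sorted(vocab))  # characters appearing at least twice
```

The same Counter could also drive the rhyme-word rule, by counting only line-final characters.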
&lt;br /&gt;
===Features===&lt;br /&gt;
&lt;br /&gt;
* Train a base model, then use memory for fine-grained creative control&lt;br /&gt;
* Memory enables style and genre transfer&lt;br /&gt;
* Local attention enables human-guided composition (+)&lt;br /&gt;
* Supports parallel couplets in regulated verse&lt;br /&gt;
&lt;br /&gt;
===Test results===&lt;br /&gt;
&lt;br /&gt;
===Papers===&lt;br /&gt;
&lt;br /&gt;
* [https://arxiv.org/abs/1705.03773 Creative generation of poems]&lt;br /&gt;
&lt;br /&gt;
==vivi 1.0==&lt;br /&gt;
&lt;br /&gt;
===Basic approach===&lt;br /&gt;
&lt;br /&gt;
* Implemented in Theano&lt;br /&gt;
* Sequence-to-sequence LSTM/GRU model with an attention mechanism&lt;br /&gt;
* Input is the first line of a poem; output is all remaining lines&lt;br /&gt;
* Pre-trained word vectors, trained jointly on classical texts of several genres&lt;br /&gt;
* User input can be expanded at generation time&lt;br /&gt;
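The input-expansion step in the list above is not specified on this page; one plausible reading is augmenting the user's first line with related keywords before it reaches the seq2seq encoder. A hypothetical sketch, where the related-word table is entirely made up:

```python
def expand_input(first_line, related):
    """Hypothetical input expansion: collect related keywords for each
    character of the user's first line. 'related' is an assumed lookup
    table (e.g. built from pre-trained word vectors), not a real resource."""
    extra = []
    for ch in first_line:
        extra.extend(related.get(ch, []))
    return first_line, extra

# Toy related-word table for illustration.
related = {"月": ["霜", "光"], "山": ["水"]}
line, keywords = expand_input("明月出天山", related)
print(line, keywords)
```

The expanded keywords could then be concatenated to the encoder input or fed to an attention memory.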
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Test results===&lt;br /&gt;
&lt;br /&gt;
* [[中国古诗词图灵测试|vivi 1.0 Turing test results]]&lt;br /&gt;
&lt;br /&gt;
===Papers===&lt;br /&gt;
&lt;br /&gt;
* [https://arxiv.org/abs/1604.06274 Chinese Song Iambics Generation with Neural Attention-based Model, IJCAI 2016]&lt;br /&gt;
&lt;br /&gt;
* [http://link.springer.com/chapter/10.1007/978-3-319-49685-6_4/fulltext.html Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test, Springer, LNCS, vol. 10023, pp. 171-183]&lt;br /&gt;
&lt;br /&gt;
* [https://arxiv.org/abs/1705.03773 Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, Andi Zhang, &amp;quot;Flexible and Creative Chinese Poetry Generation Using Neural Memory&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Articles===&lt;br /&gt;
&lt;br /&gt;
[[Wangd-wiki-article-vvpoem|The Story of Vivi]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo</id>
		<title>Tianyi Luo</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo"/>
				<updated>2017-08-15T02:38:43Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Tianyi Luo (骆天一)'''&lt;br /&gt;
&lt;br /&gt;
'''Education:'''&lt;br /&gt;
&lt;br /&gt;
Master's: Peking University (北京大学), 2013.&lt;br /&gt;
&lt;br /&gt;
'''Work experience:'''&lt;br /&gt;
* 2014.12-: Machine Learning/Natural Language Processing Research Engineer at CSLT, RIIT, Tsinghua University (清华大学), China. (Advisor: [http://wangd.cslt.org/ Dong Wang])&lt;br /&gt;
&lt;br /&gt;
'''Research interests:''' &lt;br /&gt;
&lt;br /&gt;
Machine Learning, Natural Language Processing, Information Retrieval and Recommender Systems&lt;br /&gt;
&lt;br /&gt;
'''Publications([http://lty.cslt.org/ Tianyi Luo's Homepage]):''' &lt;br /&gt;
#Tianyi Luo*, Qixin Wang*, Dong Wang. “Chinese Song Iambics Generation with Neural Attention-based Model”, The 30th AAAI Conference on Artificial Intelligence (AAAI 2016), full paper submitted. (*: equal contribution) [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/34/Songci.pdf Paper]]&lt;br /&gt;
#Tianyi Luo, Dong Wang, Rong Liu and Yiqiao Pan, &amp;quot;Stochastic Top-k ListNet&amp;quot;, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015 long oral paper) , pp. 676-684, Sep 17-21, 2015. Lisbon, Portugal.  [[http://www.aclweb.org/anthology/D/D15/D15-1079.pdf Paper]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/00/Emnlp2015longslides.pdf PPT]] [[https://github.com/pkuluotianyi/topKStoListNet Code]]&lt;br /&gt;
#Dongxu Zhang, Tianyi Luo, Rong Liu, Dong Wang. “Learning from LDA using Deep Neural Networks”, arXiv: 1508.01011  [[http://arxiv.org/pdf/1508.01011.pdf Paper]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/ec/Emnlp2015shortslides.pdf PPT]] [[http://pan.baidu.com/s/1i3Ek35b Code]]&lt;br /&gt;
&lt;br /&gt;
'''Related documents'''&lt;br /&gt;
*Tianyi Luo's Moses installation and training process [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fe/2016-01-03_Moses%E5%AE%89%E8%A3%85%E8%AE%AD%E7%BB%83%E5%85%A8%E8%BF%87%E7%A8%8B.pdf pdf]] 01/04/2016&lt;br /&gt;
*Tianyi Luo's yearly report(01/2015~12/2015) [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4c/2016-01-06_Tianyi_Luo%27s_yearly_report.pdf pdf]] 01/06/2016&lt;br /&gt;
*Intelligent QA System Overview [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/7d/Cslt-trp-template.pdf pdf]] 03/06/2016&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo</id>
		<title>Tianyi Luo</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo"/>
				<updated>2017-08-15T02:38:12Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Tianyi Luo (骆天一)'''&lt;br /&gt;
&lt;br /&gt;
'''Education:'''&lt;br /&gt;
&lt;br /&gt;
Master's: Peking University (北京大学), 2013.&lt;br /&gt;
&lt;br /&gt;
'''Work experience:'''&lt;br /&gt;
* 2014.5-2014.9: Research Assistant in Natural Language Processing, The Hong Kong Polytechnic University, Hong Kong, China (香港理工大学) (Advisor: [http://www4.comp.polyu.edu.hk/~cswjli/ Wenjie Li])&lt;br /&gt;
* 2014.12-: Machine Learning/Natural Language Processing Research Engineer at CSLT, RIIT, Tsinghua University (清华大学), China. (Advisor: [http://wangd.cslt.org/ Dong Wang])&lt;br /&gt;
&lt;br /&gt;
'''Research interests:''' &lt;br /&gt;
&lt;br /&gt;
Machine Learning, Natural Language Processing, Information Retrieval and Recommender Systems&lt;br /&gt;
&lt;br /&gt;
'''Publications([http://lty.cslt.org/ Tianyi Luo's Homepage]):''' &lt;br /&gt;
#Tianyi Luo*, Qixin Wang*, Dong Wang. “Chinese Song Iambics Generation with Neural Attention-based Model”, The 30th AAAI Conference on Artificial Intelligence (AAAI 2016), full paper submitted. (*: equal contribution) [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/34/Songci.pdf Paper]]&lt;br /&gt;
#Tianyi Luo, Dong Wang, Rong Liu and Yiqiao Pan, &amp;quot;Stochastic Top-k ListNet&amp;quot;, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015 long oral paper) , pp. 676-684, Sep 17-21, 2015. Lisbon, Portugal.  [[http://www.aclweb.org/anthology/D/D15/D15-1079.pdf Paper]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/00/Emnlp2015longslides.pdf PPT]] [[https://github.com/pkuluotianyi/topKStoListNet Code]]&lt;br /&gt;
#Dongxu Zhang, Tianyi Luo, Rong Liu, Dong Wang. “Learning from LDA using Deep Neural Networks”, arXiv: 1508.01011  [[http://arxiv.org/pdf/1508.01011.pdf Paper]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/ec/Emnlp2015shortslides.pdf PPT]] [[http://pan.baidu.com/s/1i3Ek35b Code]]&lt;br /&gt;
&lt;br /&gt;
'''Related documents'''&lt;br /&gt;
*Tianyi Luo's Moses installation and training process [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fe/2016-01-03_Moses%E5%AE%89%E8%A3%85%E8%AE%AD%E7%BB%83%E5%85%A8%E8%BF%87%E7%A8%8B.pdf pdf]] 01/04/2016&lt;br /&gt;
*Tianyi Luo's yearly report(01/2015~12/2015) [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4c/2016-01-06_Tianyi_Luo%27s_yearly_report.pdf pdf]] 01/06/2016&lt;br /&gt;
*Intelligent QA System Overview [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/7d/Cslt-trp-template.pdf pdf]] 03/06/2016&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/News-20161008</id>
		<title>News-20161008</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/News-20161008"/>
				<updated>2016-10-08T06:08:32Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members left CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Caixia Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for China Foreign Affairs University&lt;br /&gt;
&lt;br /&gt;
===Maoning Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for the Central University of Finance and Economics&lt;br /&gt;
&lt;br /&gt;
===[http://lty.cslt.org/ Tianyi Luo]=== &lt;br /&gt;
&lt;br /&gt;
Research engineer, left for University of California, Santa Cruz&lt;br /&gt;
&lt;br /&gt;
[[文件:10.pic.jpg|200px]]&lt;br /&gt;
&lt;br /&gt;
===Xiangyu Zeng=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for Columbia University&lt;br /&gt;
&lt;br /&gt;
===Yiqiao Pan=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for University of Sydney&lt;br /&gt;
&lt;br /&gt;
===Qixin Wang=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for the University of Southern California&lt;br /&gt;
&lt;br /&gt;
==New members joined CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Yang Feng=== &lt;br /&gt;
Special visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yang Wang=== &lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Financial processing&lt;br /&gt;
&lt;br /&gt;
===Xingliang Cheng=== &lt;br /&gt;
&lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyue Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jiyuan Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ying Shi===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyao LI===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yixang Chen===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Aodong Li===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jingyi Lin===&lt;br /&gt;
&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/News-20161008</id>
		<title>News-20161008</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/News-20161008"/>
				<updated>2016-10-08T06:04:29Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members left CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Caixia Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for China Foreign Affairs University&lt;br /&gt;
&lt;br /&gt;
===Maoning Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for the Central University of Finance and Economics&lt;br /&gt;
&lt;br /&gt;
===Tianyi Luo=== &lt;br /&gt;
&lt;br /&gt;
Research engineer, left for University of California, Santa Cruz&lt;br /&gt;
&lt;br /&gt;
[[文件:10.pic.jpg|100px]]&lt;br /&gt;
&lt;br /&gt;
===Xiangyu Zeng=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for Columbia University&lt;br /&gt;
&lt;br /&gt;
===Yiqiao Pan=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for University of Sydney&lt;br /&gt;
&lt;br /&gt;
===Qixin Wang=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for the University of Southern California&lt;br /&gt;
&lt;br /&gt;
==New members joined CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Yang Feng=== &lt;br /&gt;
Special visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yang Wang=== &lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Financial processing&lt;br /&gt;
&lt;br /&gt;
===Xingliang Cheng=== &lt;br /&gt;
&lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyue Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jiyuan Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ying Shi===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyao LI===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yixang Chen===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Aodong Li===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jingyi Lin===&lt;br /&gt;
&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/News-20161008</id>
		<title>News-20161008</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/News-20161008"/>
				<updated>2016-10-08T06:03:59Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members left CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Caixia Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for China Foreign Affairs University&lt;br /&gt;
&lt;br /&gt;
===Maoning Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for the Central University of Finance and Economics&lt;br /&gt;
&lt;br /&gt;
===Tianyi Luo=== &lt;br /&gt;
&lt;br /&gt;
Research engineer, left for University of California, Santa Cruz&lt;br /&gt;
&lt;br /&gt;
[[文件:10.pic.jpg|200px]]&lt;br /&gt;
&lt;br /&gt;
===Xiangyu Zeng=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for Columbia University&lt;br /&gt;
&lt;br /&gt;
===Yiqiao Pan=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for University of Sydney&lt;br /&gt;
&lt;br /&gt;
===Qixin Wang=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for the University of Southern California&lt;br /&gt;
&lt;br /&gt;
==New members joined CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Yang Feng=== &lt;br /&gt;
Special visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yang Wang=== &lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Financial processing&lt;br /&gt;
&lt;br /&gt;
===Xingliang Cheng=== &lt;br /&gt;
&lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyue Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jiyuan Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ying Shi===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyao LI===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yixang Chen===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Aodong Li===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jingyi Lin===&lt;br /&gt;
&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/News-20161008</id>
		<title>News-20161008</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/News-20161008"/>
				<updated>2016-10-08T06:01:09Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members left CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Caixia Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for China Foreign Affairs University&lt;br /&gt;
&lt;br /&gt;
===Maoning Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for the Central University of Finance and Economics&lt;br /&gt;
&lt;br /&gt;
===Tianyi Luo=== &lt;br /&gt;
&lt;br /&gt;
Research engineer, left for University of California, Santa Cruz&lt;br /&gt;
&lt;br /&gt;
[[文件:10.pic.jpg|200px]]&lt;br /&gt;
&lt;br /&gt;
===Xiangyu Zeng=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for Columbia University&lt;br /&gt;
&lt;br /&gt;
===Yiqiao Pan=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for University of Sydney&lt;br /&gt;
&lt;br /&gt;
===Qixin Wang=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for the University of Southern California&lt;br /&gt;
&lt;br /&gt;
==New members joined CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Yang Feng=== &lt;br /&gt;
Special visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yang Wang=== &lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Financial processing&lt;br /&gt;
&lt;br /&gt;
===Xingliang Cheng=== &lt;br /&gt;
&lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyue Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jiyuan Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ying Shi===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyao LI===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yixang Chen===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Aodong Li===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jingyi Lin===&lt;br /&gt;
&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:10.pic.jpg</id>
		<title>文件:10.pic.jpg</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:10.pic.jpg"/>
				<updated>2016-10-08T06:00:21Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:9.pic.jpg</id>
		<title>文件:9.pic.jpg</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:9.pic.jpg"/>
				<updated>2016-10-08T05:58:11Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/News-20161008</id>
		<title>News-20161008</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/News-20161008"/>
				<updated>2016-10-08T05:55:05Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：/* Members left CSLT recently */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members left CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Caixia Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for China Foreign Affairs University&lt;br /&gt;
&lt;br /&gt;
===Maoning Wang===&lt;br /&gt;
&lt;br /&gt;
Special visitor, left for the Central University of Finance and Economics&lt;br /&gt;
&lt;br /&gt;
===Tianyi Luo=== &lt;br /&gt;
&lt;br /&gt;
Research engineer, left for University of California, Santa Cruz&lt;br /&gt;
&lt;br /&gt;
===Xiangyu Zeng=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for Columbia University&lt;br /&gt;
&lt;br /&gt;
===Yiqiao Pan=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for University of Sydney&lt;br /&gt;
&lt;br /&gt;
===Qixin Wang=== &lt;br /&gt;
&lt;br /&gt;
Visiting student, left for the University of Southern California&lt;br /&gt;
&lt;br /&gt;
==New members joined CSLT recently==&lt;br /&gt;
&lt;br /&gt;
===Yang Feng=== &lt;br /&gt;
Special visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yang Wang=== &lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Financial processing&lt;br /&gt;
&lt;br /&gt;
===Xingliang Cheng=== &lt;br /&gt;
&lt;br /&gt;
Student&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyue Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jiyuan Zhang===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ying Shi===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
===Shiyao LI===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
===Yixang Chen===&lt;br /&gt;
Intern&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Aodong Li===&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
NLP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jingyi Lin===&lt;br /&gt;
&lt;br /&gt;
Visitor&lt;br /&gt;
&lt;br /&gt;
Speech processing&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Publication-trp</id>
		<title>Publication-trp</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Publication-trp"/>
				<updated>2016-06-20T19:37:32Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[文件:Aikefu.bmp|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template.pdf|TRP-20160004: A Review of Neural QA, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:simpair.png|200px]]&lt;br /&gt;
*[[媒体文件:Trp20160003.pdf|TRP-20160003: A study of Similar Word Model for Unfrequent Word Enhancement in Speech Recognition, Xi Ma, Dong Wang and Javier Tejedor]]&lt;br /&gt;
&lt;br /&gt;
[[文件:low-freq.png|200px]]&lt;br /&gt;
*[[媒体文件:How to deal with low frequency words.pdf|TRP-20160002: Low-Frequency Words Embedding, Chao Xing, Yiqiao Pan, Dong Wang]]&lt;br /&gt;
[[文件:maxmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:Max-margin.pdf|TRP-20160001: Max-margin metric learning for speaker recognition, Lantian Li, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:lowv.png|200px]]&lt;br /&gt;
*[[媒体文件:Lowv.pdf|TRP-20150033: Learning Ordered Word Representations, Xiaoxi Wang, Chao Xing, Dong Wang, Rong Liu and Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Adamax.png|200px]]&lt;br /&gt;
*[[媒体文件:Adamax Online Training for Speech Recognition.pdf|TRP-20150032: Adamax Online Training for Speech Recognition, Xiangyu Zeng, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Ptrnets.png|200px]]&lt;br /&gt;
*[[媒体文件: Ptrnets.pdf|TRP-20150031: An implementation of Pointer-Networks with Extensions, Xiaoxi Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dvad.png|200px]]&lt;br /&gt;
*[[媒体文件:dvad.pdf|TRP-20150030: DNN-based Voice Activity Detection for Speaker Recognition, Fanhu Bie, Zhiyong Zhang, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:uyghur.jpg|200px]]&lt;br /&gt;
*[[媒体文件:urghur.pdf|TRP-20150029: THUYG-20: A Free Uyghur Speech Database, Askar Rozi, Shi Yin, Zhiyong Zhang, Dong Wang, Askar Hamdulla]]&lt;br /&gt;
&lt;br /&gt;
[[文件:nnpre.jpg|200px]]&lt;br /&gt;
*[[媒体文件:nnpre.pdf|TRP-20150028: Knowledge Transfer Pre-training, Zhiyuan Tang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:mmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:mmargin.pdf|TRP-20150027: Max-Margin Metric Learning for Speaker Recognition, Lantian Li, Chao Xing, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:binary.jpg|200px]]&lt;br /&gt;
*[[媒体文件:binary.pdf|TRP-20150026: Binary Speaker Embedding, Lantian Li, Chao Xing, Dong Wang, Kaimin Yu, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:rnnrl.png|200px]]&lt;br /&gt;
*[[媒体文件:rnnrl.pdf|TRP-20150025: Relation Classification via Recurrent Neural Network, Dongxu Zhang, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:dplda.png|200px]]&lt;br /&gt;
*[[媒体文件:dplda.pdf|TRP-20150024: Learning from LDA using Deep Neural Networks, Dongxu Zhang, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:jsrl.png|200px]]&lt;br /&gt;
*[[媒体文件:jsrl.pdf|TRP-20150023: Joint Semantic Relevance Learning with Text Data and Graph Knowledge, Dongxu Zhang, Bin Yuan, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:listnet.png|200px]]&lt;br /&gt;
*[[媒体文件:listnet.pdf|TRP-20150022: Stochastic Top-k ListNet, Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:segvector.png|200px]]&lt;br /&gt;
*[[媒体文件:segvector.pdf|TRP-20150021: Improved Deep Speaker Feature Learning for Text-Dependent Speaker Recognition, Lantian Li, Yiye Lin, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Vmclass.png|200px]]&lt;br /&gt;
*[[媒体文件:Vmclass.pdf|TRP-20150020: Document Classification with Spherical Word Vectors, Yiqiao Pan, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Tlearn.png|200px]]&lt;br /&gt;
*[[媒体文件:Tlearn.pdf|TRP-20150019: Transfer Learning for Speech and Language Processing, Dong Wang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Songcisample.png|200px]]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/7a/Cslt20150018_revisedversion.pdf TRP-20150018: Chinese Song Iambics Generation with Neural Attention-based Model, Qixin Wang, Tianyi Luo, Dong Wang, Chao Xing]&lt;br /&gt;
&lt;br /&gt;
[[文件:database.jpg|200px]]&lt;br /&gt;
*[[媒体文件:Thuyg20-sre.pdf|TRP-20150017: AN OPEN/FREE DATABASE AND BENCHMARK FOR UYGHUR SPEAKER RECOGNITION, Askar Rozi, Dong Wang, Zhiyong Zhang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Thchs.png|200px]]&lt;br /&gt;
*[[媒体文件:Thchs30.pdf|TRP-20150016: THCHS-30 : A Free Chinese Speech Corpus, Dong Wang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Su.jpg|200px]]&lt;br /&gt;
*[[媒体文件:SUSR.pdf|TRP-20150015: Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes, Chenhao Zhang, Dong Wang, Lantian Li and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dv.png|200px]]&lt;br /&gt;
*[[媒体文件:Dvector.pdf|TRP-20150014: Deep Speaker Vectors for Semi Text-independent Speaker Verification, Lantian Li, Dong Wang, Zhiyong Zhang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dark.png|200px]]&lt;br /&gt;
*[[媒体文件:Dark.pdf|TRP-20150013: Recurrent Neural Network Training with Dark Knowledge Transfer, Dong Wang, Chao Liu, Zhiyuan Tang, Zhiyong Zhang, Mengyuan Zhao]]&lt;br /&gt;
&lt;br /&gt;
[[文件:PBE.png|200px]]&lt;br /&gt;
*[[媒体文件:Probabilistic_Belief_Embedding_for_Knowledge_Population_(TRP).pdf|TRP-20150012: Probabilistic Belief Embedding for Large-scale Knowledge Population. Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng and Ralph Grishman]]&lt;br /&gt;
&lt;br /&gt;
[[文件:fst-fw.png|200px]]&lt;br /&gt;
*[[媒体文件:wpair.pdf|TRP-20150011: Recognize Foreign Low-Frequency Words with Similar Pairs, Xi Ma, Xiaoxi Wang and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Cdae.png|200px]]&lt;br /&gt;
*[[媒体文件:Music.pdf|TRP-20150010: Music Removal by Denoising Autoencoder in Speech Recognition. Mengyuan Zhao, Dong Wang, Zhiyong Zhang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:vmfsne.png|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template-vmfsne.pdf|TRP-20150009: VMF-SNE: Embedding for Spherical Data. Mian Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:ros.png|200px]]&lt;br /&gt;
*[[媒体文件:Ros.pdf|TRP-20150008: Learning Speech Rate in Speech Recognition. Xiangyu Zeng, Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Dnnvadstru.png|200px]]&lt;br /&gt;
*[[媒体文件:DNNVADTRP.pdf|TRP-20150007: Voice Activity Detection Based on Deep Neural Networks. Shi Yin.]] ([[媒体文件:Vad.pdf|Paper submitted to Tsinghua Xuebao]])&lt;br /&gt;
&lt;br /&gt;
[[文件:Uyghur-training.png|200px]]&lt;br /&gt;
*[[媒体文件:UyghurTRP.pdf|TRP-20150006: Low-resource Uyghur Acoustic Model Training based on Cross-lingual Features. Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Beam-forming.png|200px]]&lt;br /&gt;
*[[媒体文件:Multi-Microphones_Reverberation_Cancellation_for_Distant_Speech_Recognition.pdf|TRP-20150005: Multi-Microphone Reverberation Cancellation for Distant Speech Recognition. Xuewei Zhang.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Clipping-speaker.png|200px]]&lt;br /&gt;
*[[媒体文件:Clip.pdf|TRP-20150004: Detection and Reconstruction of Clipped Speech in Speaker Recognition. Fanhu Bie et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Semi-dynamic-embedding.png|200px]]&lt;br /&gt;
*[[媒体文件:Taglm.pdf|TRP-20150003: Semi-Dynamic Graph Embedding for Large Scale Language Model Adaptation. Bin Yuan et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Speaker-discriminative-score.png|200px]]&lt;br /&gt;
*[[媒体文件:DNN-based Discriminative Scoring for Speaker.pdf|TRP-20150002: DNN-based Discriminative Scoring for Speaker Recognition Based on i-vector. Jun Wang et al. ]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Noisy-traiing.png|200px]]&lt;br /&gt;
*[[媒体文件:Noisy Training for Deep Neural Networks in.pdf|TRP-20150001: Noisy Training for Deep Neural Networks in Speech Recognition. Shi Yin et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:English-scroing.png|200px]]&lt;br /&gt;
*[[媒体文件:AutomaticScoringforEnglishUtterances.pdf|TRP-20140001: Automatic Scoring for English Utterances. Bo Hu.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Template.rar|CSLT-TRP latex template]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Publication-trp</id>
		<title>Publication-trp</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Publication-trp"/>
				<updated>2016-06-20T19:36:03Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[文件:Aikefu.bmp|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template.pdf|TRP-20160004: A Review of Neural QA, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:simpair.png|200px]]&lt;br /&gt;
*[[媒体文件:Trp20160003.pdf|TRP-20160003: A study of Similar Word Model for Unfrequent Word Enhancement in Speech Recognition, Xi Ma, Dong Wang and Javier Tejedor]]&lt;br /&gt;
&lt;br /&gt;
[[文件:low-freq.png|200px]]&lt;br /&gt;
*[[媒体文件:How to deal with low frequency words.pdf|TRP-20160002: Low-Frequency Words Embedding, Chao Xing, Yiqiao Pan, Dong Wang]]&lt;br /&gt;
[[文件:maxmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:Max-margin.pdf|TRP-20160001: Max-margin metric learning for speaker recognition, Lantian Li, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:lowv.png|200px]]&lt;br /&gt;
*[[媒体文件:Lowv.pdf|TRP-20150033: Learning Ordered Word Representations, Xiaoxi Wang, Chao Xing, Dong Wang, Rong Liu and Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Adamax.png|200px]]&lt;br /&gt;
*[[媒体文件:Adamax Online Training for Speech Recognition.pdf|TRP-20150032: Adamax Online Training for Speech Recognition, Xiangyu Zeng, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Ptrnets.png|200px]]&lt;br /&gt;
*[[媒体文件: Ptrnets.pdf|TRP-20150031: An implementation of Pointer-Networks with Extensions, Xiaoxi Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dvad.png|200px]]&lt;br /&gt;
*[[媒体文件:dvad.pdf|TRP-20150030: DNN-based Voice Activity Detection for Speaker Recognition, Fanhu Bie, Zhiyong Zhang, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:uyghur.jpg|200px]]&lt;br /&gt;
*[[媒体文件:urghur.pdf|TRP-20150029: THUYG-20: A Free Uyghur Speech Database, Askar Rozi, Shi Yin, Zhiyong Zhang, Dong Wang, Askar Hamdulla]]&lt;br /&gt;
&lt;br /&gt;
[[文件:nnpre.jpg|200px]]&lt;br /&gt;
*[[媒体文件:nnpre.pdf|TRP-20150028: Knowledge Transfer Pre-training, Zhiyuan Tang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:mmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:mmargin.pdf|TRP-20150027: Max-Margin Metric Learning for Speaker Recognition, Lantian Li, Chao Xing, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:binary.jpg|200px]]&lt;br /&gt;
*[[媒体文件:binary.pdf|TRP-20150026: Binary Speaker Embedding, Lantian Li, Chao Xing, Dong Wang, Kaimin Yu, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:rnnrl.png|200px]]&lt;br /&gt;
*[[媒体文件:rnnrl.pdf|TRP-20150025: Relation Classification via Recurrent Neural Network, Dongxu Zhang, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:dplda.png|200px]]&lt;br /&gt;
*[[媒体文件:dplda.pdf|TRP-20150024: Learning from LDA using Deep Neural Networks, Dongxu Zhang, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:jsrl.png|200px]]&lt;br /&gt;
*[[媒体文件:jsrl.pdf|TRP-20150023: Joint Semantic Relevance Learning with Text Data and Graph Knowledge, Dongxu Zhang, Bin Yuan, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:listnet.png|200px]]&lt;br /&gt;
*[[媒体文件:listnet.pdf|TRP-20150022: Stochastic Top-k ListNet, Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:segvector.png|200px]]&lt;br /&gt;
*[[媒体文件:segvector.pdf|TRP-20150021: Improved Deep Speaker Feature Learning for Text-Dependent Speaker Recognition, Lantian Li, Yiye Lin, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Vmclass.png|200px]]&lt;br /&gt;
*[[媒体文件:Vmclass.pdf|TRP-20150020: Document Classification with Spherical Word Vectors, Yiqiao Pan, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Tlearn.png|200px]]&lt;br /&gt;
*[[媒体文件:Tlearn.pdf|TRP-20150019: Transfer Learning for Speech and Language Processing, Dong Wang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Songcisample.png|200px]]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/7a/Cslt20150018_revisedversion.pdf TRP-20150018: Chinese Song Iambics Generation with Neural Attention-based Model, Qixin Wang, Tianyi Luo, Dong Wang, Chao Xing]&lt;br /&gt;
&lt;br /&gt;
[[文件:database.jpg|200px]]&lt;br /&gt;
*[[媒体文件:Thuyg20-sre.pdf|TRP-20150017: An Open/Free Database and Benchmark for Uyghur Speaker Recognition, Askar Rozi, Dong Wang, Zhiyong Zhang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Thchs.png|200px]]&lt;br /&gt;
*[[媒体文件:Thchs30.pdf|TRP-20150016: THCHS-30: A Free Chinese Speech Corpus, Dong Wang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Su.jpg|200px]]&lt;br /&gt;
*[[媒体文件:SUSR.pdf|TRP-20150015: Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes, Chenhao Zhang, Dong Wang, Lantian Li and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dv.png|200px]]&lt;br /&gt;
*[[媒体文件:Dvector.pdf|TRP-20150014: Deep Speaker Vectors for Semi Text-independent Speaker Verification, Lantian Li, Dong Wang, Zhiyong Zhang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dark.png|200px]]&lt;br /&gt;
*[[媒体文件:Dark.pdf|TRP-20150013: Recurrent Neural Network Training with Dark Knowledge Transfer, Dong Wang, Chao Liu, Zhiyuan Tang, Zhiyong Zhang, Mengyuan Zhao]]&lt;br /&gt;
&lt;br /&gt;
[[文件:PBE.png|200px]]&lt;br /&gt;
*[[媒体文件:Probabilistic_Belief_Embedding_for_Knowledge_Population_(TRP).pdf|TRP-20150012: Probabilistic Belief Embedding for Large-scale Knowledge Population. Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng and Ralph Grishman]]&lt;br /&gt;
&lt;br /&gt;
[[文件:fst-fw.png|200px]]&lt;br /&gt;
*[[媒体文件:wpair.pdf|TRP-20150011: Recognize Foreign Low-Frequency Words with Similar Pairs, Xi Ma, Xiaoxi Wang and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Cdae.png|200px]]&lt;br /&gt;
*[[媒体文件:Music.pdf|TRP-20150010: Music Removal by Denoising Autoencoder in Speech Recognition. Mengyuan Zhao, Dong Wang, Zhiyong Zhang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:vmfsne.png|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template-vmfsne.pdf|TRP-20150009: VMF-SNE: Embedding for Spherical Data. Mian Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:ros.png|200px]]&lt;br /&gt;
*[[媒体文件:Ros.pdf|TRP-20150008: Learning Speech Rate in Speech Recognition. Xiangyu Zeng, Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Dnnvadstru.png|200px]]&lt;br /&gt;
*[[媒体文件:DNNVADTRP.pdf|TRP-20150007: Voice Activity Detection Based on Deep Neural Networks. Shi Yin.]] ([[媒体文件:Vad.pdf|Paper submitted to Tsinghua Xuebao]])&lt;br /&gt;
&lt;br /&gt;
[[文件:Uyghur-training.png|200px]]&lt;br /&gt;
*[[媒体文件:UyghurTRP.pdf|TRP-20150006: Low-resource Uyghur Acoustic Model Training based on Cross-lingual Features. Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Beam-forming.png|200px]]&lt;br /&gt;
*[[媒体文件:Multi-Microphones_Reverberation_Cancellation_for_Distant_Speech_Recognition.pdf|TRP-20150005: Multi-Microphones Reverberation Cancellation for Distant Speech Recognition. Xuewei Zhang.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Clipping-speaker.png|200px]]&lt;br /&gt;
*[[媒体文件:Clip.pdf|TRP-20150004: Detection and Reconstruction of Clipped Speech in Speaker Recognition. Fanhu Bie et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Semi-dynamic-embedding.png|200px]]&lt;br /&gt;
*[[媒体文件:Taglm.pdf|TRP-20150003: Semi-Dynamic Graph Embedding for Large Scale Language Model Adaptation. Bin Yuan et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Speaker-discriminative-score.png|200px]]&lt;br /&gt;
*[[媒体文件:DNN-based Discriminative Scoring for Speaker.pdf|TRP-20150002: DNN-based Discriminative Scoring for Speaker Recognition Based on i-vector. Jun Wang et al. ]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Noisy-traiing.png|200px]]&lt;br /&gt;
*[[媒体文件:Noisy Training for Deep Neural Networks in.pdf|TRP-20150001: Noisy Training for Deep Neural Networks in Speech Recognition. Shi Yin et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:English-scroing.png|200px]]&lt;br /&gt;
*[[媒体文件:AutomaticScoringforEnglishUtterances.pdf|TRP-20140001: Automatic Scoring for English Utterances. Bo Hu.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Template.rar|CSLT-TRP latex template]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Publication-trp</id>
		<title>Publication-trp</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Publication-trp"/>
				<updated>2016-06-20T19:31:49Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[文件:Aikefu.bmp|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template.pdf|TRP-20160004: A Review of Neural QA, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:simpair.png|200px]]&lt;br /&gt;
*[[媒体文件:Trp20160003.pdf|TRP-20160003: A study of Similar Word Model for Unfrequent Word Enhancement in Speech Recognition, Xi Ma, Dong Wang and Javier Tejedor]]&lt;br /&gt;
&lt;br /&gt;
[[文件:low-freq.png|200px]]&lt;br /&gt;
*[[媒体文件:How to deal with low frequency words.pdf|TRP-20160002: Low-Frequency Words Embedding, Chao Xing, Yiqiao Pan, Dong Wang]]&lt;br /&gt;
[[文件:maxmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:Max-margin.pdf|TRP-20160001: Max-margin metric learning for speaker recognition, Lantian Li, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:lowv.png|200px]]&lt;br /&gt;
*[[媒体文件:Lowv.pdf|TRP-20150033: Learning Ordered Word Representations, Xiaoxi Wang, Chao Xing, Dong Wang, Rong Liu and Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Adamax.png|200px]]&lt;br /&gt;
*[[媒体文件:Adamax Online Training for Speech Recognition.pdf|TRP-20150032: Adamax Online Training for Speech Recognition, Xiangyu Zeng, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Ptrnets.png|200px]]&lt;br /&gt;
*[[媒体文件: Ptrnets.pdf|TRP-20150031: An implementation of Pointer-Networks with Extensions, Xiaoxi Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dvad.png|200px]]&lt;br /&gt;
*[[媒体文件:dvad.pdf|TRP-20150030: DNN-based Voice Activity Detection for Speaker Recognition, Fanhu Bie, Zhiyong Zhang, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:uyghur.jpg|200px]]&lt;br /&gt;
*[[媒体文件:urghur.pdf|TRP-20150029: THUYG-20: A Free Uyghur Speech Database, Askar Rozi, Shi Yin, Zhiyong Zhang, Dong Wang, Askar Hamdulla]]&lt;br /&gt;
&lt;br /&gt;
[[文件:nnpre.jpg|200px]]&lt;br /&gt;
*[[媒体文件:nnpre.pdf|TRP-20150028: Knowledge Transfer Pre-training, Zhiyuan Tang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:mmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:mmargin.pdf|TRP-20150027: Max-Margin Metric Learning for Speaker Recognition, Lantian Li, Chao Xing, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:binary.jpg|200px]]&lt;br /&gt;
*[[媒体文件:binary.pdf|TRP-20150026: Binary Speaker Embedding, Lantian Li, Chao Xing, Dong Wang, Kaimin Yu, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:rnnrl.png|200px]]&lt;br /&gt;
*[[媒体文件:rnnrl.pdf|TRP-20150025: Relation Classification via Recurrent Neural Network, Dongxu Zhang, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:dplda.png|200px]]&lt;br /&gt;
*[[媒体文件:dplda.pdf|TRP-20150024: Learning from LDA using Deep Neural Networks, Dongxu Zhang, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:jsrl.png|200px]]&lt;br /&gt;
*[[媒体文件:jsrl.pdf|TRP-20150023: Joint Semantic Relevance Learning with Text Data and Graph Knowledge, Dongxu Zhang, Bin Yuan, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:listnet.png|200px]]&lt;br /&gt;
*[[媒体文件:listnet.pdf|TRP-20150022: Stochastic Top-k ListNet, Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:segvector.png|200px]]&lt;br /&gt;
*[[媒体文件:segvector.pdf|TRP-20150021: Improved Deep Speaker Feature Learning for Text-Dependent Speaker Recognition, Lantian Li, Yiye Lin, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Vmclass.png|200px]]&lt;br /&gt;
*[[媒体文件:Vmclass.pdf|TRP-20150020: Document Classification with Spherical Word Vectors, Yiqiao Pan, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Tlearn.png|200px]]&lt;br /&gt;
*[[媒体文件:Tlearn.pdf|TRP-20150019: Transfer Learning for Speech and Language Processing, Dong Wang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Songcisample.png|200px]]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/7a/Cslt20150018_revisedversion.pdf TRP-20150018: Chinese Song Iambics Generation with Neural Attention-based Model, Qixin Wang, Tianyi Luo, Dong Wang, Chao Xing]&lt;br /&gt;
&lt;br /&gt;
[[文件:database.jpg|200px]]&lt;br /&gt;
*[[媒体文件:Thuyg20-sre.pdf|TRP-20150017: An Open/Free Database and Benchmark for Uyghur Speaker Recognition, Askar Rozi, Dong Wang, Zhiyong Zhang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Thchs.png|200px]]&lt;br /&gt;
*[[媒体文件:Thchs30.pdf|TRP-20150016: THCHS-30: A Free Chinese Speech Corpus, Dong Wang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Su.jpg|200px]]&lt;br /&gt;
*[[媒体文件:SUSR.pdf|TRP-20150015: Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes, Chenhao Zhang, Dong Wang, Lantian Li and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dv.png|200px]]&lt;br /&gt;
*[[媒体文件:Dvector.pdf|TRP-20150014: Deep Speaker Vectors for Semi Text-independent Speaker Verification, Lantian Li, Dong Wang, Zhiyong Zhang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dark.png|200px]]&lt;br /&gt;
*[[媒体文件:Dark.pdf|TRP-20150013: Recurrent Neural Network Training with Dark Knowledge Transfer, Dong Wang, Chao Liu, Zhiyuan Tang, Zhiyong Zhang, Mengyuan Zhao]]&lt;br /&gt;
&lt;br /&gt;
[[文件:PBE.png|200px]]&lt;br /&gt;
*[[媒体文件:Probabilistic_Belief_Embedding_for_Knowledge_Population_(TRP).pdf|TRP-20150012: Probabilistic Belief Embedding for Large-scale Knowledge Population. Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng and Ralph Grishman]]&lt;br /&gt;
&lt;br /&gt;
[[文件:fst-fw.png|200px]]&lt;br /&gt;
*[[媒体文件:wpair.pdf|TRP-20150011: Recognize Foreign Low-Frequency Words with Similar Pairs, Xi Ma, Xiaoxi Wang and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Cdae.png|200px]]&lt;br /&gt;
*[[媒体文件:Music.pdf|TRP-20150010: Music Removal by Denoising Autoencoder in Speech Recognition. Mengyuan Zhao, Dong Wang, Zhiyong Zhang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:vmfsne.png|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template-vmfsne.pdf|TRP-20150009: VMF-SNE: Embedding for Spherical Data. Mian Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:ros.png|200px]]&lt;br /&gt;
*[[媒体文件:Ros.pdf|TRP-20150008: Learning Speech Rate in Speech Recognition. Xiangyu Zeng, Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Dnnvadstru.png|200px]]&lt;br /&gt;
*[[媒体文件:DNNVADTRP.pdf|TRP-20150007: Voice Activity Detection Based on Deep Neural Networks. Shi Yin.]] ([[媒体文件:Vad.pdf|Paper submitted to Tsinghua Xuebao]])&lt;br /&gt;
&lt;br /&gt;
[[文件:Uyghur-training.png|200px]]&lt;br /&gt;
*[[媒体文件:UyghurTRP.pdf|TRP-20150006: Low-resource Uyghur Acoustic Model Training based on Cross-lingual Features. Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Beam-forming.png|200px]]&lt;br /&gt;
*[[媒体文件:Multi-Microphones_Reverberation_Cancellation_for_Distant_Speech_Recognition.pdf|TRP-20150005: Multi-Microphones Reverberation Cancellation for Distant Speech Recognition. Xuewei Zhang.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Clipping-speaker.png|200px]]&lt;br /&gt;
*[[媒体文件:Clip.pdf|TRP-20150004: Detection and Reconstruction of Clipped Speech in Speaker Recognition. Fanhu Bie et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Semi-dynamic-embedding.png|200px]]&lt;br /&gt;
*[[媒体文件:Taglm.pdf|TRP-20150003: Semi-Dynamic Graph Embedding for Large Scale Language Model Adaptation. Bin Yuan et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Speaker-discriminative-score.png|200px]]&lt;br /&gt;
*[[媒体文件:DNN-based Discriminative Scoring for Speaker.pdf|TRP-20150002: DNN-based Discriminative Scoring for Speaker Recognition Based on i-vector. Jun Wang et al. ]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Noisy-traiing.png|200px]]&lt;br /&gt;
*[[媒体文件:Noisy Training for Deep Neural Networks in.pdf|TRP-20150001: Noisy Training for Deep Neural Networks in Speech Recognition. Shi Yin et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:English-scroing.png|200px]]&lt;br /&gt;
*[[媒体文件:AutomaticScoringforEnglishUtterances.pdf|TRP-20140001: Automatic Scoring for English Utterances. Bo Hu.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Template.rar|CSLT-TRP latex template]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Publication-trp</id>
		<title>Publication-trp</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Publication-trp"/>
				<updated>2016-06-20T19:30:35Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[文件:Aikefu.bmp|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template.pdf|TRP-20160004: A Review of Neural QA, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:simpair.png|200px]]&lt;br /&gt;
*[[媒体文件:Trp20160003.pdf|TRP-20160003: A study of Similar Word Model for Unfrequent Word Enhancement in Speech Recognition, Xi Ma, Dong Wang and Javier Tejedor]]&lt;br /&gt;
&lt;br /&gt;
[[文件:low-freq.png|200px]]&lt;br /&gt;
*[[媒体文件:How to deal with low frequency words.pdf|TRP-20160002: Low-Frequency Words Embedding, Chao Xing, Yiqiao Pan, Dong Wang]]&lt;br /&gt;
[[文件:maxmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:Max-margin.pdf|TRP-20160001: Max-margin metric learning for speaker recognition, Lantian Li, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:lowv.png|200px]]&lt;br /&gt;
*[[媒体文件:Lowv.pdf|TRP-20150033: Learning Ordered Word Representations, Xiaoxi Wang, Chao Xing, Dong Wang, Rong Liu and Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Adamax.png|200px]]&lt;br /&gt;
*[[媒体文件:Adamax Online Training for Speech Recognition.pdf|TRP-20150032: Adamax Online Training for Speech Recognition, Xiangyu Zeng, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Ptrnets.png|200px]]&lt;br /&gt;
*[[媒体文件: Ptrnets.pdf|TRP-20150031: An implementation of Pointer-Networks with Extensions, Xiaoxi Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dvad.png|200px]]&lt;br /&gt;
*[[媒体文件:dvad.pdf|TRP-20150030: DNN-based Voice Activity Detection for Speaker Recognition, Fanhu Bie, Zhiyong Zhang, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:uyghur.jpg|200px]]&lt;br /&gt;
*[[媒体文件:urghur.pdf|TRP-20150029: THUYG-20: A Free Uyghur Speech Database, Askar Rozi, Shi Yin, Zhiyong Zhang, Dong Wang, Askar Hamdulla]]&lt;br /&gt;
&lt;br /&gt;
[[文件:nnpre.jpg|200px]]&lt;br /&gt;
*[[媒体文件:nnpre.pdf|TRP-20150028: Knowledge Transfer Pre-training, Zhiyuan Tang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:mmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:mmargin.pdf|TRP-20150027: Max-Margin Metric Learning for Speaker Recognition, Lantian Li, Chao Xing, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:binary.jpg|200px]]&lt;br /&gt;
*[[媒体文件:binary.pdf|TRP-20150026: Binary Speaker Embedding, Lantian Li, Chao Xing, Dong Wang, Kaimin Yu, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:rnnrl.png|200px]]&lt;br /&gt;
*[[媒体文件:rnnrl.pdf|TRP-20150025: Relation Classification via Recurrent Neural Network, Dongxu Zhang, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:dplda.png|200px]]&lt;br /&gt;
*[[媒体文件:dplda.pdf|TRP-20150024: Learning from LDA using Deep Neural Networks, Dongxu Zhang, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:jsrl.png|200px]]&lt;br /&gt;
*[[媒体文件:jsrl.pdf|TRP-20150023: Joint Semantic Relevance Learning with Text Data and Graph Knowledge, Dongxu Zhang, Bin Yuan, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:listnet.png|200px]]&lt;br /&gt;
*[[媒体文件:listnet.pdf|TRP-20150022: Stochastic Top-k ListNet, Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:segvector.png|200px]]&lt;br /&gt;
*[[媒体文件:segvector.pdf|TRP-20150021: Improved Deep Speaker Feature Learning for Text-Dependent Speaker Recognition, Lantian Li, Yiye Lin, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Vmclass.png|200px]]&lt;br /&gt;
*[[媒体文件:Vmclass.pdf|TRP-20150020: Document Classification with Spherical Word Vectors, Yiqiao Pan, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Tlearn.png|200px]]&lt;br /&gt;
*[[媒体文件:Tlearn.pdf|TRP-20150019: Transfer Learning for Speech and Language Processing, Dong Wang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Songcisample.png|200px]]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/7a/Cslt20150018_revisedversion.pdf TRP-20150018: Chinese Song Iambics Generation with Neural Attention-based Model, Qixin Wang, Tianyi Luo, Dong Wang, Chao Xing]&lt;br /&gt;
&lt;br /&gt;
[[文件:database.jpg|200px]]&lt;br /&gt;
*[[媒体文件:Thuyg20-sre.pdf|TRP-20150017: An Open/Free Database and Benchmark for Uyghur Speaker Recognition, Askar Rozi, Dong Wang, Zhiyong Zhang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Thchs.png|200px]]&lt;br /&gt;
*[[媒体文件:Thchs30.pdf|TRP-20150016: THCHS-30: A Free Chinese Speech Corpus, Dong Wang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Su.jpg|200px]]&lt;br /&gt;
*[[媒体文件:SUSR.pdf|TRP-20150015: Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes, Chenhao Zhang, Dong Wang, Lantian Li and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dv.png|200px]]&lt;br /&gt;
*[[媒体文件:Dvector.pdf|TRP-20150014: Deep Speaker Vectors for Semi Text-independent Speaker Verification, Lantian Li, Dong Wang, Zhiyong Zhang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dark.png|200px]]&lt;br /&gt;
*[[媒体文件:Dark.pdf|TRP-20150013: Recurrent Neural Network Training with Dark Knowledge Transfer, Dong Wang, Chao Liu, Zhiyuan Tang, Zhiyong Zhang, Mengyuan Zhao]]&lt;br /&gt;
&lt;br /&gt;
[[文件:PBE.png|200px]]&lt;br /&gt;
*[[媒体文件:Probabilistic_Belief_Embedding_for_Knowledge_Population_(TRP).pdf|TRP-20150012: Probabilistic Belief Embedding for Large-scale Knowledge Population. Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng and Ralph Grishman]]&lt;br /&gt;
&lt;br /&gt;
[[文件:fst-fw.png|200px]]&lt;br /&gt;
*[[媒体文件:wpair.pdf|TRP-20150011: Recognize Foreign Low-Frequency Words with Similar Pairs, Xi Ma, Xiaoxi Wang and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Cdae.png|200px]]&lt;br /&gt;
*[[媒体文件:Music.pdf|TRP-20150010: Music Removal by Denoising Autoencoder in Speech Recognition. Mengyuan Zhao, Dong Wang, Zhiyong Zhang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:vmfsne.png|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template-vmfsne.pdf|TRP-20150009: VMF-SNE: Embedding for Spherical Data. Mian Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:ros.png|200px]]&lt;br /&gt;
*[[媒体文件:Ros.pdf|TRP-20150008: Learning Speech Rate in Speech Recognition. Xiangyu Zeng, Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Dnnvadstru.png|200px]]&lt;br /&gt;
*[[媒体文件:DNNVADTRP.pdf|TRP-20150007: Voice Activity Detection Based on Deep Neural Networks. Shi Yin.]] ([[媒体文件:Vad.pdf|Paper submitted to Tsinghua Xuebao]])&lt;br /&gt;
&lt;br /&gt;
[[文件:Uyghur-training.png|200px]]&lt;br /&gt;
*[[媒体文件:UyghurTRP.pdf|TRP-20150006: Low-resource Uyghur Acoustic Model Training based on Cross-lingual Features. Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Beam-forming.png|200px]]&lt;br /&gt;
*[[媒体文件:Multi-Microphones_Reverberation_Cancellation_for_Distant_Speech_Recognition.pdf|TRP-20150005: Multi-Microphones Reverberation Cancellation for Distant Speech Recognition. Xuewei Zhang.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Clipping-speaker.png|200px]]&lt;br /&gt;
*[[媒体文件:Clip.pdf|TRP-20150004: Detection and Reconstruction of Clipped Speech in Speaker Recognition. Fanhu Bie et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Semi-dynamic-embedding.png|200px]]&lt;br /&gt;
*[[媒体文件:Taglm.pdf|TRP-20150003: Semi-Dynamic Graph Embedding for Large Scale Language Model Adaptation. Bin Yuan et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Speaker-discriminative-score.png|200px]]&lt;br /&gt;
*[[媒体文件:DNN-based Discriminative Scoring for Speaker.pdf|TRP-20150002: DNN-based Discriminative Scoring for Speaker Recognition Based on i-vector. Jun Wang et al. ]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Noisy-traiing.png|200px]]&lt;br /&gt;
*[[媒体文件:Noisy Training for Deep Neural Networks in.pdf|TRP-20150001: Noisy Training for Deep Neural Networks in Speech Recognition. Shi Yin et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:English-scroing.png|200px]]&lt;br /&gt;
*[[媒体文件:AutomaticScoringforEnglishUtterances.pdf|TRP-20140001: Automatic Scoring for English Utterances. Bo Hu.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Template.rar|CSLT-TRP latex template]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Conference_Agenda</id>
		<title>Conference Agenda</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Conference_Agenda"/>
				<updated>2016-06-19T00:36:41Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Conference !! Remaining days !! Venue !! submission deadline !! conference date  !! target people &lt;br /&gt;
|-&lt;br /&gt;
|ICASSP 2016   || Pass   ||Shanghai, China || 9/25/2015 || 3/20/2016-3/25/2016  || WD&lt;br /&gt;
|-&lt;br /&gt;
|NAACL 2016   || Pass   ||San Diego, CA || 1/6/2015 || 6/13/2016-6/15/2016  || ZDX&lt;br /&gt;
|-&lt;br /&gt;
|IJCAI 2016    || Pass   ||New York, NY ||  1/27/2016(Abstract);2/2/2016(papers) || 7/9/2016-7/13/2016  || LTY, WQX&lt;br /&gt;
|-&lt;br /&gt;
|ACL 2016    || Pass   || Berlin, Germany ||  2/29/2016 (short) 3/18/2016 (long) || 8/7/2016-8/12/2016 || LTY, WQX&lt;br /&gt;
|-&lt;br /&gt;
|Interspeech 2016    || Pass   ||San Francisco, CA||  3/30/2016 || 9/8/2016-9/12/2016  ||&lt;br /&gt;
|-&lt;br /&gt;
|ICDM 2016   || Pass     ||Venice, Italy ||  5/7/2016 || 11/7/2016-11/8/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|NIPS 2016   || Pass     ||Barcelona, Spain  ||  5/20/2016 || 11/7/2016-11/8/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|APSIPA 2016  ||  Pass   ||Jeju, Korea ||  6/15/2016 || 12/13/2016-12/16/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|CCL 2016   ||   Pass   ||Qingdao, China||  6/1/2016 || 10/15/2016-10/16/2016  || LTY&lt;br /&gt;
|-&lt;br /&gt;
|EMNLP 2016   ||  Pass    ||Austin, TX ||  6/3/2016 || 12/5/2016-12/10/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|ISCSLP 2016  ||     Pass      ||Tianjin, CN ||  6/17/2016 || 10/17/2016-10/20/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|BIC 2016  ||     Pass      ||Beijing, CN ||  6/10/2016 || 11/28/2016-11/30/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|OCOCOSDA 2016  ||  11 days        ||Bali, Indonesia ||  6/29/2016 || 10/26/2016-10/28/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|COLING 2016   || 27 days     ||Osaka, Japan ||  7/15/2016 || 12/11/2016-12/16/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.cs.rochester.edu/~tetreaul/conferences.html NLP conference list from Joel Tetreault]&lt;br /&gt;
&lt;br /&gt;
[http://www.aclclp.org.tw/confer_c.php CFP list from ACLCLP]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[past-conf-2014|2014]]&lt;br /&gt;
&lt;br /&gt;
[[past-conf-2015|2015]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-09</id>
		<title>Tianyi Luo 2016-05-09</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-09"/>
				<updated>2016-05-09T10:46:34Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement an attention-based chatting model on the xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-05-02~05-04&lt;br /&gt;
* The Labor Day holiday.&lt;br /&gt;
--------------------2016-05-05&lt;br /&gt;
* Finished preprocessing the Xiaobing corpus.&lt;br /&gt;
* Implemented part of the code for the qqa max-margin Theano version (current sample q1; positive sample a1; negative sample q2).&lt;br /&gt;
--------------------2016-05-06&lt;br /&gt;
* Finished implementing the code for the qqa max-margin Theano version (current sample q1; positive sample a1; negative sample q2).&lt;br /&gt;
--------------------2016-05-07&lt;br /&gt;
* Waited for experiment results.&lt;br /&gt;
* Prepared for the trip to Silicon Valley.&lt;br /&gt;
--------------------2016-05-08&lt;br /&gt;
* Arrived in Silicon Valley.&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement an attention-based chatting model on the xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-05-09T10:44:08Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement an attention-based chatting model on the xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
* Conducted some preprocessing of the xiaobing corpus.&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
* Helped Jiyuan understand the music generation paper.&lt;br /&gt;
* Checked Jiyuan's code (from RNN-RBM to LSTM-RBM).&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
* Modified the LSTM max-margin vector training code. Its cost is lower than the RNN's, though it trains more slowly. Performance improved from 82.39% to 84.83%.&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
* Finished part of the code for the qaa max-margin Theano version (current sample q1; positive sample a1; negative sample a2).&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
* Finished the code for the qaa max-margin Theano version (current sample q1; positive sample a1; negative sample a2).&lt;br /&gt;
--------------------2016-04-30~2016-05-01&lt;br /&gt;
The Labor Day holiday.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement an attention-based chatting model on the xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-09</id>
		<title>Tianyi Luo 2016-05-09</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-09"/>
				<updated>2016-05-09T10:39:49Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：以“=== Plan to do this week === * To implement tensorflow version of RNN/LSTM Max margin vector training. * To implement attention chatting model with xiaobing corpus....”为内容创建页面&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement an attention-based chatting model on the xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-05-02~05-04&lt;br /&gt;
* The Labor Day holiday.&lt;br /&gt;
--------------------2016-05-05&lt;br /&gt;
* Finished preprocessing the Xiaobing corpus.&lt;br /&gt;
* Implemented the max-margin Theano version (training triple q1, a1, q2; the negative sample is a question).&lt;br /&gt;
--------------------2016-05-06&lt;br /&gt;
* Implemented the max-margin Theano version (training triple q1, a1, a2; the negative sample is an answer).&lt;br /&gt;
--------------------2016-05-07&lt;br /&gt;
* Waited for experiment results.&lt;br /&gt;
* Prepared for the trip to Silicon Valley.&lt;br /&gt;
--------------------2016-05-08&lt;br /&gt;
* Arrived in Silicon Valley.&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement an attention-based chatting model on the xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2016-05-09</id>
		<title>2016-05-09</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2016-05-09"/>
				<updated>2016-05-09T10:23:34Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Mengyuan Zhao 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Lantian Li 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Yang Wang 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Xuewei Zhang 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Zhiyong Zhang 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Chao Xing 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Zhiyuan Tang 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Aodong Li 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Jiyuan Zhang 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Yiqiao Pan 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Xiangyu Zeng 2016-05-09]]&lt;br /&gt;
&lt;br /&gt;
[[Tianyi Luo 2016-05-09]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-28T09:13:25Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
* Conduct some preprocessing of the Xiaobing corpus.&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
* Help Jiyuan understand the music-generation paper.&lt;br /&gt;
* Check Jiyuan's code (porting RNN-RBM to LSTM-RBM).&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
* Modify the LSTM max-margin vector training code. Its cost is lower than the RNN's, though training is slower. Accuracy improves from 82.39% to 84.83%.&lt;br /&gt;
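The cost/speed trade-off noted above follows from the gate structure; a rough back-of-the-envelope parameter count makes it concrete (the layer sizes below are hypothetical, not the project's actual configuration):

```python
# Why an LSTM trains slower than a plain RNN at the same hidden size:
# the LSTM keeps four gate weight blocks (input, forget, cell, output)
# where the vanilla RNN has a single recurrent block.
def rnn_params(n_in, n_hid):
    # W_xh + W_hh + bias
    return n_in * n_hid + n_hid * n_hid + n_hid

def lstm_params(n_in, n_hid):
    # four copies of the RNN block, one per gate
    return 4 * rnn_params(n_in, n_hid)

print(rnn_params(100, 200))    # 60200
print(lstm_params(100, 200))   # 240800
```

The 4x factor shows up directly in per-step compute, matching the observation that LSTM training is slower while its richer gating can reach a lower cost.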
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-28T06:26:44Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
* Conduct some preprocessing of the Xiaobing corpus.&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
* Help Jiyuan understand the music-generation paper.&lt;br /&gt;
* Check Jiyuan's code (porting RNN-RBM to LSTM-RBM).&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
* Modify the LSTM max-margin vector training. Its cost is lower than the RNN's, though training is slower. Accuracy improves from 82.39% to 84.83%.&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-27T18:52:03Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
* Conduct some preprocessing of the Xiaobing corpus.&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
* Help Jiyuan understand the music-generation paper.&lt;br /&gt;
* Check Jiyuan's code (porting RNN-RBM to LSTM-RBM).&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
* Modify the LSTM max-margin vector training. Its cost is lower than the RNN's, though training is slower.&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-27T18:50:58Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
* Help Jiyuan understand the music-generation paper.&lt;br /&gt;
* Check Jiyuan's code (porting RNN-RBM to LSTM-RBM).&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
* Modify the LSTM max-margin vector training. Its cost is lower than the RNN's, though training is slower.&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-27T18:49:10Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
* Implement LSTM max-margin vector training. Its cost is lower than the RNN's, though training is slower.&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-27T18:47:33Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-25&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02</id>
		<title>Tianyi Luo 2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-05-02"/>
				<updated>2016-04-27T18:46:49Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：以“=== Plan to do this week === * To implement tensorflow version of RNN/LSTM Max margin vector training. === Work done in this week === --------------------2016-04-26...”为内容创建页面&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-26&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-27&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-28&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-29&lt;br /&gt;
&lt;br /&gt;
--------------------2016-04-30&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-01&lt;br /&gt;
&lt;br /&gt;
--------------------2016-05-02&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2016-05-02</id>
		<title>2016-05-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2016-05-02"/>
				<updated>2016-04-27T18:45:12Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：以“Tianyi Luo 2016-05-02”为内容创建页面&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Tianyi Luo 2016-05-02]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Status_report</id>
		<title>Status report</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Status_report"/>
				<updated>2016-04-27T18:44:46Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[[2016-01-04]]&lt;br /&gt;
&lt;br /&gt;
[[2016-01-11]]&lt;br /&gt;
&lt;br /&gt;
[[2016-01-18]]&lt;br /&gt;
&lt;br /&gt;
[[2016-01-25]]&lt;br /&gt;
&lt;br /&gt;
[[2016-02-01]]&lt;br /&gt;
&lt;br /&gt;
[[2016-02-22]]&lt;br /&gt;
&lt;br /&gt;
[[2016-02-29]]&lt;br /&gt;
&lt;br /&gt;
[[2016-03-07]]&lt;br /&gt;
&lt;br /&gt;
[[2016-03-14]]&lt;br /&gt;
&lt;br /&gt;
[[2016-03-21]]&lt;br /&gt;
&lt;br /&gt;
[[2016-03-28]]&lt;br /&gt;
&lt;br /&gt;
[[2016-04-04]]&lt;br /&gt;
&lt;br /&gt;
[[2016-04-11]]&lt;br /&gt;
&lt;br /&gt;
[[2016-04-18]]&lt;br /&gt;
&lt;br /&gt;
[[2016-04-25]]&lt;br /&gt;
&lt;br /&gt;
[[2016-05-02]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-25T00:36:46Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up evaluation of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Help run the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity-match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity-match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity-match rule 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 72.09% to 72.86%.&lt;br /&gt;
* Use entity-match rule 5 (machine number + English characters) to improve accuracy from 72.86% to 76.48%.&lt;br /&gt;
* Use similar-pair match rule 1 (“快” and “速度”, “复写” and “拷贝”, etc.) to improve accuracy from 76.48% to 82.39%.&lt;br /&gt;
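The rule contents above (dpk, ix, lpk sv, machine numbers, ...) are project-internal, so the sketch below only illustrates the general pattern behind those step-by-step accuracy gains: an ordered cascade of matchers where each added rule can only cover more queries. All rule bodies and data here are hypothetical.

```python
# A cascade of match rules applied in priority order; the first rule
# that matches a query decides the answer.
def cascade_match(query, rules):
    """Return the first matching rule's answer, else None."""
    for rule in rules:
        answer = rule(query)
        if answer is not None:
            return answer
    return None

def accuracy(test_set, rules):
    hits = sum(1 for q, gold in test_set if cascade_match(q, rules) == gold)
    return hits / len(test_set)

# Toy rules: an exact machine-number match first, then a synonym table
# (e.g. mapping "快" to "速度") as a fallback.
SYNONYMS = {"快": "速度", "复写": "拷贝"}
def rule_machine(q): return q.upper() if q.startswith("mn-") else None
def rule_synonym(q): return SYNONYMS.get(q)

tests = [("mn-42", "MN-42"), ("快", "速度"), ("xyz", "???")]
print(accuracy(tests, [rule_machine]))                 # machine rule alone
print(accuracy(tests, [rule_machine, rule_synonym]))   # adding a rule raises accuracy
```

Because the cascade returns the first match, appending a rule only fills in previously unmatched queries, mirroring the monotone accuracy climb logged above.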
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Conference_Agenda</id>
		<title>Conference Agenda</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Conference_Agenda"/>
				<updated>2016-04-24T16:34:30Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&quot;wikitable&quot;&lt;br /&gt;
! Conference !! Remaining days !! Venue !! Submission deadline !! Conference date !! Target people &lt;br /&gt;
|-&lt;br /&gt;
|ICASSP 2016   || Pass   ||Shanghai, China || 9/25/2015 || 3/20/2016-3/25/2016  || WD&lt;br /&gt;
|-&lt;br /&gt;
|NAACL 2016   || Pass   ||San Diego, CA || 1/6/2016 || 6/13/2016-6/15/2016  || ZDX&lt;br /&gt;
|-&lt;br /&gt;
|IJCAI 2016    || Pass   ||New York, NY ||  1/27/2016(Abstract);2/2/2016(papers) || 7/9/2016-7/13/2016  || LTY, WQX&lt;br /&gt;
|-&lt;br /&gt;
|ACL 2016    || Pass   || Berlin, Germany ||  2/29/2016(short) 3/18/2016(long) || 8/7/2016–8/12/2016 || LTY, WQX&lt;br /&gt;
|-&lt;br /&gt;
|Interspeech 2016    || Pass   ||San Francisco, CA||  3/30/2016 || 9/8/2016-9/12/2016  ||&lt;br /&gt;
|-&lt;br /&gt;
|ICDM 2016   || 12 days     ||Venice, Italy ||  5/7/2016 || 11/7/2016-11/8/2016  || LTY&lt;br /&gt;
|-&lt;br /&gt;
|NIPS 2016   || 22 days     ||Barcelona, Spain  ||  5/20/2016 || 12/5/2016-12/10/2016  || LTY&lt;br /&gt;
|-&lt;br /&gt;
|APSIPA 2016  ||            ||Jeju, Korea ||  5/31/2016 || 12/13/2016-12/16/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|CCL 2016   ||               ||Qingdao, China||  6/1/2016 || 10/15/2016-10/16/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|EMNLP 2016   || 41 days     ||Austin, TX ||  6/3/2016 || 11/1/2016-11/5/2016  || LTY&lt;br /&gt;
|-&lt;br /&gt;
|ISCSLP 2016  ||              ||Tianjin, China ||  6/3/2016 || 10/17/2016-10/20/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|OCOCOSDA 2016  ||            ||Bali, Indonesia ||  6/29/2016 || 10/26/2016-10/28/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|COLING 2016   || 82 days     ||Osaka, Japan ||  7/15/2016 || 12/11/2016-12/16/2016  || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.cs.rochester.edu/~tetreaul/conferences.html NLP conference list from Joel Tetreault]&lt;br /&gt;
&lt;br /&gt;
[http://www.aclclp.org.tw/confer_c.php CFP list from ACLCLP]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[past-conf-2014|2014]]&lt;br /&gt;
&lt;br /&gt;
[[past-conf-2015|2015]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T16:20:30Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up evaluation of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Help run the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity-match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity-match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity-match rule 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 72.09% to 72.86%.&lt;br /&gt;
* Use entity-match rule 5 (machine number + English characters) to improve accuracy from 72.86% to 76.48%.&lt;br /&gt;
* Use similar-pair match rule 1 (“快” and “速度”, “复写” and “拷贝”, etc.) to improve accuracy from 76.48% to ？%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T15:56:26Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up evaluation of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity-match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity-match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity-match rule 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 72.09% to 72.86%.&lt;br /&gt;
* Use entity-match rule 5 (machine number + English characters) to improve accuracy from 72.86% to 76.48%.&lt;br /&gt;
* Use similar-pair match rule 1 (“快” and “速度”, “复写” and “拷贝”, etc.) to improve accuracy from 76.48% to ？%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T15:56:02Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up evaluation of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity-match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity-match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity-match rule 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 72.09% to 72.86%.&lt;br /&gt;
* Use entity-match rule 5 (machine number + English characters) to improve accuracy from 72.86% to 76.48%.&lt;br /&gt;
* Use similar-pair match rule 1 (“快” and “速度”, “复写” and “拷贝”, etc.) to improve accuracy from 72.86% to 76.48%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T15:42:24Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up evaluation of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity-match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity-match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity-match rule 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 72.09% to 72.86%.&lt;br /&gt;
* Use entity-match rule 5 (machine number + English characters) to improve accuracy from 72.86% to 76.48%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* To implement the attention chatting model with the Xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T14:36:44Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* To implement the TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done in this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submiting the camera version paper of IJCAI 2016.&lt;br /&gt;
* Update the version of Technical Report about Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Teacher Wang to prepare for text group's presentation(Tang poetry and Songci generation and Intelligent QA system) for Tsinghua University's 105 anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arxiv. (Solve a big problem about submitting the paper including Chinese chacracters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize theano version of Generationg the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up process of the test performance about theano version of Generationg the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rule 1(dpk, ix, lpk sv + machine number) to improve the accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules 2+3(dps, s, e, fp, tps, q, 200 + machine number and machine number only) to improve the accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity match rules 4(dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English chacracters) to improve the accuracy from 72.09% to 72.86%.&lt;br /&gt;
* Use entity match rules 5(machine number + English chacracters) to improve the accuracy from 72.86% to ?%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* To implement tensorflow version of RNN/LSTM Max margin vector training.&lt;br /&gt;
* To implement attention chatting model with xiaobing corpus.&lt;br /&gt;
===Interested papers ===&lt;br /&gt;
*Cascading Bandits: Learning to Rank in the Cascade Model(ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T14:35:26Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72.09%.&lt;br /&gt;
* Use entity match rules 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 72.09% to 72.86%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* Implement an attention-based chatting model on the Xiaobing corpus.&lt;br /&gt;
=== Interested papers ===&lt;br /&gt;
* Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T14:12:04Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rule 1 (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules 2+3 (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72%.&lt;br /&gt;
* Use entity match rules 4 (dpk, ix, lpk sv, dps, s, e, fp, tps, q + machine number + English characters) to improve accuracy from 58% to 72%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* Implement an attention-based chatting model on the Xiaobing corpus.&lt;br /&gt;
=== Interested papers ===&lt;br /&gt;
* Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T13:36:06Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rules (dpk, ix, lpk sv + machine number) to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules (dps, s, e, fp, tps, q, 200 + machine number, and machine number only) to improve accuracy from 58% to 72%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* Implement an attention-based chatting model on the Xiaobing corpus.&lt;br /&gt;
=== Interested papers ===&lt;br /&gt;
* Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T13:22:05Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rules to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules to improve accuracy from 58% to 72%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* Implement an attention-based chatting model on the Xiaobing corpus.&lt;br /&gt;
=== Interested papers ===&lt;br /&gt;
* Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T13:21:01Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters.)&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rules to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules to improve accuracy from 58% to 72%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
* Implement an attention-based chatting model on the Xiaobing corpus.&lt;br /&gt;
=== Interested papers ===&lt;br /&gt;
* Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25</id>
		<title>Tianyi Luo 2016-04-25</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Tianyi_Luo_2016-04-25"/>
				<updated>2016-04-24T13:20:05Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Plan to do this week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Work done this week ===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters.)&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-23&lt;br /&gt;
* Use entity match rules to improve accuracy from 38% to 58%.&lt;br /&gt;
--------------------2016-04-24&lt;br /&gt;
* Use entity match rules to improve accuracy from 58% to 72%.&lt;br /&gt;
=== Plan to do next week ===&lt;br /&gt;
* Implement a TensorFlow version of RNN/LSTM max-margin vector training.&lt;br /&gt;
=== Interested papers ===&lt;br /&gt;
* Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [[http://zheng-wen.com/Cascading_Bandit_Paper.pdf pdf]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-23T10:42:40Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : now at Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : now at Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student at Tsinghua University&lt;br /&gt;
* DongXu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar-question sentence vector model training with RNN/LSTM, and attention-based RNN/LSTM chatting model training (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-22&lt;br /&gt;
* Speed up testing of the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-23 : Set up a series of experiments:&lt;br /&gt;
               1. Try a deeper CNN-DSSM; the current model follows the proposed model with a single convolution layer, and the depth needs to be a tunable parameter.&lt;br /&gt;
               2. Test whether mixed data is effective for the current model and the deep CDSSM.&lt;br /&gt;
               3. Code a recurrent CNN-DSSM (a new approach).&lt;br /&gt;
: 2016-04-22 : Found a problem: on the lab's GTX 970 GPU machine one iteration takes 1537 seconds, while on Huilan's server it takes only 7 seconds.&lt;br /&gt;
               Achieved reasonable results when applying the max-margin method to the CNN-DSSM model.&lt;br /&gt;
: 2016-04-21 : The faithful DSSM model doesn't work well; analysis below:&lt;br /&gt;
                1. It is not an exact reproduction of DSSM: the original is for English, and I adapted it to Chinese after word segmentation,&lt;br /&gt;
                   so the input is word tri-grams rather than letter tri-grams.&lt;br /&gt;
                2. Our dataset is far from rich; since we do not use pre-trained word vectors for initialization, we can hardly achieve good performance.&lt;br /&gt;
             : Suggestions&lt;br /&gt;
                1. As we have rich pre-trained word vectors, CDSSM or RDSSM may be better suited to our task.&lt;br /&gt;
                2. Variable-length sequences need to be mapped to fixed-dimension vectors; only CNNs and RNNs can do this,&lt;br /&gt;
                   while a DNN cannot with fixed-length word vectors.&lt;br /&gt;
             : Finished coding CDSSM; testing its performance.&lt;br /&gt;
                One problem: if you install TensorFlow 0.8.0 via pip and want to run conv2d on the GPU,&lt;br /&gt;
                             make sure cuDNN 4.0 is installed, not the latest 5.0.&lt;br /&gt;
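For reference, the max-margin objective being applied to these DSSM-style models can be sketched minimally as below (NumPy only; the function names are illustrative, not the actual training code): the query should score at least a fixed margin higher against its matching document than against each negative.&lt;br /&gt;

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two semantic vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def max_margin_loss(query, pos_doc, neg_docs, margin=0.5):
    # hinge loss: the matching document should score at least `margin`
    # higher than every negative document; the loss is zero once it does
    pos = cosine(query, pos_doc)
    hinges = [max(0.0, margin - pos + cosine(query, neg)) for neg in neg_docs]
    return sum(hinges) / len(hinges)
```

In the real models the vectors come from the DSSM/CNN-DSSM encoders and the loss is minimized by gradient descent; this sketch only shows the objective itself.&lt;br /&gt;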
: 2016-04-20 : Found and fixed a bug in the reproduced DSSM model.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixed-data model with lower memory usage. Testing its performance.&lt;br /&gt;
: 2016-04-18 : Code the mixed-data model.&lt;br /&gt;
: 2016-04-16 : Coded the mixed-data model, but ran into a memory error; Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers: investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Tested the DSSM-DNN model; coded the DSSM-CNN model.&lt;br /&gt;
               Continued investigating deep neural question answering systems.&lt;br /&gt;
: 2016-04-13 : Tested the DSSM model; investigated deep neural question answering systems.&lt;br /&gt;
             : Shared a Theano PPT [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Shared a TensorFlow PPT [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the TensorFlow version of DSSM.&lt;br /&gt;
: 2016-04-11 : Wrote a TensorFlow toolkit PPT for the intern students.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : Combined my program with Qixin Wang's.&lt;br /&gt;
: 2016-04-17 : Trained the convolutional neural network.&lt;br /&gt;
: 2016-04-16 : Modified the code of the CNN and the spider.&lt;br /&gt;
: 2016-04-15 : Used the web spider to fetch 30 thousand images and store them into a matrix.&lt;br /&gt;
: 2016-04-13 : 1. Downloaded Theano for Python 2.7. 2. Debugged cnn.py.&lt;br /&gt;
: 2016-04-10 : Used a web spider to fetch a thousand images.&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : Learned LSTM.&lt;br /&gt;
: 2016-04-17 : Read the End-to-End Memory Networks paper.&lt;br /&gt;
: 2016-04-15 : Read the Memory Networks paper and started to understand its code.&lt;br /&gt;
: 2016-04-13 : Read the Memory Networks paper.&lt;br /&gt;
: 2016-04-10 : Extracted the keywords from the lyrics.&lt;br /&gt;
: 2016-04-09 : Used a web spider to fetch a thousand pieces of lyrics.&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-22 : Coded a web spider to recursively crawl keyword links from Baidu.&lt;br /&gt;
: 2016-04-20 : Learned about web spiders.&lt;br /&gt;
: 2016-04-17 : Learned Python (Head First, about 50% done).&lt;br /&gt;
: 2016-04-16 : Compared the results of bigram segmentation with dictionary-based segmentation.&lt;br /&gt;
: 2016-04-15 : Improved the simple bigram segmentation.&lt;br /&gt;
: 2016-04-13 : Analyzed the segmentation results.&lt;br /&gt;
: 2016-04-10 : Stored the dictionary in the database; implemented dictionary-based segmentation and a simple bigram segmentation.&lt;br /&gt;
: 2016-04-09 : Set up a database for storing N-gram data.&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : Selected appropriate MIDI files and ran the RNN-RBM model.&lt;br /&gt;
: 2016-04-13 : Read through the RNN-RBM model's code.&lt;br /&gt;
: 2016-04-14~15 : Coded a filter to select 4/4-beat MIDI files.&lt;br /&gt;
: 2016-04-17~22 : Ran the data; it failed several times, so I modified the code and reviewed the RNN-RBM model's code.&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-17 : 1. Set up PuTTY and Xming. 2. Learned Python; can work with slices and iterators. 3. Studied the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems.&lt;br /&gt;
: 2016-04-12 : 1. Read the paper Collaborative Deep Learning for Recommender Systems and took notes. 2. Learned the concepts of the stacked denoising autoencoder (SDAE).&lt;br /&gt;
: 2016-04-09 : 1. Read a review: Machine Learning: Trends, Perspectives, and Prospects. 2. Learned Python; can work with dicts and sets.&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : Read Fader's paper (2013).&lt;br /&gt;
: 2016-04-17 : Downloaded the PARALAX dataset and converted it into the format we need.&lt;br /&gt;
: 2016-04-16 : Tried to figure out how the PARALAX dataset is constructed.&lt;br /&gt;
: 2016-04-15 : Learned DSSM and sent2vec.&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:25:36Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : now at Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : now at Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student at Tsinghua University&lt;br /&gt;
* DongXu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar-question sentence vector model training with RNN/LSTM, and attention-based RNN/LSTM chatting model training (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating similar-question vectors with an RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-21 : The faithful DSSM model doesn't work well; analysis below:&lt;br /&gt;
                1. It is not an exact reproduction of DSSM: the original is for English, and I adapted it to Chinese after word segmentation,&lt;br /&gt;
                   so the input is word tri-grams rather than letter tri-grams.&lt;br /&gt;
                2. Our dataset is far from rich; since we do not use pre-trained word vectors for initialization, we can hardly achieve good performance.&lt;br /&gt;
             : Suggestions&lt;br /&gt;
                1. As we have rich pre-trained word vectors, CDSSM or RDSSM may be better suited to our task.&lt;br /&gt;
                2. Variable-length sequences need to be mapped to fixed-dimension vectors; only CNNs and RNNs can do this,&lt;br /&gt;
                   while a DNN cannot with fixed-length word vectors.&lt;br /&gt;
             : Finished coding CDSSM; testing its performance.&lt;br /&gt;
                One problem: if you install TensorFlow 0.8.0 via pip and want to run conv2d on the GPU,&lt;br /&gt;
                             make sure cuDNN 4.0 is installed, not the latest 5.0.&lt;br /&gt;
: 2016-04-20 : Found and fixed a bug in the reproduced DSSM model.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixed-data model with lower memory usage. Testing its performance.&lt;br /&gt;
: 2016-04-18 : Code the mixed-data model.&lt;br /&gt;
: 2016-04-16 : Coded the mixed-data model, but ran into a memory error; Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers: investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Tested the dssm-dnn model; coded the dssm-cnn model.&lt;br /&gt;
               Continued investigating the deep neural question answering system.&lt;br /&gt;
: 2016-04-13 : Tested the dssm model; investigated the deep neural question answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the tensorflow version of dssm.&lt;br /&gt;
: 2016-04-11 : Write tensorflow toolkit ppt for intern student.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. download theano for python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the code of the CNN and the spider&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the Memory Networks paper.&lt;br /&gt;
: 2016-04-15 : read the Memory Networks paper and start to understand its code&lt;br /&gt;
: 2016-04-17 : read the End-to-End Memory Networks paper&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : Database for N-Gram data storing&lt;br /&gt;
: 2016-04-10 : dictionary stored in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : segmentation result analysis&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50% done)&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate midis and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn python; can operate dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE).&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn python; can operate slice and iterator; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn dssm and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and turn it into the form we want&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:10:42Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student at Tsinghua University&lt;br /&gt;
* DongXu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar question sentence vector model training with RNN/LSTM and the attention RNN/LSTM chatting model training (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
* Finish implementing the theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the Technical Report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a big problem with submitting a paper containing Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-21 : The true DSSM model does not work well; analysis below:&lt;br /&gt;
               1. We did not exactly reproduce the DSSM model: the original is an English version, and I adapted it to Chinese after word segmentation, &lt;br /&gt;
                  so the input is word tri-grams rather than letter tri-grams.&lt;br /&gt;
               2. Our dataset is far from rich; since we do not use pre-trained word vectors as initial vectors, we can hardly achieve good performance.&lt;br /&gt;
             : Next steps :&lt;br /&gt;
               1. As we have rich pre-trained word vectors, CDSSM or RDSSM may be better suited to our task.&lt;br /&gt;
               2. Sequences of different lengths need to be mapped to fixed-dimension vectors; only CNN and RNN can do this, while a DNN cannot, &lt;br /&gt;
                  since it requires fixed-length word-vector input.&lt;br /&gt;
             : Finished coding CDSSM. Testing its performance.&lt;br /&gt;
               One problem : if you install tensorflow 0.8.0 by pip and want to use the conv2d function on GPU, make sure the installed &lt;br /&gt;
                             cudnn version is 4.0, not the latest 5.0.&lt;br /&gt;
: 2016-04-20 : Found the reproduced DSSM model's bug and fixed it.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixture data model with lower memory dependency. Tested its performance.&lt;br /&gt;
: 2016-04-18 : Coded the mixture data model.&lt;br /&gt;
: 2016-04-16 : Coded the mixture data model, but ran into a memory error. Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers. Investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Tested the dssm-dnn model; coded the dssm-cnn model.&lt;br /&gt;
               Continued investigating the deep neural question answering system.&lt;br /&gt;
: 2016-04-13 : Tested the dssm model; investigated the deep neural question answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the tensorflow version of dssm.&lt;br /&gt;
: 2016-04-11 : Write tensorflow toolkit ppt for intern student.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. download theano for python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the code of the CNN and the spider&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the Memory Networks paper.&lt;br /&gt;
: 2016-04-15 : read the Memory Networks paper and start to understand its code&lt;br /&gt;
: 2016-04-17 : read the End-to-End Memory Networks paper&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : Database for N-Gram data storing&lt;br /&gt;
: 2016-04-10 : dictionary stored in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : segmentation result analysis&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50% done)&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate midis and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn python; can operate dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE).&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn python; can operate slice and iterator; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn dssm and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and turn it into the form we want&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:10:14Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student at Tsinghua University&lt;br /&gt;
* DongXu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar question sentence vector model training with RNN/LSTM and the attention RNN/LSTM chatting model training (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
* Finish implementing the theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the Technical Report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a big problem with submitting a paper containing Chinese characters. [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/How_to_submit_the_latex_files_including_Chinese_characters_to_arxiv Solution])&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-20 : Found the reproduced DSSM model's bug and fixed it.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixture data model with lower memory dependency. Tested its performance.&lt;br /&gt;
: 2016-04-18 : Coded the mixture data model.&lt;br /&gt;
: 2016-04-16 : Coded the mixture data model, but ran into a memory error. Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers. Investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Tested the dssm-dnn model; coded the dssm-cnn model.&lt;br /&gt;
               Continued investigating the deep neural question answering system.&lt;br /&gt;
: 2016-04-13 : Tested the dssm model; investigated the deep neural question answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the tensorflow version of dssm.&lt;br /&gt;
: 2016-04-11 : Write tensorflow toolkit ppt for intern student.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. download theano for python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the code of the CNN and the spider&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the Memory Networks paper.&lt;br /&gt;
: 2016-04-15 : read the Memory Networks paper and start to understand its code&lt;br /&gt;
: 2016-04-17 : read the End-to-End Memory Networks paper&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : Database for N-Gram data storing&lt;br /&gt;
: 2016-04-10 : dictionary stored in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : segmentation result analysis&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50% done)&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate midis and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn python; can operate dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE).&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn python; can operate slice and iterator; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn dssm and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and turn it into the form we want&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:06:48Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student at Tsinghua University&lt;br /&gt;
* DongXu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar question sentence vector model training with RNN/LSTM and the attention RNN/LSTM chatting model training (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
* Finish implementing the theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the Technical Report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a big problem with submitting a paper containing Chinese characters.)&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-20 : Found the reproduced DSSM model's bug and fixed it.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixture data model with lower memory dependency. Tested its performance.&lt;br /&gt;
: 2016-04-18 : Coded the mixture data model.&lt;br /&gt;
: 2016-04-16 : Coded the mixture data model, but ran into a memory error. Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers. Investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Tested the dssm-dnn model; coded the dssm-cnn model.&lt;br /&gt;
               Continued investigating the deep neural question answering system.&lt;br /&gt;
: 2016-04-13 : Tested the dssm model; investigated the deep neural question answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the tensorflow version of dssm.&lt;br /&gt;
: 2016-04-11 : Write tensorflow toolkit ppt for intern student.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. download theano for python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the code of the CNN and the spider&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the Memory Networks paper.&lt;br /&gt;
: 2016-04-15 : read the Memory Networks paper and start to understand its code&lt;br /&gt;
: 2016-04-17 : read the End-to-End Memory Networks paper&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : Database for N-Gram data storing&lt;br /&gt;
: 2016-04-10 : dictionary stored in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : segmentation result analysis&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50% done)&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate midis and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn python; can operate dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE).&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn python; can operate slice and iterator; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn dssm and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and turn it into the form we want&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:06:25Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student at Tsinghua University&lt;br /&gt;
* DongXu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar question sentence vector training with RNN/LSTM and the attention RNN/LSTM chatting model training (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
* Finish implementing the theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the Technical Report on Chinese Song Iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Prof. Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a big problem with submitting a paper containing Chinese characters.)&lt;br /&gt;
* Optimize the theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-20 : Found the reproduced DSSM model's bug and fixed it.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixture data model with lower memory dependency. Tested its performance.&lt;br /&gt;
: 2016-04-18 : Coded the mixture data model.&lt;br /&gt;
: 2016-04-16 : Coded the mixture data model, but ran into a memory error. Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers. Investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Tested the dssm-dnn model; coded the dssm-cnn model.&lt;br /&gt;
               Continued investigating the deep neural question answering system.&lt;br /&gt;
: 2016-04-13 : Tested the dssm model; investigated the deep neural question answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the tensorflow version of dssm.&lt;br /&gt;
: 2016-04-11 : Write tensorflow toolkit ppt for intern student.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. download theano for python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the code of the CNN and the spider&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the Memory Networks paper.&lt;br /&gt;
: 2016-04-15 : read the Memory Networks paper and start to understand its code&lt;br /&gt;
: 2016-04-17 : read the End-to-End Memory Networks paper&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : set up a database for storing N-gram data&lt;br /&gt;
: 2016-04-10 : store the dictionary in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : analyze the segmentation results&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50%)&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate MIDIs and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn Python: can use dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE)&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn Python: can use slices and iterators; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn DSSM and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and convert it into the format we need&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:05:59Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student, Tsinghua University&lt;br /&gt;
* Dongxu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar-question sentence vector training with RNN/LSTM, and training the attention RNN/LSTM chatting model with the corpus of Bing Xiao (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Teacher Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters.)&lt;br /&gt;
* Optimize the Theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-20 : Found a bug in the reproduced DSSM model and fixed it.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixture data model with lower memory requirements. Tested its performance.&lt;br /&gt;
: 2016-04-18 : Code the mixture data model.&lt;br /&gt;
: 2016-04-16 : Code the mixture data model, but ran into a memory error; Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers: investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Test the DSSM-DNN model, code the DSSM-CNN model.&lt;br /&gt;
               Continue investigating the deep neural question-answering system.&lt;br /&gt;
: 2016-04-13 : Test the DSSM model, investigate the deep neural question-answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the TensorFlow version of DSSM.&lt;br /&gt;
: 2016-04-11 : Write a TensorFlow toolkit ppt for the intern students.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
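The DSSM entries above describe projecting queries and documents into a shared semantic space and scoring them by cosine similarity. A minimal NumPy sketch of that scoring step; the layer sizes, initialization, and names here are illustrative, not the trained model:&lt;br /&gt;

```python
import numpy as np

def forward(x, weights):
    """Project a bag-of-words vector through stacked tanh layers (DSSM-style)."""
    h = x
    for W in weights:
        h = np.tanh(h @ W)
    return h

def cosine(a, b):
    """Cosine similarity between two semantic vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# illustrative sizes: 500-dim input -> 300 hidden -> 128-dim semantic space
weights = [rng.standard_normal((500, 300)) * 0.1,
           rng.standard_normal((300, 128)) * 0.1]
query = rng.random(500)
doc = rng.random(500)
score = cosine(forward(query, weights), forward(doc, weights))
```

Training would then push the cosine score of clicked documents above non-clicked ones, as in the papers listed above.&lt;br /&gt;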
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. install Theano for Python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the CNN and spider code&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
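The spider entries above collect images from web pages. A minimal standard-library sketch of the link-extraction step only; the URL in the usage comment is a placeholder, not the actual crawl target:&lt;br /&gt;

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class ImageLinkParser(HTMLParser):
    """Collect the src attribute of every img tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.links.append(value)

def collect_image_links(html):
    """Return all image URLs found in an HTML string."""
    parser = ImageLinkParser()
    parser.feed(html)
    return parser.links

# usage (network fetch; placeholder URL):
# page = urlopen("http://example.com/gallery").read().decode("utf-8", "ignore")
# print(collect_image_links(page))
```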
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the paper Memory Networks&lt;br /&gt;
: 2016-04-15 : read the paper Memory Networks and start working through its code&lt;br /&gt;
: 2016-04-17 : read the paper End-to-End Memory Networks&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : set up a database for storing N-gram data&lt;br /&gt;
: 2016-04-10 : store the dictionary in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : analyze the segmentation results&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50%)&lt;br /&gt;
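The entries above contrast dictionary segmentation with a bigram model. A minimal sketch of the usual dictionary baseline, greedy forward maximum matching; the toy dictionary is illustrative. Its greedy longest-match behavior produces exactly the ambiguities a bigram rescoring can fix:&lt;br /&gt;

```python
def max_match(text, dictionary, max_len=4):
    """Greedy forward maximum-match segmentation against a word dictionary.
    At each position, take the longest dictionary word; fall back to one character."""
    words, i, n = [], 0, len(text)
    while n > i:
        for size in range(min(max_len, n - i), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in dictionary:
                words.append(piece)
                i += size
                break
    return words
```

For example, with both 研究生 and 研究/生命 in the dictionary, greedy matching on 研究生命 picks 研究生 first and strands 命, whereas a bigram score over candidate splits could prefer 研究/生命.&lt;br /&gt;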
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate MIDIs and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn Python: can use dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE)&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn Python: can use slices and iterators; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn DSSM and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and convert it into the format we need&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2016-04-22T03:04:55Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Text Processing Team Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
===Former Members===&lt;br /&gt;
* Rong Liu (刘荣) : Youku&lt;br /&gt;
* Xiaoxi Wang (王晓曦) : Turing Robot&lt;br /&gt;
* Xi Ma (马习) : graduate student, Tsinghua University&lt;br /&gt;
* Dongxu Zhang (张东旭) : --&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
* Tianyi Luo (骆天一)&lt;br /&gt;
* Chao Xing (邢超)&lt;br /&gt;
* Qixin Wang (王琪鑫)&lt;br /&gt;
* Yiqiao Pan (潘一桥)&lt;br /&gt;
&lt;br /&gt;
==Work Process==&lt;br /&gt;
===Similar-question sentence vector training with RNN/LSTM (Tianyi Luo)===&lt;br /&gt;
--------------------2016-04-18&lt;br /&gt;
* Optimize the Theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
* Finish implementing the Theano version of LSTM max-margin vector training.&lt;br /&gt;
--------------------2016-04-19&lt;br /&gt;
* Optimize the Theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
--------------------2016-04-20&lt;br /&gt;
* Finish submitting the camera-ready version of the IJCAI 2016 paper.&lt;br /&gt;
* Update the technical report on Chinese Song iambics generation.&lt;br /&gt;
--------------------2016-04-21&lt;br /&gt;
* Finish helping Teacher Wang prepare the text group's presentation (Tang poetry and Songci generation, and the intelligent QA system) for Tsinghua University's 105th anniversary.&lt;br /&gt;
* Submit our IJCAI paper to arXiv. (Solved a tricky problem with submitting a paper that contains Chinese characters.)&lt;br /&gt;
* Optimize the Theano version of generating the similar questions' vectors based on RNN.&lt;br /&gt;
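The max-margin vector training mentioned above amounts to a hinge loss: a question's vector should be closer, in cosine similarity, to a similar question than to a dissimilar one by at least a margin. A minimal NumPy sketch; the margin value is illustrative, and in the actual work the vectors would come from the RNN/LSTM encoder:&lt;br /&gt;

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two sentence vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_margin_loss(query, positive, negative, margin=0.5):
    """Hinge loss: zero once the similar question outscores the
    dissimilar one by at least `margin` in cosine similarity."""
    return max(0.0, margin - cosine(query, positive) + cosine(query, negative))
```

Gradient steps on this loss pull similar-question vectors together and push dissimilar ones apart until the margin is satisfied.&lt;br /&gt;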
&lt;br /&gt;
===Reproduce DSSM Baseline (Chao Xing)===&lt;br /&gt;
: 2016-04-20 : Found a bug in the reproduced DSSM model and fixed it.&lt;br /&gt;
: 2016-04-19 : Finished coding the mixture data model with lower memory requirements. Tested its performance.&lt;br /&gt;
: 2016-04-18 : Code the mixture data model.&lt;br /&gt;
: 2016-04-16 : Code the mixture data model, but ran into a memory error; Dr. Wang helped me fix it.&lt;br /&gt;
: 2016-04-15 : Shared papers: investigated a series of DSSM papers for future work, and showed our intern students how to do research.&lt;br /&gt;
             : Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/2013_-_Learning_Deep_Structured_Semantic_Models_for_Web_Search_using_Clickthrough_Data_-_Report.pdf pdf]&lt;br /&gt;
             : CNN based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b7/2014_-_A_Latent_Semantic_Model_with_Convolutional-Pooling_Structure_for_Information_Retrieval_-_Report.pdf pdf]&lt;br /&gt;
             : Use DSSM model for a new area : Modeling Interestingness with Deep Neural Networks [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/1f/2014_-_Modeling_Interestingness_with_Deep_Neural_Networks_-_Report.pdf pdf]&lt;br /&gt;
             : Latest approach for LSTM + RNN DSSM model : SEMANTIC MODELLING WITH LONG-SHORT-TERM MEMORY FOR INFORMATION RETRIEVAL [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/24/2015_-_SEMANTIC_MODELLING_WITH_LONG-SHORT-TERM_MEMORY_FOR_INFORMATION_RETRIEVAL_-_Report.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
: 2016-04-14 : Test the DSSM-DNN model, code the DSSM-CNN model.&lt;br /&gt;
               Continue investigating the deep neural question-answering system.&lt;br /&gt;
: 2016-04-13 : Test the DSSM model, investigate the deep neural question-answering system.&lt;br /&gt;
             : Share theano ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Theano-RBM.pptx theano]&lt;br /&gt;
             : Share tensorflow ppt [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow.pptx tensorflow]&lt;br /&gt;
: 2016-04-12 : Finished writing the TensorFlow version of DSSM.&lt;br /&gt;
: 2016-04-11 : Write a TensorFlow toolkit ppt for the intern students.&lt;br /&gt;
: 2016-04-10 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-09 : Learn tensorflow toolkit.&lt;br /&gt;
: 2016-04-08 : Finish theano version.&lt;br /&gt;
&lt;br /&gt;
===Deep Poem Processing With Image (Ziwei Bai)===&lt;br /&gt;
: 2016-04-20 : combine my program with Qixin Wang's&lt;br /&gt;
: 2016-04-10 : web spider to fetch a thousand images.&lt;br /&gt;
: 2016-04-13 : 1. install Theano for Python 2.7; 2. debug cnn.py&lt;br /&gt;
: 2016-04-15 : web spider to fetch 30 thousand images and store them in a matrix&lt;br /&gt;
: 2016-04-16 : modify the CNN and spider code&lt;br /&gt;
: 2016-04-17 : train the convolutional neural network&lt;br /&gt;
&lt;br /&gt;
===RNN Music Processing for lyric (Shiyao Li)===&lt;br /&gt;
: 2016-04-20 : learn LSTM&lt;br /&gt;
: 2016-04-09 : web spider to catch a thousand pieces of lyrics.&lt;br /&gt;
: 2016-04-10 : extract the keywords in the lyrics&lt;br /&gt;
: 2016-04-13 : read the paper Memory Networks&lt;br /&gt;
: 2016-04-15 : read the paper Memory Networks and start working through its code&lt;br /&gt;
: 2016-04-17 : read the paper End-to-End Memory Networks&lt;br /&gt;
&lt;br /&gt;
===RNN Key word Poem Processing (Yi Xiong)===&lt;br /&gt;
: 2016-04-20 : learn web spider&lt;br /&gt;
: 2016-04-09 : set up a database for storing N-gram data&lt;br /&gt;
: 2016-04-10 : store the dictionary in the database; dictionary-based segmentation and a simple bigram segmentation&lt;br /&gt;
: 2016-04-13 : analyze the segmentation results&lt;br /&gt;
: 2016-04-15 : improve the simple bigram segmentation&lt;br /&gt;
: 2016-04-16 : compare the results of bigram segmentation with dictionary segmentation&lt;br /&gt;
: 2016-04-17 : learn Python (Head First, 50%)&lt;br /&gt;
&lt;br /&gt;
===RNN Piano Processing (Jiyuan Zhang)===&lt;br /&gt;
: 2016-04-12 : select appropriate MIDIs and run the rnnrbm model&lt;br /&gt;
: 2016-04-13 : view the rnnrbm model's code&lt;br /&gt;
&lt;br /&gt;
===Recommendation System (Tong Liu)===&lt;br /&gt;
: 2016-04-09 : 1. read a review: Machine Learning: Trends, Perspectives, and Prospects; 2. learn Python: can use dict and set&lt;br /&gt;
: 2016-04-12 : 1. read the paper Collaborative Deep Learning for Recommender Systems and take notes; 2. learn the concepts of the stacked denoising autoencoder (SDAE)&lt;br /&gt;
: 2016-04-17 : 1. set up PuTTY and Xming; 2. learn Python: can use slices and iterators; 3. study the released code and datasets of the paper Collaborative Deep Learning for Recommender Systems&lt;br /&gt;
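The SDAE concepts mentioned above boil down to training each layer to reconstruct the clean input from a corrupted copy. A minimal NumPy sketch of one denoising-autoencoder layer with masking noise and tied weights; the sizes and initialization are illustrative, and the training loop is omitted:&lt;br /&gt;

```python
import numpy as np

def corrupt(x, rate, rng):
    """Masking noise: zero out a random fraction of inputs (the 'denoising' part)."""
    mask = rng.random(x.shape) >= rate   # keep each unit with probability 1 - rate
    return x * mask

def dae_forward(x, W, b_h, b_o):
    """One denoising-autoencoder layer with tied weights: encode then decode."""
    h = 1.0 / (1.0 + np.exp(-(x @ W + b_h)))       # sigmoid hidden code
    return 1.0 / (1.0 + np.exp(-(h @ W.T + b_o)))  # sigmoid reconstruction

def reconstruction_error(x_clean, x_corrupted, W, b_h, b_o):
    """Mean squared error the layer is trained to minimize."""
    r = dae_forward(x_corrupted, W, b_h, b_o)
    return float(np.mean((x_clean - r) ** 2))
```

Stacking these layers, each trained on the previous layer's codes, gives the SDAE used in the Collaborative Deep Learning paper.&lt;br /&gt;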
&lt;br /&gt;
===Question &amp;amp; Answering (Aiting Liu)===&lt;br /&gt;
: 2016-04-20 : read Fader's paper (2013)&lt;br /&gt;
: 2016-04-15 : learn DSSM and sent2vec&lt;br /&gt;
: 2016-04-16 : try to figure out how the PARALAX dataset is constructed&lt;br /&gt;
: 2016-04-17 : download the PARALAX dataset and convert it into the format we need&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Publication-trp</id>
		<title>Publication-trp</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Publication-trp"/>
				<updated>2016-04-22T02:55:20Z</updated>
		
		<summary type="html">&lt;p&gt;Lty：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[文件:Aikefu.bmp|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template.pdf|TRP-20160004: A Review of Neural QA, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:simpair.png|200px]]&lt;br /&gt;
*[[媒体文件:Trp20160003.pdf|TRP-20160003: A study of Similar Word Model for Unfrequent Word Enhancement in Speech Recognition, Xi Ma, Dong Wang and Javier Tejedor]]&lt;br /&gt;
&lt;br /&gt;
[[文件:low-freq.png|200px]]&lt;br /&gt;
*[[媒体文件:How to deal with low frequency words.pdf|TRP-20160002: Low-Frequency Words Embedding, Chao Xing, Yiqiao Pan, Dong Wang]]&lt;br /&gt;
[[文件:maxmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:Max-margin.pdf|TRP-20160001: Max-margin metric learning for speaker recognition, Lantian Li, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:lowv.png|200px]]&lt;br /&gt;
*[[媒体文件:Lowv.pdf|TRP-20150033: Learning Ordered Word Representations, Xiaoxi Wang, Chao Xing, Dong Wang, Rong Liu and Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Adamax.png|200px]]&lt;br /&gt;
*[[媒体文件:Adamax Online Training for Speech Recognition.pdf|TRP-20150032: Adamax Online Training for Speech Recognition, Xiangyu Zeng, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Ptrnets.png|200px]]&lt;br /&gt;
*[[媒体文件: Ptrnets.pdf|TRP-20150031: An implementation of Pointer-Networks with Extensions, Xiaoxi Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dvad.png|200px]]&lt;br /&gt;
*[[媒体文件:dvad.pdf|TRP-20150030: DNN-based Voice Activity Detection for Speaker Recognition, Fanhu Bie, Zhiyong Zhang, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:uyghur.jpg|200px]]&lt;br /&gt;
*[[媒体文件:urghur.pdf|TRP-20150029: THUYG-20: A Free Uyghur Speech Database, Askar Rozi, Shi Yin, Zhiyong Zhang, Dong Wang, Askar Hamdulla]]&lt;br /&gt;
&lt;br /&gt;
[[文件:nnpre.jpg|200px]]&lt;br /&gt;
*[[媒体文件:nnpre.pdf|TRP-20150028: Knowledge Transfer Pre-training, Zhiyuan Tang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:mmargin.png|200px]]&lt;br /&gt;
*[[媒体文件:mmargin.pdf|TRP-20150027: Max-Margin Metric Learning for Speaker Recognition, Lantian Li, Chao Xing, Dong Wang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:binary.jpg|200px]]&lt;br /&gt;
*[[媒体文件:binary.pdf|TRP-20150026: Binary Speaker Embedding, Lantian Li, Chao Xing, Dong Wang, Kaimin Yu, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:rnnrl.png|200px]]&lt;br /&gt;
*[[媒体文件:rnnrl.pdf|TRP-20150025: Relation Classification via Recurrent Neural Network, Dongxu Zhang, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:dplda.png|200px]]&lt;br /&gt;
*[[媒体文件:dplda.pdf|TRP-20150024: Learning from LDA using Deep Neural Networks, Dongxu Zhang, Tianyi Luo and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:jsrl.png|200px]]&lt;br /&gt;
*[[媒体文件:jsrl.pdf|TRP-20150023: Joint Semantic Relevance Learning with Text Data and Graph Knowledge, Dongxu Zhang, Bin Yuan, Dong Wang, Rong Liu]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:listnet.png|200px]]&lt;br /&gt;
*[[媒体文件:listnet.pdf|TRP-20150022: Stochastic Top-k ListNet, Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:segvector.png|200px]]&lt;br /&gt;
*[[媒体文件:segvector.pdf|TRP-20150021: Improved Deep Speaker Feature Learning for Text-Dependent Speaker Recognition, Lantian Li, Yiye Lin, Zhiyong Zhang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Vmclass.png|200px]]&lt;br /&gt;
*[[媒体文件:Vmclass.pdf|TRP-20150020: Document Classification with Spherical Word Vectors, Yiqiao Pan, Chao Xing, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Tlearn.png|200px]]&lt;br /&gt;
*[[媒体文件:Tlearn.pdf|TRP-20150019: Transfer Learning for Speech and Language Processing, Dong Wang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Songcisample.png|200px]]&lt;br /&gt;
*[[媒体文件:Songci.pdf|TRP-20150018: Chinese Song Iambics Generation with Neural Attention-based Model, Qixin Wang, Tianyi Luo, Dong Wang, Chao Xing]]&lt;br /&gt;
&lt;br /&gt;
[[文件:database.jpg|200px]]&lt;br /&gt;
*[[媒体文件:Thuyg20-sre.pdf|TRP-20150017: AN OPEN/FREE DATABASE AND BENCHMARK FOR UYGHUR SPEAKER RECOGNITION, Askar Rozi, Dong Wang, Zhiyong Zhang, Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Thchs.png|200px]]&lt;br /&gt;
*[[媒体文件:Thchs30.pdf|TRP-20150016: THCHS-30 : A Free Chinese Speech Corpus, Dong Wang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:Su.jpg|200px]]&lt;br /&gt;
*[[媒体文件:SUSR.pdf|TRP-20150015: Improving Short Utterance Speaker Recognition by Modeling Speech Unit Classes, Chenhao Zhang, Dong Wang, Lantian Li and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dv.png|200px]]&lt;br /&gt;
*[[媒体文件:Dvector.pdf|TRP-20150014: Deep Speaker Vectors for Semi Text-independent Speaker Verification, Lantian Li, Dong Wang, Zhiyong Zhang and Thomas Fang Zheng]]&lt;br /&gt;
&lt;br /&gt;
[[文件:dark.png|200px]]&lt;br /&gt;
*[[媒体文件:Dark.pdf|TRP-20150013: Recurrent Neural Network Training with Dark Knowledge Transfer, Dong Wang, Chao Liu, Zhiyuan Tang, Zhiyong Zhang, Mengyuan Zhao]]&lt;br /&gt;
&lt;br /&gt;
[[文件:PBE.png|200px]]&lt;br /&gt;
*[[媒体文件:Probabilistic_Belief_Embedding_for_Knowledge_Population_(TRP).pdf|TRP-20150012: Probabilistic Belief Embedding for Large-scale Knowledge Population. Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng and Ralph Grishman]]&lt;br /&gt;
&lt;br /&gt;
[[文件:fst-fw.png|200px]]&lt;br /&gt;
*[[媒体文件:wpair.pdf|TRP-20150011: Recognize Foreign Low-Frequency Words with Similar Pairs, Xi Ma, Xiaoxi Wang and Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Cdae.png|200px]]&lt;br /&gt;
*[[媒体文件:Music.pdf|TRP-20150010: Music Removal by Denoising Autoencoder in Speech Recognition. Mengyuan Zhao, Dong Wang, Zhiyong Zhang and Xuewei Zhang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:vmfsne.png|200px]]&lt;br /&gt;
*[[媒体文件:Cslt-trp-template-vmfsne.pdf|TRP-20150009: VMF-SNE: Embedding for Spherical Data. Mian Wang, Dong Wang]]&lt;br /&gt;
&lt;br /&gt;
[[文件:ros.png|200px]]&lt;br /&gt;
*[[媒体文件:Ros.pdf|TRP-20150008: Learning Speech Rate in Speech Recognition. Xiangyu Zeng, Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Dnnvadstru.png|200px]]&lt;br /&gt;
*[[媒体文件:DNNVADTRP.pdf|TRP-20150007: Voice Activity Detection Based on Deep Neural Networks. Shi Yin.]] ([[媒体文件:Vad.pdf|Paper submitted to Tsinghua Xuebao]])&lt;br /&gt;
&lt;br /&gt;
[[文件:Uyghur-training.png|200px]]&lt;br /&gt;
*[[媒体文件:UyghurTRP.pdf|TRP-20150006: Low-resource Uyghur Acoustic Model Training based on Cross-lingual Features. Shi Yin.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Beam-forming.png|200px]]&lt;br /&gt;
*[[媒体文件:Multi-Microphones_Reverberation_Cancellation_for_Distant_Speech_Recognition.pdf|TRP-20150005: Multi-Microphone Reverberation Cancellation for Distant Speech Recognition. Xuewei Zhang.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Clipping-speaker.png|200px]]&lt;br /&gt;
*[[媒体文件:Clip.pdf|TRP-20150004: Detection and Reconstruction of Clipped Speech in Speaker Recognition. Fanhu Bie et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Semi-dynamic-embedding.png|200px]]&lt;br /&gt;
*[[媒体文件:Taglm.pdf|TRP-20150003: Semi-Dynamic Graph Embedding for Large Scale Language Model Adaptation. Bin Yuan et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Speaker-discriminative-score.png|200px]]&lt;br /&gt;
*[[媒体文件:DNN-based Discriminative Scoring for Speaker.pdf|TRP-20150002: DNN-based Discriminative Scoring for Speaker Recognition Based on i-vector. Jun Wang et al. ]]&lt;br /&gt;
&lt;br /&gt;
[[文件:Noisy-traiing.png|200px]]&lt;br /&gt;
*[[媒体文件:Noisy Training for Deep Neural Networks in.pdf|TRP-20150001: Noisy Training for Deep Neural Networks in Speech Recognition. Shi Yin et al.]]&lt;br /&gt;
&lt;br /&gt;
[[文件:English-scroing.png|200px]]&lt;br /&gt;
*[[媒体文件:AutomaticScoringforEnglishUtterances.pdf|TRP-20140001: Automatic Scoring for English Utterances. Bo Hu.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Template.rar|CSLT-TRP latex template]]&lt;/div&gt;</summary>
		<author><name>Lty</name></author>	</entry>

	</feed>