<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://index.cslt.org/mediawiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="zh-cn">
		<id>http://index.cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhangsy</id>
		<title>cslt Wiki - 用户贡献 [zh-cn]</title>
		<link rel="self" type="application/atom+xml" href="http://index.cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhangsy"/>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E7%89%B9%E6%AE%8A:%E7%94%A8%E6%88%B7%E8%B4%A1%E7%8C%AE/Zhangsy"/>
		<updated>2026-04-14T20:21:06Z</updated>
		<subtitle>用户贡献</subtitle>
		<generator>MediaWiki 1.23.3</generator>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-26</id>
		<title>NLP Status Report 2017-6-26</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-26"/>
				<updated>2017-06-26T05:04:09Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/26&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
*GRE style-based translation:&lt;br /&gt;
  used direct replacement for post-editing&lt;br /&gt;
  used an RNNLM to distinguish similar words before replacement&lt;br /&gt;
*Both methods seem to fall short on part of speech and semantics&lt;br /&gt;
||&lt;br /&gt;
* explore new ways to distinguish similar word pairs that take semantics into account&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* share a paper&lt;br /&gt;
* deliver the first version of NMT code&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Weekly_meeting</id>
		<title>Weekly meeting</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Weekly_meeting"/>
				<updated>2017-06-24T01:08:14Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*Location: FIT-1-304&lt;br /&gt;
*Time: Monday, 7:00 PM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker!! Title !! Materials !! On duty&lt;br /&gt;
|-&lt;br /&gt;
| 2012/08/27  ||Dong Wang  || Heterogeneous Convolutive Non-negative Sparse Coding ||[[媒体文件:Heterogeneous_convolutive_non-negative_sparse_coding.pdf|slides]] [http://homepages.inf.ed.ac.uk/v1dwang2/public/pdf/inerspeech2012-hetero.pdf paper] ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/03  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/10  || NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/17  ||WALEED ABDULLA||Auditory Based Feature Vectors for Speech Recognition ||[[媒体文件:AuditoryBasedFeatureVectors.pdf|slides]]||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2012/09/24  ||刘超|| N-gram FST indexing for Spoken Term Detection || [[媒体文件:120924-N_gram_FST_indexing_for_Spoken_Term_Detection-LC-0.pdf|slides]] ||尹聪&lt;br /&gt;
|-&lt;br /&gt;
|范淼||Micro-blogging, Wikipedia, Folksonomy, What's Next? ||[[媒体文件:120924-Micro-blogging, Wikipedia, Folksonomy, What's Next-FM--01-FM-.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/08 ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/15  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/10/22||Wu Xiaojun||speaker recognition in CSLT ||[[媒体文件:VPR_in_CSLT.pdf|slides]]||卡尔&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/10/29  ||王军||An overview of Automatic Speaker Diarization Systems || [[媒体文件:121027-Speaker Diarization-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/05  ||别凡虎||Experiments on Emotional Speaker Recognition||[[媒体文件:121104-Experiments_on_Emotional_Speaker_Recognition-BFH.pdf|slides]] ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/12  ||唐国瑜||Statistical Word Sense Improves Document Clustering ||[[媒体文件:121112_Statistical_Word_Sense_Improves_Document_Clustering_TGY.pdf‎ |slides]]||邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/19  ||张陈昊||TDSR with Long-term Features Based on Functional Data Analysis||[[媒体文件:121118-ISCSLP-FDA_SR-ZCH.pdf|slides]] ||王俊俊&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/26  ||王琳琳||Time-Varying Speaker Recognition: An Introduction||[[媒体文件:121126-Time_Varying_Speaker_Recognition_I-Wll.pdf‎|slides]] ||龚宬&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/03  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/10  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/17  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2013/01/07  ||王军||基于DF-MAP的说话人模型训练方法||[[媒体文件:130107-基于DFMAP的说话人模型训练方法-WJ.pdf|slides]] ||唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/01/14  ||王东|| Computing in CSLT ||[[媒体文件:Computing_in_CSLT.pdf|slides]] ||王琳琳&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/04  ||王军||Sequential Adaptive Learning for Speaker Verification ||[[媒体文件:130301-Sequential adaptive learning for speaker verification-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/11  || Du Jinle|| VAD stuff || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/18  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/25  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/08  || 张陈昊|| A Fishervoice based Feature Fusion Method for SUSR ||[[媒体文件:130408-FisherVoice-ZCH.pdf|slides]] ||谢仲达&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/15  ||龚宬|| An Exploration on Influence Factors of VAD's Performance in Speaker Recognition ||[[媒体文件:130415-An_Exploration_on_Influence_Factors_of_VAD-GC.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/22  ||王俊俊 || Understanding the Query: THCIB and THUIS at NTCIR-10 Intent Task ||[[媒体文件:130422-Understanding_the_Query-WJJ.pdf|slides‎]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/29  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/06  ||别凡虎 ||MLLR on Emotional Speaker Recognition ||[[媒体文件:130506-MLLR on Emotional Speaker Recognition-BFH.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/13  ||刘超 || The Use of Deep Neural Network for Speech Recognition || [[媒体文件:130513-the_use_of_dnn_for_asr-lc.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/27  ||王琳琳|| 说话人识别中的时变鲁棒性问题研究 || [[媒体文件:130527-TVSV-Wll.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/03  ||王俊俊|| 汉语搜索结果聚类系统研究与实现 || [[媒体文件:130601-毕业答辩-02-WJJ.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/10  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/17  ||范淼 || Relation Extraction ||[[媒体文件:130617-relation_extraction-fm.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/24  ||唐国瑜 || Incorporating Statistical Word Senses in Topic Model  ||[[媒体文件:130624_Incorporating Statistical Word Senses in Topic Model_TGY.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/08  ||  || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/15  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/09  ||王东 || Research Frontier in Speech Technology||[[媒体文件:Research Frontier in Speech Technology.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/16  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/30  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/14  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/21  ||范淼 ||Transduction Classification with Matrix Completion （中文报告）||[[媒体文件: Transduction_Classifiction_with_Matrix_Completion.pdf‎|slides]] [http://pages.cs.wisc.edu/~jerryzhu/pub/mc4ssl_FINAL.pdf paper]|| 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/28  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/04  || 王军 || 基于i-vector的intersession补偿及打分方法(综述) || [[媒体文件:131104-ivecto下intersession补偿及打分方法--01-WJ-.pdf‎|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/11  ||张陈昊 ||PLDA介绍及PLDA在说话人识别中的应用 ||[[媒体文件:PLDA.pdf|slides]] || 唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/18  ||别凡虎 ||i-vector理论介绍（讨论）||[[媒体文件:131118-i-vector_and_GMM-UBM-BFH.pdf|slides]]‎  ||王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/25  ||刘超 || Pruning Neural Networks By Optimal Brain Damage(综述)||[[媒体文件:131125-OBD-LC-01.pdf|slides]] ||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/02  ||范淼 ||Distant Supervision for Relation Extraction with Matrix Completion （英文报告）||[[媒体文件:131202-DRMC-FM-01.pdf|slides]] || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/09  || Dong Wang|| Introduction to the HMM-based speech synthesis||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/16  ||张陈昊 ||语音研究中的基元介绍 ||[[媒体文件:131215-Phonology-ZCH.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || Dong Wang|| Introduction to the HMM-based speech synthesis (2)||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/30  ||刘荣 || continuous space language model||[[媒体文件:Cslm-cslt.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/06  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/13  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/02/24  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/03  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/10  ||范淼|| Distant Supervision for Information Extraction (英文报告)|| || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/17  ||唐国瑜 || Topic Models Incorporating Statistical Word Senses || [[媒体文件:TMISWS_For_CICLing2014.pdf|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/24  ||孟祥涛 || Noisy training for Deep Neural Networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/31  ||范淼|| Translating Embeddings for Modeling Multi-relational Data （中文报告） || [https://www.hds.utc.fr/everest/lib/exe/fetch.php?id=en%3Atranse&amp;amp;cache=cache&amp;amp;media=en:cr_paper_nips13.pdf paper]||李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/14  || Wang Jun|| I-vector and PLDA in depth ||[[媒体文件:131104-ivector-microsoft-wj.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/21  || 邱晗||汉语事件句式规范化处理 ||[[媒体文件:140421-汉语事件句式规范化-QH.pdf‎|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/28  || 唐国瑜|| Some papers in　CICLing2014 ||[[媒体文件:Some_papers_in_CICling2014.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/05  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/12  || 卡尔|| paper introduction || [[媒体文件:Acoustic Factor Analysis.pdf|slides]] || 邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2014/05/19  || 邱晗|| 汉语事件句式CCG推导树重构 ||[[媒体文件:140519-CCG_reConstruction.pdf‎|slides]]‎|| 卡尔&lt;br /&gt;
|-&lt;br /&gt;
|Liu Chao|| master proposal: sparse and deep neural networks || [[媒体文件:140519-proposal-LC-01.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| || Liu Chao|| 2nd master proposal: sparse and deep neural networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/16  || 别凡虎 || Truncated Wave based VPR and Some Recent Work || [[媒体文件:140614-Truncated_Speech_based_VPR.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/23  || 别凡虎 || Block-wise training for I-vector || [[媒体文件:140623-Block-wise training for I-vector.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/07/07||王军 ||Discriminative Scoring for Speaker Recognition Based on I-vectors || [[媒体文件:140707-work_report.pdf|slides]]|| 王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/01|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/09 ||别凡虎 ||Research on Truncated Wave based VPR||[[媒体文件:140909-Truncated Speech based VPR.pdf|slides]] || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/15|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/22  || Miao Fan|| Large-scale Entity Relation Extraction based on Low-dimensional Representations (中文报告，博士开题)&lt;br /&gt;
||[[媒体文件:基于低维表示的大规模实体关系挖掘技术.pdf‎|slides]] || Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/29 || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/13  || Miao Fan|| The Frontier of Knowledge Embedding （英文报告）|| [[媒体文件:The_Frontier_of_Knowledge_Embedding.pdf‎|slides]]|| Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/20  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/27  || Li Yi || Phonemes, Features, and Syllables: Converting Onset and Rime Inventories to Consonants and Vowels||[[媒体文件:Lanzhou Phonemes, Features, and Syllables- fianl.pdf|paper]] [[媒体文件:Syllables and phonemes - 20141027.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/3   || 米吉提|| Automatic Speech Recognition of Agglutinative Language based on Lexicon Optimization||[[媒体文件:Mijit-slides-清华大学-2014-11-3.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/10  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/17  ||Dong Wang || Highly restricted keyword spotting for Uyghur using sparse analysis|| [[媒体文件:Highly Restricted Keyword Selection Based on Sparse Analysis.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/24  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/1  ||ZhongDa Xie ||Incorporating Fine-Grained Ontological Relations in Medical Document Ranking || [[媒体文件:Fine-grained_relations.pdf|slides]]|| Lantian Li &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/8  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/15  || 唐国瑜 || 跨语言话题分析关键技术研究 ||[[媒体文件:141205-答辩-TGY.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/22  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/29  || Askar || Language Mismatch in Speaker Recognition System||[[媒体文件:141229--askar.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/5  ||Lantian Li || Deep Neural Networks for Speaker Recognition || [[媒体文件:150104_Deep_Neural_Networks_for_Speaker_Recognition_LLT.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/12  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/19  || Dong Wang || Machine Learning Paradigms for Speech Recognition||[[媒体文件:Machine Learning Paradigms for Speech Recognition.pdf|slides]]  [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6423821 paper] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/26  || Chen Guorong || Information Transmission and Distribution on Web ||[[媒体文件:An_introduction_of_complex_network1.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot; |2015/3/9 || Dong Wang || Joint Deep Learning || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/3/16  || Dongxu Zhang || Knowledge learning from text data and knowledge bases || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/4/13  || Xuewei Zhang || Lasso-based Reverberation Suppression In Automatic Speech Recognition || [[媒体文件:Lasso-based Reverberation Suppression In Automatic Speech Recognition.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/5/11  || Dong Wang ||ASR and SID Research Frontier ||[[媒体文件:ASR and SID Research Frontier.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/23  || Zhiyuan Tang|| CTC learning|| [[媒体文件:CTC.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/30  || Mengyuan Zhao|| CNN-based music removal|| [[媒体文件:Music Removal by Convolutional Denoising.pdf | slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/3  || Zhiyuan Tang|| Networks of Memory|| [[媒体文件:Memory_net.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/7  || Yiqiao Pan|| Document Classification with Spherical Word Vectors||[[媒体文件:Document Classification with Spherical Word Vectors.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/14  || Dong Wang || Transfer Learning for Speech and Language Processing ||[[媒体文件:Transfer_Learning_for_Speech_and_Language_Processing.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/21  || Qixin Wang || Attention for poem generation ||[[媒体文件:Ijcai 2016.pptx|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/28  || Lantian Li || Max-margin metric learning for speaker recognition || [[媒体文件:Max-margin-Metric-Learning.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/4  || Zhiyong Zhang || Parallel training, MPE and natural gradient||[[媒体文件:20160104_张之勇_Large-scale Parallel Training in Speech Recognition.pdf|slides]]||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/18  || Dongxu Zhang || Memoryless Document Vector ||[[媒体文件:Memoryless_document_vector.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/14  || Zhiyuan Tang|| Oral presentation for &amp;quot;vMF-SNE: Embedding for Spherical Data&amp;quot;|| [[媒体文件:embedding.pdf|slides]] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/28  || Tianyi Luo || Review for Neural QA || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/29/CSLT_Weekly_Report--20160328.pdf slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/4/11  || Rong Liu || Recommendation in Youku || [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cslt%E5%AE%9E%E9%AA%8C%E5%AE%A4%E4%BA%A4%E6%B5%81.pptx slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/09 || Miao Fan || Learning contextual embeddings of knowledge base with entity descriptions.|| [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9c/Techreport_CSLT_2016_M.F..pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/16 || Yang Wang || Research on conversation thread detection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%AA%E6%B4%8B-%E6%AF%95%E8%AE%BE-CSLT.pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20 || Yang Wang &amp;amp;  Maoning Wang || Research on portfolio selection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/89/%E6%B1%AA%E6%B4%8B-%E9%87%91%E8%9E%8D%E7%AC%AC%E4%B8%80%E6%AC%A1%E5%88%86%E4%BA%AB.pdf slides1]  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%87%E6%8A%A5_%E8%B5%84%E4%BA%A7%E7%BB%84%E5%90%88%E4%B8%AD%E5%87%A0%E4%B8%AA%E8%AF%84%E4%BB%B7%E6%8C%87%E6%A0%87%E7%9A%84%E8%A7%A3%E9%87%8A.pdf slides2]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20  || Zhiyuan Tang || ICASSP 2016 summary || [[媒体文件:Note icassp16.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/23 || Dong Wang || graphical model and neural model || [[媒体文件:Graphic Model and Neural Model.pdf|slides]] [[媒体文件:Generative-Pdf.rar|papers]]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/02 || Zhiyuan Tang || Visualizing, Measuring and Understanding Neural Networks: A Brief Survey|| [[媒体文件:Nn analysis.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/03 || Yang Wang || Neural networks and genetic programming for financial forecasting || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/79/GeneticNN.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/05 || Yang Wang || Reinforcement Learning Models and Simulations || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/RRL_and_sim.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/08 || April Pu || Software Development Methodologies || [http://wangd.cslt.org/talks/pdf/april_software.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/12 || Yang Wang || Generative Adversarial Nets || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Generative_adversarial_network.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/22 || Zhiyuan Tang || INTERSPEECH 2016 summary || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/65/Interspeech16_review.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/30 || Dong Wang || Deep and sparse learning in speech and language: an overview || [http://wangd.cslt.org/talks/pdf/bics2016.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/2/17 || Yang Wang || Review of &amp;quot;Understanding Deep Learning Requires Rethinking Generalization&amp;quot; || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Review_understanding_deep_learning_requires_rethinking_generalization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/5 || Dong Wang || Deep speech factorization || [http://wangd.cslt.org/talks/pdf/Deep-Speech-Factorization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/8 || Shiyue Zhang || Convolutional Sequence to Sequence Learning  || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f3/Conv_seq2seq.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/12 || Shiyue Zhang || Memory-augmented Neural Machine Translation || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/36/Memory-augmented_Neural_Machine_Translation_.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/21 || Shiyue Zhang || Attention Is All You Need  || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/68/Attention_is_all_you_need.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Weekly_meeting</id>
		<title>Weekly meeting</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Weekly_meeting"/>
				<updated>2017-06-24T01:07:05Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*Location: FIT-1-304&lt;br /&gt;
*Time: Monday, 7:00 PM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker!! Title !! Materials !! On duty&lt;br /&gt;
|-&lt;br /&gt;
| 2012/08/27  ||Dong Wang  || Heterogeneous Convolutive Non-negative Sparse Coding ||[[媒体文件:Heterogeneous_convolutive_non-negative_sparse_coding.pdf|slides]] [http://homepages.inf.ed.ac.uk/v1dwang2/public/pdf/inerspeech2012-hetero.pdf paper] ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/03  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/10  || NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/17  ||WALEED ABDULLA||Auditory Based Feature Vectors for Speech Recognition ||[[媒体文件:AuditoryBasedFeatureVectors.pdf|slides]]||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2012/09/24  ||刘超|| N-gram FST indexing for Spoken Term Detection || [[媒体文件:120924-N_gram_FST_indexing_for_Spoken_Term_Detection-LC-0.pdf|slides]] ||尹聪&lt;br /&gt;
|-&lt;br /&gt;
|范淼||Micro-blogging, Wikipedia, Folksonomy, What's Next? ||[[媒体文件:120924-Micro-blogging, Wikipedia, Folksonomy, What's Next-FM--01-FM-.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/08 ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/15  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/10/22||Wu Xiaojun||speaker recognition in CSLT ||[[媒体文件:VPR_in_CSLT.pdf|slides]]||卡尔&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/10/29  ||王军||An overview of Automatic Speaker Diarization Systems || [[媒体文件:121027-Speaker Diarization-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/05  ||别凡虎||Experiments on Emotional Speaker Recognition||[[媒体文件:121104-Experiments_on_Emotional_Speaker_Recognition-BFH.pdf|slides]] ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/12  ||唐国瑜||Statistical Word Sense Improves Document Clustering ||[[媒体文件:121112_Statistical_Word_Sense_Improves_Document_Clustering_TGY.pdf‎ |slides]]||邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/19  ||张陈昊||TDSR with Long-term Features Based on Functional Data Analysis||[[媒体文件:121118-ISCSLP-FDA_SR-ZCH.pdf|slides]] ||王俊俊&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/26  ||王琳琳||Time-Varying Speaker Recognition: An Introduction||[[媒体文件:121126-Time_Varying_Speaker_Recognition_I-Wll.pdf‎|slides]] ||龚宬&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/03  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/10  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/17  ||No meeting|| || ||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/01/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/01/07  ||王军||基于DF-MAP的说话人模型训练方法||[[媒体文件:130107-基于DFMAP的说话人模型训练方法-WJ.pdf|slides]] ||唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/01/14  ||王东|| Computing in CSLT ||[[媒体文件:Computing_in_CSLT.pdf|slides]] ||王琳琳&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/04  ||王军||Sequential Adaptive Learning for Speaker Verification ||[[媒体文件:130301-Sequential adaptive learning for speaker verification-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/11  || Du Jinle|| VAD stuff || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/18  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/25  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/08  || 张陈昊|| A Fishervoice based Feature Fusion Method for SUSR ||[[媒体文件:130408-FisherVoice-ZCH.pdf|slides]] ||谢仲达&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/15  ||龚宬|| An Exploration on Influence Factors of VAD's Performance in Speaker Recognition ||[[媒体文件:130415-An_Exploration_on_Influence_Factors_of_VAD-GC.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/22  ||王俊俊 || Understanding the Query: THCIB and THUIS at NTCIR-10 Intent Task ||[[媒体文件:130422-Understanding_the_Query-WJJ.pdf|slides‎]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/29  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/06  ||别凡虎 ||MLLR on Emotional Speaker Recognition ||[[媒体文件:130506-MLLR on Emotional Speaker Recognition-BFH.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/13  ||刘超 || The Use of Deep Neural Network for Speech Recognition || [[媒体文件:130513-the_use_of_dnn_for_asr-lc.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/27  ||王琳琳|| 说话人识别中的时变鲁棒性问题研究 || [[媒体文件:130527-TVSV-Wll.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/03  ||王俊俊|| 汉语搜索结果聚类系统研究与实现 || [[媒体文件:130601-毕业答辩-02-WJJ.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/10  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/17  ||范淼 || Relation Extraction ||[[媒体文件:130617-relation_extraction-fm.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/24  ||唐国瑜 || Incorporating Statistical Word Senses in Topic Model  ||[[媒体文件:130624_Incorporating Statistical Word Senses in Topic Model_TGY.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/08  ||  || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/15  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/09  ||王东 || Research Frontier in Speech Technology||[[媒体文件:Research Frontier in Speech Technology.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/16  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/30  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/14  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/21  ||范淼 ||Transduction Classification with Matrix Completion （中文报告）||[[媒体文件: Transduction_Classifiction_with_Matrix_Completion.pdf‎|slides]] [http://pages.cs.wisc.edu/~jerryzhu/pub/mc4ssl_FINAL.pdf paper]|| 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/28  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/04  || 王军 || 基于i-vector的intersession补偿及打分方法(综述) || [[媒体文件:131104-ivecto下intersession补偿及打分方法--01-WJ-.pdf‎|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/11  ||张陈昊 ||PLDA介绍及PLDA在说话人识别中的应用 ||[[媒体文件:PLDA.pdf|slides]] || 唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/18  ||别凡虎 ||i-vector理论介绍（讨论）||[[媒体文件:131118-i-vector_and_GMM-UBM-BFH.pdf|slides]]‎  ||王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/25  ||刘超 || Pruning Neural Networks By Optimal Brain Damage(综述)||[[媒体文件:131125-OBD-LC-01.pdf|slides]] ||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/02  ||范淼 ||Distant Supervision for Relation Extraction with Matrix Completion (English report)||[[媒体文件:131202-DRMC-FM-01.pdf|slides]] || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/09  || Dong Wang|| Introduction to the HMM-based speech synthesis||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/16  ||张陈昊 ||An introduction to basic units in speech research ||[[媒体文件:131215-Phonology-ZCH.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || Dong Wang|| Introduction to the HMM-based speech synthesis (2)||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/30  ||刘荣 || continuous space language model||[[媒体文件:Cslm-cslt.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/06  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/13  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/02/24  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/03  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/10  ||范淼|| Distant Supervision for Information Extraction (English report)|| || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/17  ||唐国瑜 || Topic Models Incorporating Statistical Word Senses || [[媒体文件:TMISWS_For_CICLing2014.pdf|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/24  ||孟祥涛 || Noisy training for Deep Neural Networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/31  ||范淼|| Translating Embeddings for Modeling Multi-relational Data (Chinese report) || [https://www.hds.utc.fr/everest/lib/exe/fetch.php?id=en%3Atranse&amp;amp;cache=cache&amp;amp;media=en:cr_paper_nips13.pdf paper]||李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/14  || Wang Jun|| I-vector and PLDA in depth ||[[媒体文件:131104-ivector-microsoft-wj.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/21  || 邱晗||Normalization of Chinese event sentence patterns ||[[媒体文件:140421-汉语事件句式规范化-QH.pdf‎|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/28  || 唐国瑜|| Some papers in CICLing2014 ||[[媒体文件:Some_papers_in_CICling2014.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/05  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/12  || 卡尔|| paper introduction || [[媒体文件:Acoustic Factor Analysis.pdf|slides]] || 邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2014/05/19  || 邱晗|| CCG derivation tree reconstruction for Chinese event sentence patterns ||[[媒体文件:140519-CCG_reConstruction.pdf‎|slides]]‎|| 卡尔&lt;br /&gt;
|-&lt;br /&gt;
|Liu Chao|| master proposal: sparse and deep neural networks || [[媒体文件:140519-proposal-LC-01.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| || Liu Chao|| 2nd master proposal: sparse and deep neural networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/16  || 别凡虎 || Truncated Wave based VPR and Some Recent Work || [[媒体文件:140614-Truncated_Speech_based_VPR.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/23  || 别凡虎 || Block-wise training for I-vector || [[媒体文件:140623-Block-wise training for I-vector.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/07/07||王军 ||Discriminative Scoring for Speaker Recognition Based on I-vectors || [[媒体文件:140707-work_report.pdf|slides]]|| 王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/01|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/09 ||别凡虎 ||Research on Truncated Wave based VPR||[[媒体文件:140909-Truncated Speech based VPR.pdf|slides]] || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/15|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/22  || Miao Fan|| Large-scale Entity Relation Extraction based on Low-dimensional Representations (Chinese report, PhD thesis proposal)&lt;br /&gt;
||[[媒体文件:基于低维表示的大规模实体关系挖掘技术.pdf‎|slides]] || Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/29 || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/13  || Miao Fan|| The Frontier of Knowledge Embedding (English report)|| [[媒体文件:The_Frontier_of_Knowledge_Embedding.pdf‎|slides]]|| Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/20  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/27  || Li Yi || Phonemes, Features, and Syllables: Converting Onset and Rime Inventories to Consonants and Vowels||[[媒体文件:Lanzhou Phonemes, Features, and Syllables- fianl.pdf|paper]] [[媒体文件:Syllables and phonemes - 20141027.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/3   || 米吉提|| Automatic Speech Recognition of Agglutinative Language based on Lexicon Optimization||[[媒体文件:Mijit-slides-清华大学-2014-11-3.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/10  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/17  ||Dong Wang || Highly restricted keyword spotting for Uyghur using sparse analysis|| [[媒体文件:Highly Restricted Keyword Selection Based on Sparse Analysis.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/24  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/1  ||ZhongDa Xie ||Incorporating Fine-Grained Ontological Relations in Medical Document Ranking || [[媒体文件:Fine-grained_relations.pdf|slides]]|| Lantian Li &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/8  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/15  || 唐国瑜 || Research on key technologies for cross-lingual topic analysis ||[[媒体文件:141205-答辩-TGY.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/22  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/29  || Askar || Language Mismatch in Speaker Recognition System||[[媒体文件:141229--askar.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/5  ||Lantian Li || Deep Neural Networks for Speaker Recognition || [[媒体文件:150104_Deep_Neural_Networks_for_Speaker_Recognition_LLT.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/12  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/19  || Dong Wang || Machine Learning Paradigms for Speech Recognition||[[媒体文件:Machine Learning Paradigms for Speech Recognition.pdf|slides]]  [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6423821 paper] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/26  || Chen Guorong || Information Transmission and Distribution on Web ||[[媒体文件:An_introduction_of_complex_network1.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot; |2015/3/9 || Dong Wang || Joint Deep Learning || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/3/16  || Dongxu Zhang || Knowledge learning from text data and knowledge bases || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/4/13  || Xuewei Zhang || Lasso-based Reverberation Suppression In Automatic Speech Recognition || [[媒体文件:Lasso-based Reverberation Suppression In Automatic Speech Recognition.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/5/11  || Dong Wang ||ASR and SID Research Frontier ||[[媒体文件:ASR and SID Research Frontier.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/23  || Zhiyuan Tang|| CTC learning|| [[媒体文件:CTC.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/30  || Mengyuan Zhao|| CNN-based music removal|| [[媒体文件:Music Removal by Convolutional Denoising.pdf | slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/3  || Zhiyuan Tang|| Networks of Memory|| [[媒体文件:Memory_net.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/7  || Yiqiao Pan|| Document Classification with Spherical Word Vectors||[[媒体文件:Document Classification with Spherical Word Vectors.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/14  || Dong Wang || Transfer Learning for Speech and Language Processing ||[[媒体文件:Transfer_Learning_for_Speech_and_Language_Processing.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/21  || Qixin Wang || Attention for poem generation ||[[媒体文件:Ijcai 2016.pptx|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/28  || Lantian Li || Max-margin metric learning for speaker recognition || [[媒体文件:Max-margin-Metric-Learning.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/4  || Zhiyong Zhang || Parallel training, MPE and natural gradient||[[媒体文件:20160104_张之勇_Large-scale Parallel Training in Speech Recognition.pdf|slides]]||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/18  || Dongxu Zhang || Memoryless Document Vector ||[[媒体文件:Memoryless_document_vector.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/14  || Zhiyuan Tang|| Oral presentation for &amp;quot;vMF-SNE: Embedding for Spherical Data&amp;quot;|| [[媒体文件:embedding.pdf|slides]] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/28  || Tianyi Luo || Review for Neural QA || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/29/CSLT_Weekly_Report--20160328.pdf slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/4/11  || Rong Liu || Recommendation in Youku || [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cslt%E5%AE%9E%E9%AA%8C%E5%AE%A4%E4%BA%A4%E6%B5%81.pptx slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/09 || Miao Fan || Learning contextual embeddings of knowledge base with entity descriptions.|| [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9c/Techreport_CSLT_2016_M.F..pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/16 || Yang Wang || Research on conversation thread detection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%AA%E6%B4%8B-%E6%AF%95%E8%AE%BE-CSLT.pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20 || Yang Wang &amp;amp;  Maoning Wang || Research on portfolio selection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/89/%E6%B1%AA%E6%B4%8B-%E9%87%91%E8%9E%8D%E7%AC%AC%E4%B8%80%E6%AC%A1%E5%88%86%E4%BA%AB.pdf slides1]  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%87%E6%8A%A5_%E8%B5%84%E4%BA%A7%E7%BB%84%E5%90%88%E4%B8%AD%E5%87%A0%E4%B8%AA%E8%AF%84%E4%BB%B7%E6%8C%87%E6%A0%87%E7%9A%84%E8%A7%A3%E9%87%8A.pdf slides2]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20  || Zhiyuan Tang || ICASSP 2016 summary || [[媒体文件:Note icassp16.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/23 || Dong Wang || graphical model and neural model || [[媒体文件:Graphic Model and Neural Model.pdf|slides]] [[媒体文件:Generative-Pdf.rar|papers]]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/02 || Zhiyuan Tang || Visualizing, Measuring and Understanding Neural Networks: A Brief Survey|| [[媒体文件:Nn analysis.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/03 || Yang Wang || Neural networks and genetic programming for financial forecasting || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/79/GeneticNN.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/05 || Yang Wang || Reinforcement Learning Models and Simulations || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/RRL_and_sim.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/08 || April Pu || SOFTWARE DEVELOPMENT METHODOLOGIES || [http://wangd.cslt.org/talks/pdf/april_software.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/12 || Yang Wang || Generative Adversarial Nets || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Generative_adversarial_network.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/22 || Zhiyuan Tang || INTERSPEECH 2016 summary || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/65/Interspeech16_review.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/30 || Dong Wang || Deep and sparse learning in speech and language: an overview || [http://wangd.cslt.org/talks/pdf/bics2016.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/2/17 || Yang Wang || Review understanding deep learning requires rethinking generalization || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Review_understanding_deep_learning_requires_rethinking_generalization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/5 || Dong Wang || Deep speech factorization || [http://wangd.cslt.org/talks/pdf/Deep-Speech-Factorization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/8 || Shiyue Zhang || Convolutional Sequence to Sequence Learning  || [ slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/12 || Shiyue Zhang || Memory-augmented Neural Machine Translation || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/36/Memory-augmented_Neural_Machine_Translation_.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/21 || Shiyue Zhang || Attention Is All You Need  || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/68/Attention_is_all_you_need.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Attention_is_all_you_need.pptx</id>
		<title>文件:Attention is all you need.pptx</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Attention_is_all_you_need.pptx"/>
				<updated>2017-06-24T01:06:30Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：paper sharing: Attention_is_all_you_need&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;paper sharing: Attention_is_all_you_need&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Weekly_meeting</id>
		<title>Weekly meeting</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Weekly_meeting"/>
				<updated>2017-06-24T01:05:03Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*Location: FIT-1-304&lt;br /&gt;
*Time: Monday, 7:00 PM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker!! Title !! Materials !! On duty&lt;br /&gt;
|-&lt;br /&gt;
| 2012/08/27  ||Dong Wang  || Heterogeneous Convolutive Non-negative Sparse Coding ||[[媒体文件:Heterogeneous_convolutive_non-negative_sparse_coding.pdf|slides]] [http://homepages.inf.ed.ac.uk/v1dwang2/public/pdf/inerspeech2012-hetero.pdf paper] ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/03  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/10  || NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/17  ||WALEED ABDULLA||Auditory Based Feature Vectors for Speech Recognition ||[[媒体文件:AuditoryBasedFeatureVectors.pdf|slides]]||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2012/09/24  ||刘超|| N-gram FST indexing for Spoken Term Detection || [[媒体文件:120924-N_gram_FST_indexing_for_Spoken_Term_Detection-LC-0.pdf|slides]] ||尹聪&lt;br /&gt;
|-&lt;br /&gt;
|范淼||Micro-blogging, Wikipedia, Folksonomy, What's Next? ||[[媒体文件:120924-Micro-blogging, Wikipedia, Folksonomy, What's Next-FM--01-FM-.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/08 ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/15  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/10/22||Wu Xiaojun||speaker recognition in CSLT ||[[媒体文件:VPR_in_CSLT.pdf|slides]]||卡尔&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/10/29  ||王军||An overview of Automatic Speaker Diarization Systems || [[媒体文件:121027-Speaker Diarization-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/05  ||别凡虎||Experiments on Emotional Speaker Recognition||[[媒体文件:121104-Experiments_on_Emotional_Speaker_Recognition-BFH.pdf|slides]] ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/12  ||唐国瑜||Statistical Word Sense Improves Document Clustering ||[[媒体文件:121112_Statistical_Word_Sense_Improves_Document_Clustering_TGY.pdf‎ |slides]]||邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/19  ||张陈昊||TDSR with Long-term Features Based on Functional Data Analysis||[[媒体文件:121118-ISCSLP-FDA_SR-ZCH.pdf|slides]] ||王俊俊&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/26  ||王琳琳||Time-Varying Speaker Recognition: An Introduction||[[媒体文件:121126-Time_Varying_Speaker_Recognition_I-Wll.pdf‎|slides]] ||龚宬&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/03  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/10  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/17  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2013/01/07  ||王军||A DF-MAP based speaker model training method||[[媒体文件:130107-基于DFMAP的说话人模型训练方法-WJ.pdf|slides]] ||唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/01/14  ||王东|| Computing in CSLT ||[[媒体文件:Computing_in_CSLT.pdf|slides]] ||王琳琳&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/04  ||王军||Sequential Adaptive Learning for Speaker Verification ||[[媒体文件:130301-Sequential adaptive learning for speaker verification-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/11  || Du Jinle|| VAD stuff || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/18  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/25  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/08  || 张陈昊|| A Fishervoice based Feature Fusion Method for SUSR ||[[媒体文件:130408-FisherVoice-ZCH.pdf|slides]] ||谢仲达&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/15  ||龚宬|| An Exploration on Influence Factors of VAD's Performance in Speaker Recognition ||[[媒体文件:130415-An_Exploration_on_Influence_Factors_of_VAD-GC.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/22  ||王俊俊 || Understanding the Query: THCIB and THUIS at NTCIR-10 Intent Task ||[[媒体文件:130422-Understanding_the_Query-WJJ.pdf|slides‎]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/29  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/06  ||别凡虎 ||MLLR on Emotional Speaker Recognition ||[[媒体文件:130506-MLLR on Emotional Speaker Recognition-BFH.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/13  ||刘超 || The Use of Deep Neural Network for Speech Recognition || [[媒体文件:130513-the_use_of_dnn_for_asr-lc.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/27  ||王琳琳|| Research on time-varying robustness in speaker recognition || [[媒体文件:130527-TVSV-Wll.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/03  ||王俊俊|| Research and implementation of a Chinese search result clustering system || [[媒体文件:130601-毕业答辩-02-WJJ.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/10  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/17  ||范淼 || Relation Extraction ||[[媒体文件:130617-relation_extraction-fm.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/24  ||唐国瑜 || Incorporating Statistical Word Senses in Topic Model  ||[[媒体文件:130624_Incorporating Statistical Word Senses in Topic Model_TGY.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/08  ||  || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/15  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/09  ||王东 || Research Frontier in Speech Technology||[[媒体文件:Research Frontier in Speech Technology.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/16  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/30  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/14  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/21  ||范淼 ||Transduction Classification with Matrix Completion (Chinese report)||[[媒体文件: Transduction_Classifiction_with_Matrix_Completion.pdf‎|slides]] [http://pages.cs.wisc.edu/~jerryzhu/pub/mc4ssl_FINAL.pdf paper]|| 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/28  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/04  || 王军 || Intersession compensation and scoring methods for i-vectors (survey) || [[媒体文件:131104-ivecto下intersession补偿及打分方法--01-WJ-.pdf‎|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/11  ||张陈昊 ||An introduction to PLDA and its applications in speaker recognition ||[[媒体文件:PLDA.pdf|slides]] || 唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/18  ||别凡虎 ||An introduction to i-vector theory (discussion)||[[媒体文件:131118-i-vector_and_GMM-UBM-BFH.pdf|slides]]‎  ||王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/25  ||刘超 || Pruning Neural Networks By Optimal Brain Damage (survey)||[[媒体文件:131125-OBD-LC-01.pdf|slides]] ||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/02  ||范淼 ||Distant Supervision for Relation Extraction with Matrix Completion (English report)||[[媒体文件:131202-DRMC-FM-01.pdf|slides]] || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/09  || Dong Wang|| Introduction to the HMM-based speech synthesis||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/16  ||张陈昊 ||An introduction to basic units in speech research ||[[媒体文件:131215-Phonology-ZCH.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || Dong Wang|| Introduction to the HMM-based speech synthesis (2)||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/30  ||刘荣 || continuous space language model||[[媒体文件:Cslm-cslt.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/06  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/13  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/02/24  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/03  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/10  ||范淼|| Distant Supervision for Information Extraction (English report)|| || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/17  ||唐国瑜 || Topic Models Incorporating Statistical Word Senses || [[媒体文件:TMISWS_For_CICLing2014.pdf|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/24  ||孟祥涛 || Noisy training for Deep Neural Networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/31  ||范淼|| Translating Embeddings for Modeling Multi-relational Data (Chinese report) || [https://www.hds.utc.fr/everest/lib/exe/fetch.php?id=en%3Atranse&amp;amp;cache=cache&amp;amp;media=en:cr_paper_nips13.pdf paper]||李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/14  || Wang Jun|| I-vector and PLDA in depth ||[[媒体文件:131104-ivector-microsoft-wj.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/21  || 邱晗||Normalization of Chinese event sentence patterns ||[[媒体文件:140421-汉语事件句式规范化-QH.pdf‎|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/28  || 唐国瑜|| Some papers in CICLing2014 ||[[媒体文件:Some_papers_in_CICling2014.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/05  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/12  || 卡尔|| paper introduction || [[媒体文件:Acoustic Factor Analysis.pdf|slides]] || 邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2014/05/19  || 邱晗|| CCG derivation tree reconstruction for Chinese event sentence patterns ||[[媒体文件:140519-CCG_reConstruction.pdf‎|slides]]‎|| 卡尔&lt;br /&gt;
|-&lt;br /&gt;
|Liu Chao|| master proposal: sparse and deep neural networks || [[媒体文件:140519-proposal-LC-01.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| || Liu Chao|| 2nd master proposal: sparse and deep neural networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/16  || 别凡虎 || Truncated Wave based VPR and Some Recent Work || [[媒体文件:140614-Truncated_Speech_based_VPR.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/23  || 别凡虎 || Block-wise training for I-vector || [[媒体文件:140623-Block-wise training for I-vector.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/07/07||王军 ||Discriminative Scoring for Speaker Recognition Based on I-vectors || [[媒体文件:140707-work_report.pdf|slides]]|| 王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/01|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/09 ||别凡虎 ||Research on Truncated Wave based VPR||[[媒体文件:140909-Truncated Speech based VPR.pdf|slides]] || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/15|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/22  || Miao Fan|| Large-scale Entity Relation Extraction based on Low-dimensional Representations (Chinese report, PhD thesis proposal)&lt;br /&gt;
||[[媒体文件:基于低维表示的大规模实体关系挖掘技术.pdf‎|slides]] || Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/29 || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/13  || Miao Fan|| The Frontier of Knowledge Embedding (English report)|| [[媒体文件:The_Frontier_of_Knowledge_Embedding.pdf‎|slides]]|| Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/20  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/27  || Li Yi || Phonemes, Features, and Syllables: Converting Onset and Rime Inventories to Consonants and Vowels||[[媒体文件:Lanzhou Phonemes, Features, and Syllables- fianl.pdf|paper]] [[媒体文件:Syllables and phonemes - 20141027.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/3   || 米吉提|| Automatic Speech Recognition of Agglutinative Language based on Lexicon Optimization||[[媒体文件:Mijit-slides-清华大学-2014-11-3.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/10  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/17  ||Dong Wang || Highly restricted keyword spotting for Uyghur using sparse analysis|| [[媒体文件:Highly Restricted Keyword Selection Based on Sparse Analysis.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/24  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/1  ||ZhongDa Xie ||Incorporating Fine-Grained Ontological Relations in Medical Document Ranking || [[媒体文件:Fine-grained_relations.pdf|slides]]|| Lantian Li &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/8  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/15  || 唐国瑜 || Research on key technologies for cross-lingual topic analysis ||[[媒体文件:141205-答辩-TGY.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/22  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/29  || Askar || Language Mismatch in Speaker Recognition System||[[媒体文件:141229--askar.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/5  ||Lantian Li || Deep Neural Networks for Speaker Recognition || [[媒体文件:150104_Deep_Neural_Networks_for_Speaker_Recognition_LLT.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/12  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/19  || Dong Wang || Machine Learning Paradigms for Speech Recognition||[[媒体文件:Machine Learning Paradigms for Speech Recognition.pdf|slides]]  [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6423821 paper] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/26  || Chen Guorong || Information Transmission and Distribution on Web ||[[媒体文件:An_introduction_of_complex_network1.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot; |2015/3/9 || Dong Wang || Joint Deep Learning || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/3/16  || Dongxu Zhang || Knowledge learning from text data and knowledge bases || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/4/13  || Xuewei Zhang || Lasso-based Reverberation Suppression In Automatic Speech Recognition || [[媒体文件:Lasso-based Reverberation Suppression In Automatic Speech Recognition.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/5/11  || Dong Wang ||ASR and SID Research Frontier ||[[媒体文件:ASR and SID Research Frontier.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/23  || Zhiyuan Tang|| CTC learning|| [[媒体文件:CTC.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/30  || Mengyuan Zhao|| CNN-based music removal|| [[媒体文件:Music Removal by Convolutional Denoising.pdf | slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/3  || Zhiyuan Tang|| Networks of Memory|| [[媒体文件:Memory_net.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/7  || Yiqiao Pan|| Document Classification with Spherical Word Vectors||[[媒体文件:Document Classification with Spherical Word Vectors.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/14  || Dong Wang || Transfer Learning for Speech and Language Processing ||[[媒体文件:Transfer_Learning_for_Speech_and_Language_Processing.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/21  || Qixin Wang || Attention for poem generation ||[[媒体文件:Ijcai 2016.pptx|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/28  || Lantian Li || Max-margin metric learning for speaker recognition || [[媒体文件:Max-margin-Metric-Learning.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/4  || Zhiyong Zhang || Parallel training, MPE and natural gradient||[[媒体文件:20160104_张之勇_Large-scale Parallel Training in Speech Recognition.pdf|slides]]||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/18  || Dongxu Zhang || Memoryless Document Vector ||[[媒体文件:Memoryless_document_vector.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/14  || Zhiyuan Tang|| Oral presentation for &amp;quot;vMF-SNE: Embedding for Spherical Data&amp;quot;|| [[媒体文件:embedding.pdf|slides]] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/28  || Tianyi Luo || Review for Neural QA || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/29/CSLT_Weekly_Report--20160328.pdf slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/4/11  || Rong Liu || Recommendation in Youku || [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cslt%E5%AE%9E%E9%AA%8C%E5%AE%A4%E4%BA%A4%E6%B5%81.pptx slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/09 || Miao Fan || Learning contextual embeddings of knowledge base with entity descriptions.|| [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9c/Techreport_CSLT_2016_M.F..pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/16 || Yang Wang || Research on conversation thread detection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%AA%E6%B4%8B-%E6%AF%95%E8%AE%BE-CSLT.pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20 || Yang Wang &amp;amp;  Maoning Wang || Research on portfolio selection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/89/%E6%B1%AA%E6%B4%8B-%E9%87%91%E8%9E%8D%E7%AC%AC%E4%B8%80%E6%AC%A1%E5%88%86%E4%BA%AB.pdf slides1]  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%87%E6%8A%A5_%E8%B5%84%E4%BA%A7%E7%BB%84%E5%90%88%E4%B8%AD%E5%87%A0%E4%B8%AA%E8%AF%84%E4%BB%B7%E6%8C%87%E6%A0%87%E7%9A%84%E8%A7%A3%E9%87%8A.pdf slides2]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20  || Zhiyuan Tang || ICASSP 2016 summary || [[媒体文件:Note icassp16.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/23 || Dong Wang || graphical model and neural model || [[媒体文件:Graphic Model and Neural Model.pdf|slides]] [[媒体文件:Generative-Pdf.rar|papers]]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/02 || Zhiyuan Tang || Visualizing, Measuring and Understanding Neural Networks: A Brief Survey|| [[媒体文件:Nn analysis.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/03 || Yang Wang || Neural networks and genetic programming for financial forecasting || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/79/GeneticNN.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/05 || Yang Wang || Reinforcement Learning Models and Simulations || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/RRL_and_sim.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/08 || April Pu || SOFTWARE DEVELOPMENT METHODOLOGIES || [http://wangd.cslt.org/talks/pdf/april_software.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/12 || Yang Wang || Generative Adversarial Nets || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Generative_adversarial_network.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/22 || Zhiyuan Tang || INTERSPEECH 2016 summary || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/65/Interspeech16_review.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/30 || Dong Wang || Deep and sparse learning in speech and language: an overview || [http://wangd.cslt.org/talks/pdf/bics2016.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/2/17 || Yang Wang || Review understanding deep learning requires rethinking generalization || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Review_understanding_deep_learning_requires_rethinking_generalization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/5 || Dong Wang || Deep speech factorization || [http://wangd.cslt.org/talks/pdf/Deep-Speech-Factorization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/8 || Shiyue Zhang || Convolutional Sequence to Sequence Learning  || [ slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/12 || Shiyue Zhang || Memory-augmented Neural Machine Translation || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/36/Memory-augmented_Neural_Machine_Translation_.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/21 || Shiyue Zhang || Attention Is All You Need  || [ slides] || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Weekly_meeting</id>
		<title>Weekly meeting</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Weekly_meeting"/>
				<updated>2017-06-24T01:03:04Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*Location: FIT-1-304&lt;br /&gt;
*Time: Monday, 7:00 PM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker!! Title !! Materials !! On duty&lt;br /&gt;
|-&lt;br /&gt;
| 2012/08/27  ||Dong Wang  || Heterogeneous Convolutive Non-negative Sparse Coding ||[[媒体文件:Heterogeneous_convolutive_non-negative_sparse_coding.pdf|slides]] [http://homepages.inf.ed.ac.uk/v1dwang2/public/pdf/inerspeech2012-hetero.pdf paper] ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/03  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/10  || NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/17  ||WALEED ABDULLA||Auditory Based Feature Vectors for Speech Recognition ||[[媒体文件:AuditoryBasedFeatureVectors.pdf|slides]]||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2012/09/24  ||刘超|| N-gram FST indexing for Spoken Term Detection || [[媒体文件:120924-N_gram_FST_indexing_for_Spoken_Term_Detection-LC-0.pdf|slides]] ||尹聪&lt;br /&gt;
|-&lt;br /&gt;
|范淼||Micro-blogging, Wikipedia, Folksonomy, What's Next? ||[[媒体文件:120924-Micro-blogging, Wikipedia, Folksonomy, What's Next-FM--01-FM-.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/08 ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/15  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/10/22||Wu Xiaojun||speaker recognition in CSLT ||[[媒体文件:VPR_in_CSLT.pdf|slides]]||卡尔&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/10/29  ||王军||An overview of Automatic Speaker Diarization Systems || [[媒体文件:121027-Speaker Diarization-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/05  ||别凡虎||Experiments on Emotional Speaker Recognition||[[媒体文件:121104-Experiments_on_Emotional_Speaker_Recognition-BFH.pdf|slides]] ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/12  ||唐国瑜||Statistical Word Sense Improves Document Clustering ||[[媒体文件:121112_Statistical_Word_Sense_Improves_Document_Clustering_TGY.pdf‎ |slides]]||邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/19  ||张陈昊||TDSR with Long-term Features Based on Functional Data Analysis||[[媒体文件:121118-ISCSLP-FDA_SR-ZCH.pdf|slides]] ||王俊俊&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/26  ||王琳琳||Time-Varying Speaker Recognition: An Introduction||[[媒体文件:121126-Time_Varying_Speaker_Recognition_I-Wll.pdf‎|slides]] ||龚宬&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/03  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/10  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/17  ||No meeting|| || ||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/01/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/01/07  ||王军||DF-MAP-based Speaker Model Training||[[媒体文件:130107-基于DFMAP的说话人模型训练方法-WJ.pdf|slides]] ||唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/01/14  ||王东|| Computing in CSLT ||[[媒体文件:Computing_in_CSLT.pdf|slides]] ||王琳琳&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/04  ||王军||Sequential Adaptive Learning for Speaker Verification ||[[媒体文件:130301-Sequential adaptive learning for speaker verification-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/11  || Du Jinle|| VAD stuff || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/18  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/25  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/08  || 张陈昊|| A Fishervoice based Feature Fusion Method for SUSR ||[[媒体文件:130408-FisherVoice-ZCH.pdf|slides]] ||谢仲达&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/15  ||龚宬|| An Exploration on Influence Factors of VAD's Performance in Speaker Recognition ||[[媒体文件:130415-An_Exploration_on_Influence_Factors_of_VAD-GC.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/22  ||王俊俊 || Understanding the Query: THCIB and THUIS at NTCIR-10 Intent Task ||[[媒体文件:130422-Understanding_the_Query-WJJ.pdf|slides‎]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/29  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/06  ||别凡虎 ||MLLR on Emotional Speaker Recognition ||[[媒体文件:130506-MLLR on Emotional Speaker Recognition-BFH.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/13  ||刘超 || The Use of Deep Neural Network for Speech Recognition || [[媒体文件:130513-the_use_of_dnn_for_asr-lc.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/27  ||王琳琳|| Research on Time-Varying Robustness in Speaker Recognition || [[媒体文件:130527-TVSV-Wll.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/03  ||王俊俊|| Research and Implementation of a Chinese Search Result Clustering System || [[媒体文件:130601-毕业答辩-02-WJJ.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/10  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/17  ||范淼 || Relation Extraction ||[[媒体文件:130617-relation_extraction-fm.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/24  ||唐国瑜 || Incorporating Statistical Word Senses in Topic Model  ||[[媒体文件:130624_Incorporating Statistical Word Senses in Topic Model_TGY.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/08  ||  || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/15  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/09  ||王东 || Research Frontier in Speech Technology||[[媒体文件:Research Frontier in Speech Technology.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/16  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/30  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/14  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/21  ||范淼 ||Transduction Classification with Matrix Completion (report in Chinese)||[[媒体文件: Transduction_Classifiction_with_Matrix_Completion.pdf‎|slides]] [http://pages.cs.wisc.edu/~jerryzhu/pub/mc4ssl_FINAL.pdf paper]|| 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/28  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/04  || 王军 || i-vector-based Intersession Compensation and Scoring Methods (survey) || [[媒体文件:131104-ivecto下intersession补偿及打分方法--01-WJ-.pdf‎|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/11  ||张陈昊 ||Introduction to PLDA and Its Application in Speaker Recognition ||[[媒体文件:PLDA.pdf|slides]] || 唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/18  ||别凡虎 ||Introduction to i-vector Theory (discussion)||[[媒体文件:131118-i-vector_and_GMM-UBM-BFH.pdf|slides]]‎  ||王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/25  ||刘超 || Pruning Neural Networks By Optimal Brain Damage (survey)||[[媒体文件:131125-OBD-LC-01.pdf|slides]] ||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/02  ||范淼 ||Distant Supervision for Relation Extraction with Matrix Completion (report in English)||[[媒体文件:131202-DRMC-FM-01.pdf|slides]] || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/09  || Dong Wang|| Introduction to the HMM-based speech synthesis||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/16  ||张陈昊 ||Introduction to Basic Units in Speech Research ||[[媒体文件:131215-Phonology-ZCH.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || Dong Wang|| Introduction to the HMM-based speech synthesis (2)||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/30  ||刘荣 || continuous space language model||[[媒体文件:Cslm-cslt.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/06  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/13  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/02/24  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/03  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/10  ||范淼|| Distant Supervision for Information Extraction (report in English)|| || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/17  ||唐国瑜 || Topic Models Incorporating Statistical Word Senses || [[媒体文件:TMISWS_For_CICLing2014.pdf|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/24  ||孟祥涛 || Noisy training for Deep Neural Networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/31  ||范淼|| Translating Embeddings for Modeling Multi-relational Data (report in Chinese) || [https://www.hds.utc.fr/everest/lib/exe/fetch.php?id=en%3Atranse&amp;amp;cache=cache&amp;amp;media=en:cr_paper_nips13.pdf paper]||李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/14  || Wang Jun|| I-vector and PLDA in depth ||[[媒体文件:131104-ivector-microsoft-wj.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/21  || 邱晗||Normalization of Chinese Event Sentence Patterns ||[[媒体文件:140421-汉语事件句式规范化-QH.pdf‎|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/28  || 唐国瑜|| Some papers in CICLing2014 ||[[媒体文件:Some_papers_in_CICling2014.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/05  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/12  || 卡尔|| paper introduction || [[媒体文件:Acoustic Factor Analysis.pdf|slides]] || 邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2014/05/19  || 邱晗|| CCG Derivation Tree Reconstruction for Chinese Event Sentences ||[[媒体文件:140519-CCG_reConstruction.pdf‎|slides]]‎|| 卡尔&lt;br /&gt;
|-&lt;br /&gt;
|Liu Chao|| master proposal: sparse and deep neural networks || [[媒体文件:140519-proposal-LC-01.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| || Liu Chao|| 2nd master proposal: sparse and deep neural networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/16  || 别凡虎 || Truncated Wave based VPR and Some Recent Work || [[媒体文件:140614-Truncated_Speech_based_VPR.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/23  || 别凡虎 || Block-wise training for I-vector || [[媒体文件:140623-Block-wise training for I-vector.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/07/07||王军 ||Discriminative Scoring for Speaker Recognition Based on I-vectors || [[媒体文件:140707-work_report.pdf|slides]]|| 王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/01|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/09 ||别凡虎 ||Research on Truncated Wave based VPR||[[媒体文件:140909-Truncated Speech based VPR.pdf|slides]] || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/15|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/22  || Miao Fan|| Large-scale Entity Relation Extraction based on Low-dimensional Representations (report in Chinese, PhD proposal)&lt;br /&gt;
||[[媒体文件:基于低维表示的大规模实体关系挖掘技术.pdf‎|slides]] || Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/29 || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/13  || Miao Fan|| The Frontier of Knowledge Embedding (report in English)|| [[媒体文件:The_Frontier_of_Knowledge_Embedding.pdf‎|slides]]|| Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/20  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/27  || Li Yi || Phonemes, Features, and Syllables: Converting Onset and Rime Inventories to Consonants and Vowels||[[媒体文件:Lanzhou Phonemes, Features, and Syllables- fianl.pdf|paper]] [[媒体文件:Syllables and phonemes - 20141027.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/3   || 米吉提|| Automatic Speech Recognition of Agglutinative Language based on Lexicon Optimization||[[媒体文件:Mijit-slides-清华大学-2014-11-3.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/10  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/17  ||Dong Wang || Highly restricted keyword spotting for Uyghur using sparse analysis|| [[媒体文件:Highly Restricted Keyword Selection Based on Sparse Analysis.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/24  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/1  ||ZhongDa Xie ||Incorporating Fine-Grained Ontological Relations in Medical Document Ranking || [[媒体文件:Fine-grained_relations.pdf|slides]]|| Lantian Li &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/8  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/15  || 唐国瑜 || Research on Key Technologies for Cross-lingual Topic Analysis ||[[媒体文件:141205-答辩-TGY.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/22  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/29  || Askar || Language Mismatch in Speaker Recognition System||[[媒体文件:141229--askar.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/5  ||Lantian Li || Deep Neural Networks for Speaker Recognition || [[媒体文件:150104_Deep_Neural_Networks_for_Speaker_Recognition_LLT.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/12  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/19  || Dong Wang || Machine Learning Paradigms for Speech Recognition||[[媒体文件:Machine Learning Paradigms for Speech Recognition.pdf|slides]]  [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6423821 paper] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/26  || Chen Guorong || Information Transmission and Distribution on Web ||[[媒体文件:An_introduction_of_complex_network1.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot; |2015/3/9 || Dong Wang || Joint Deep Learning || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/3/16  || Dongxu Zhang || Knowledge learning from text data and knowledge bases || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/4/13  || Xuewei Zhang || Lasso-based Reverberation Suppression In Automatic Speech Recognition || [[媒体文件:Lasso-based Reverberation Suppression In Automatic Speech Recognition.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/5/11  || Dong Wang ||ASR and SID Research Frontier ||[[媒体文件:ASR and SID Research Frontier.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/23  || Zhiyuan Tang|| CTC learning|| [[媒体文件:CTC.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/30  || Mengyuan Zhao|| CNN-based music removal|| [[媒体文件:Music Removal by Convolutional Denoising.pdf | slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/3  || Zhiyuan Tang|| Networks of Memory|| [[媒体文件:Memory_net.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/7  || Yiqiao Pan|| Document Classification with Spherical Word Vectors||[[媒体文件:Document Classification with Spherical Word Vectors.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/14  || Dong Wang || Transfer Learning for Speech and Language Processing ||[[媒体文件:Transfer_Learning_for_Speech_and_Language_Processing.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/21  || Qixin Wang || Attention for poem generation ||[[媒体文件:Ijcai 2016.pptx|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/28  || Lantian Li || Max-margin metric learning for speaker recognition || [[媒体文件:Max-margin-Metric-Learning.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/4  || Zhiyong Zhang || Parallel training, MPE and natural gradient||[[媒体文件:20160104_张之勇_Large-scale Parallel Training in Speech Recognition.pdf|slides]]||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/18  || Dongxu Zhang || Memoryless Document Vector ||[[媒体文件:Memoryless_document_vector.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/14  || Zhiyuan Tang|| Oral presentation for &amp;quot;vMF-SNE: Embedding for Spherical Data&amp;quot;|| [[媒体文件:embedding.pdf|slides]] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/28  || Tianyi Luo || Review for Neural QA || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/29/CSLT_Weekly_Report--20160328.pdf slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/4/11  || Rong Liu || Recommendation in Youku || [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cslt%E5%AE%9E%E9%AA%8C%E5%AE%A4%E4%BA%A4%E6%B5%81.pptx slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/09 || Miao Fan || Learning contextual embeddings of knowledge base with entity descriptions.|| [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9c/Techreport_CSLT_2016_M.F..pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/16 || Yang Wang || Research on conversation thread detection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%AA%E6%B4%8B-%E6%AF%95%E8%AE%BE-CSLT.pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20 || Yang Wang &amp;amp;  Maoning Wang || Research on portfolio selection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/89/%E6%B1%AA%E6%B4%8B-%E9%87%91%E8%9E%8D%E7%AC%AC%E4%B8%80%E6%AC%A1%E5%88%86%E4%BA%AB.pdf slides1]  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%87%E6%8A%A5_%E8%B5%84%E4%BA%A7%E7%BB%84%E5%90%88%E4%B8%AD%E5%87%A0%E4%B8%AA%E8%AF%84%E4%BB%B7%E6%8C%87%E6%A0%87%E7%9A%84%E8%A7%A3%E9%87%8A.pdf slides2]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20  || Zhiyuan Tang || ICASSP 2016 summary || [[媒体文件:Note icassp16.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/23 || Dong Wang || graphical model and neural model || [[媒体文件:Graphic Model and Neural Model.pdf|slides]] [[媒体文件:Generative-Pdf.rar|papers]]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/02 || Zhiyuan Tang || Visualizing, Measuring and Understanding Neural Networks: A Brief Survey|| [[媒体文件:Nn analysis.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/03 || Yang Wang || Neural networks and genetic programming for financial forecasting || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/79/GeneticNN.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/05 || Yang Wang || Reinforcement Learning Models and Simulations || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/RRL_and_sim.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/08 || April Pu || SOFTWARE DEVELOPMENT METHODOLOGIES || [http://wangd.cslt.org/talks/pdf/april_software.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/12 || Yang Wang || Generative Adversarial Nets || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Generative_adversarial_network.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/22 || Zhiyuan Tang || INTERSPEECH 2016 summary || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/65/Interspeech16_review.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/30 || Dong Wang || Deep and sparse learning in speech and language: an overview || [http://wangd.cslt.org/talks/pdf/bics2016.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/2/17 || Yang Wang || Review understanding deep learning requires rethinking generalization || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Review_understanding_deep_learning_requires_rethinking_generalization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/5 || Dong Wang || Deep speech factorization || [http://wangd.cslt.org/talks/pdf/Deep-Speech-Factorization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/12 || Shiyue Zhang || Memory-augmented Neural Machine Translation || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/36/Memory-augmented_Neural_Machine_Translation_.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Memory-augmented_Neural_Machine_Translation_.pptx</id>
		<title>文件:Memory-augmented Neural Machine Translation .pptx</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Memory-augmented_Neural_Machine_Translation_.pptx"/>
				<updated>2017-06-24T01:00:36Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：Memory-augmented_Neural_Machine_Translation report&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Memory-augmented_Neural_Machine_Translation report&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Weekly_meeting</id>
		<title>Weekly meeting</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Weekly_meeting"/>
				<updated>2017-06-24T00:56:49Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*Location: FIT-1-304&lt;br /&gt;
*Time: Monday, 7:00 PM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker!! Title !! Materials !! On duty&lt;br /&gt;
|-&lt;br /&gt;
| 2012/08/27  ||Dong Wang  || Heterogeneous Convolutive Non-negative Sparse Coding ||[[媒体文件:Heterogeneous_convolutive_non-negative_sparse_coding.pdf|slides]] [http://homepages.inf.ed.ac.uk/v1dwang2/public/pdf/inerspeech2012-hetero.pdf paper] ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/03  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/10  || NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/09/17  ||WALEED ABDULLA||Auditory Based Feature Vectors for Speech Recognition ||[[媒体文件:AuditoryBasedFeatureVectors.pdf|slides]]||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2012/09/24  ||刘超|| N-gram FST indexing for Spoken Term Detection || [[媒体文件:120924-N_gram_FST_indexing_for_Spoken_Term_Detection-LC-0.pdf|slides]] ||尹聪&lt;br /&gt;
|-&lt;br /&gt;
|范淼||Micro-blogging, Wikipedia, Folksonomy, What's Next? ||[[媒体文件:120924-Micro-blogging, Wikipedia, Folksonomy, What's Next-FM--01-FM-.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/08 ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| 2012/10/15  ||NO Meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/10/22||Wu Xiaojun||speaker recognition in CSLT ||[[媒体文件:VPR_in_CSLT.pdf|slides]]||卡尔&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/10/29  ||王军||An overview of Automatic Speaker Diarization Systems || [[媒体文件:121027-Speaker Diarization-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/05  ||别凡虎||Experiments on Emotional Speaker Recognition||[[媒体文件:121104-Experiments_on_Emotional_Speaker_Recognition-BFH.pdf|slides]] ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/12  ||唐国瑜||Statistical Word Sense Improves Document Clustering ||[[媒体文件:121112_Statistical_Word_Sense_Improves_Document_Clustering_TGY.pdf‎ |slides]]||邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/19  ||张陈昊||TDSR with Long-term Features Based on Functional Data Analysis||[[媒体文件:121118-ISCSLP-FDA_SR-ZCH.pdf|slides]] ||王俊俊&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/11/26  ||王琳琳||Time-Varying Speaker Recognition: An Introduction||[[媒体文件:121126-Time_Varying_Speaker_Recognition_I-Wll.pdf‎|slides]] ||龚宬&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/03  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/10  ||No meeting|| || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/12/17  ||No meeting|| || ||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/01/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|2012/01/07  ||王军||DF-MAP-based Speaker Model Training||[[媒体文件:130107-基于DFMAP的说话人模型训练方法-WJ.pdf|slides]] ||唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2012/01/14  ||王东|| Computing in CSLT ||[[媒体文件:Computing_in_CSLT.pdf|slides]] ||王琳琳&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/04  ||王军||Sequential Adaptive Learning for Speaker Verification ||[[媒体文件:130301-Sequential adaptive learning for speaker verification-WJ.pdf|slides]] ||别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/11  || Du Jinle|| VAD stuff || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/18  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/03/25  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/08  || 张陈昊|| A Fishervoice based Feature Fusion Method for SUSR ||[[媒体文件:130408-FisherVoice-ZCH.pdf|slides]] ||谢仲达&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/15  ||龚宬|| An Exploration on Influence Factors of VAD's Performance in Speaker Recognition ||[[媒体文件:130415-An_Exploration_on_Influence_Factors_of_VAD-GC.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/22  ||王俊俊 || Understanding the Query: THCIB and THUIS at NTCIR-10 Intent Task ||[[媒体文件:130422-Understanding_the_Query-WJJ.pdf|slides‎]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/04/29  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/06  ||别凡虎 ||MLLR on Emotional Speaker Recognition ||[[媒体文件:130506-MLLR on Emotional Speaker Recognition-BFH.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/13  ||刘超 || The Use of Deep Neural Network for Speech Recognition || [[媒体文件:130513-the_use_of_dnn_for_asr-lc.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/05/27  ||王琳琳|| Research on time-varying robustness in speaker recognition || [[媒体文件:130527-TVSV-Wll.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/03  ||王俊俊|| Research and implementation of a Chinese search result clustering system || [[媒体文件:130601-毕业答辩-02-WJJ.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/10  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/17  ||范淼 || Relation Extraction ||[[媒体文件:130617-relation_extraction-fm.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/06/24  ||唐国瑜 || Incorporating Statistical Word Senses in Topic Model  ||[[媒体文件:130624_Incorporating Statistical Word Senses in Topic Model_TGY.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/01  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/08  ||  || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/07/15  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/09  ||王东 || Research Frontier in Speech Technology||[[媒体文件:Research Frontier in Speech Technology.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/16  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/09/30  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/14  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/21  ||范淼 ||Transduction Classification with Matrix Completion (talk in Chinese)||[[媒体文件: Transduction_Classifiction_with_Matrix_Completion.pdf‎|slides]] [http://pages.cs.wisc.edu/~jerryzhu/pub/mc4ssl_FINAL.pdf paper]|| 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/10/28  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/04  || 王军 || i-vector-based intersession compensation and scoring methods (survey) || [[媒体文件:131104-ivecto下intersession补偿及打分方法--01-WJ-.pdf‎|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/11  ||张陈昊 ||Introduction to PLDA and its application in speaker recognition ||[[媒体文件:PLDA.pdf|slides]] || 唐国瑜&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/18  ||别凡虎 ||Introduction to i-vector theory (discussion)||[[媒体文件:131118-i-vector_and_GMM-UBM-BFH.pdf|slides]]‎  ||王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/11/25  ||刘超 || Pruning Neural Networks by Optimal Brain Damage (survey)||[[媒体文件:131125-OBD-LC-01.pdf|slides]] ||范淼&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/02  ||范淼 ||Distant Supervision for Relation Extraction with Matrix Completion (talk in English)||[[媒体文件:131202-DRMC-FM-01.pdf|slides]] || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/09  || Dong Wang|| Introduction to the HMM-based speech synthesis||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/16  ||张陈昊 ||Introduction to basic units in speech research ||[[媒体文件:131215-Phonology-ZCH.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || Dong Wang|| Introduction to the HMM-based speech synthesis (2)||[http://hts.sp.nitech.ac.jp/archives/2.2/HTS_Slides.zip slides] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/23  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2013/12/30  ||刘荣 || continuous space language model||[[媒体文件:Cslm-cslt.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/06  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/13  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/01/20  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/02/24  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/03  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/10  ||范淼|| Distant Supervision for Information Extraction (talk in English)|| || 李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/17  ||唐国瑜 || Topic Models Incorporating Statistical Word Senses || [[媒体文件:TMISWS_For_CICLing2014.pdf|slides]]||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/24  ||孟祥涛 || Noisy training for Deep Neural Networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/03/31  ||范淼|| Translating Embeddings for Modeling Multi-relational Data (talk in Chinese) || [https://www.hds.utc.fr/everest/lib/exe/fetch.php?id=en%3Atranse&amp;amp;cache=cache&amp;amp;media=en:cr_paper_nips13.pdf paper]||李蓝天&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/07  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/14  || Wang Jun|| I-vector and PLDA in depth ||[[媒体文件:131104-ivector-microsoft-wj.pdf|slides]]  ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/21  || 邱晗||Normalization of Chinese event sentence patterns ||[[媒体文件:140421-汉语事件句式规范化-QH.pdf‎|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/04/28  || 唐国瑜|| Some papers in CICLing2014 ||[[媒体文件:Some_papers_in_CICling2014.pdf|slides]]  ||刘超&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/05  || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/05/12  || 卡尔|| paper introduction || [[媒体文件:Acoustic Factor Analysis.pdf|slides]] || 邱晗&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot;|2014/05/19  || 邱晗|| CCG derivation tree reconstruction for Chinese event sentences ||[[媒体文件:140519-CCG_reConstruction.pdf‎|slides]]‎|| 卡尔&lt;br /&gt;
|-&lt;br /&gt;
|Liu Chao|| master proposal: sparse and deep neural networks || [[媒体文件:140519-proposal-LC-01.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| || Liu Chao|| 2nd master proposal: sparse and deep neural networks|| ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/16  || 别凡虎 || Truncated Wave based VPR and Some Recent Work || [[媒体文件:140614-Truncated_Speech_based_VPR.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/06/23  || 别凡虎 || Block-wise training for I-vector || [[媒体文件:140623-Block-wise training for I-vector.pdf‎|slides]]‎ || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/07/07||王军 ||Discriminative Scoring for Speaker Recognition Based on I-vectors || [[媒体文件:140707-work_report.pdf|slides]]|| 王军&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/01|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/09 ||别凡虎 ||Research on Truncated Wave based VPR||[[媒体文件:140909-Truncated Speech based VPR.pdf|slides]] || 别凡虎&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/15|| || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/09/22  || Miao Fan|| Large-scale Entity Relation Extraction based on Low-dimensional Representations (talk in Chinese, PhD thesis proposal)&lt;br /&gt;
||[[媒体文件:基于低维表示的大规模实体关系挖掘技术.pdf‎|slides]] || Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;| 2014/09/29 || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/13  || Miao Fan|| The Frontier of Knowledge Embedding (talk in English)|| [[媒体文件:The_Frontier_of_Knowledge_Embedding.pdf‎|slides]]|| Lantian Li&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/20  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/27  || Li Yi || Phonemes, Features, and Syllables: Converting Onset and Rime Inventories to Consonants and Vowels||[[媒体文件:Lanzhou Phonemes, Features, and Syllables- fianl.pdf|paper]] [[媒体文件:Syllables and phonemes - 20141027.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/3   || 米吉提|| Automatic Speech Recognition of Agglutinative Language based on Lexicon Optimization||[[媒体文件:Mijit-slides-清华大学-2014-11-3.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/10  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/17  ||Dong Wang || Highly restricted keyword spotting for Uyghur using sparse analysis|| [[媒体文件:Highly Restricted Keyword Selection Based on Sparse Analysis.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/11/24  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/1  ||ZhongDa Xie ||Incorporating Fine-Grained Ontological Relations in Medical Document Ranking || [[媒体文件:Fine-grained_relations.pdf|slides]]|| Lantian Li &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/8  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/15  || 唐国瑜 || Research on key technologies for cross-lingual topic analysis ||[[媒体文件:141205-答辩-TGY.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/22  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/12/29  || Askar || Language Mismatch in Speaker Recognition System||[[媒体文件:141229--askar.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/5  ||Lantian Li || Deep Neural Networks for Speaker Recognition || [[媒体文件:150104_Deep_Neural_Networks_for_Speaker_Recognition_LLT.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/12  || || || || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/19  || Dong Wang || Machine Learning Paradigms for Speech Recognition||[[媒体文件:Machine Learning Paradigms for Speech Recognition.pdf|slides]]  [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6423821 paper] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/1/26  || Chen Guorong || Information Transmission and Distribution on Web ||[[媒体文件:An_introduction_of_complex_network1.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot; |2015/3/9 || Dong Wang || Joint Deep Learning || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/3/16  || Dongxu Zhang || Knowledge learning from text data and knowledge bases || [[媒体文件:Joint Deep Learning.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/4/13  || Xuewei Zhang || Lasso-based Reverberation Suppression In Automatic Speech Recognition || [[媒体文件:Lasso-based Reverberation Suppression In Automatic Speech Recognition.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/5/11  || Dong Wang ||ASR and SID Research Frontier ||[[媒体文件:ASR and SID Research Frontier.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/23  || Zhiyuan Tang|| CTC learning|| [[媒体文件:CTC.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/11/30  || Mengyuan Zhao|| CNN-based music removal|| [[媒体文件:Music Removal by Convolutional Denoising.pdf | slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/3  || Zhiyuan Tang|| Networks of Memory|| [[媒体文件:Memory_net.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/7  || Yiqiao Pan|| Document Classification with Spherical Word Vectors||[[媒体文件:Document Classification with Spherical Word Vectors.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/14  || Dong Wang || Transfer Learning for Speech and Language Processing ||[[媒体文件:Transfer_Learning_for_Speech_and_Language_Processing.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/21  || Qixin Wang || Attention for poem generation ||[[媒体文件:Ijcai 2016.pptx|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2015/12/28  || Lantian Li || Max-margin metric learning for speaker recognition || [[媒体文件:Max-margin-Metric-Learning.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/4  || Zhiyong Zhang || Parallel training, MPE and natural gradient||[[媒体文件:20160104_张之勇_Large-scale Parallel Training in Speech Recognition.pdf|slides]]||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/1/18  || Dongxu Zhang || Memoryless Document Vector ||[[媒体文件:Memoryless_document_vector.pdf|slides]]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/14  || Zhiyuan Tang|| Oral presentation for &amp;quot;vMF-SNE: Embedding for Spherical Data&amp;quot;|| [[媒体文件:embedding.pdf|slides]] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/3/28  || Tianyi Luo || Review for Neural QA || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/29/CSLT_Weekly_Report--20160328.pdf slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/4/11  || Rong Liu || Recommendation in Youku || [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cslt%E5%AE%9E%E9%AA%8C%E5%AE%A4%E4%BA%A4%E6%B5%81.pptx slides] ||  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/09 || Miao Fan || Learning contextual embeddings of knowledge base with entity descriptions.|| [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9c/Techreport_CSLT_2016_M.F..pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/16 || Yang Wang || Research on conversation thread detection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%AA%E6%B4%8B-%E6%AF%95%E8%AE%BE-CSLT.pdf slides]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20 || Yang Wang &amp;amp;  Maoning Wang || Research on portfolio selection. || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/89/%E6%B1%AA%E6%B4%8B-%E9%87%91%E8%9E%8D%E7%AC%AC%E4%B8%80%E6%AC%A1%E5%88%86%E4%BA%AB.pdf slides1]  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/%E6%B1%87%E6%8A%A5_%E8%B5%84%E4%BA%A7%E7%BB%84%E5%90%88%E4%B8%AD%E5%87%A0%E4%B8%AA%E8%AF%84%E4%BB%B7%E6%8C%87%E6%A0%87%E7%9A%84%E8%A7%A3%E9%87%8A.pdf slides2]|| &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/20  || Zhiyuan Tang || ICASSP 2016 summary || [[媒体文件:Note icassp16.pdf|slides]] ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/5/23 || Dong Wang || graphical model and neural model || [[媒体文件:Graphic Model and Neural Model.pdf|slides]] [[媒体文件:Generative-Pdf.rar|papers]]  || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/02 || Zhiyuan Tang || Visualizing, Measuring and Understanding Neural Networks: A Brief Survey|| [[媒体文件:Nn analysis.pdf|slides]] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/8/03 || Yang Wang || Neural networks and genetic programming for financial forecasting || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/79/GeneticNN.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/05 || Yang Wang || Reinforcement Learning Models and Simulations || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/RRL_and_sim.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/08 || April Pu || Software Development Methodologies || [http://wangd.cslt.org/talks/pdf/april_software.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/12 || Yang Wang || Generative Adversarial Nets || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c9/Generative_adversarial_network.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/22 || Zhiyuan Tang || INTERSPEECH 2016 summary || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/65/Interspeech16_review.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2016/11/30 || Dong Wang || Deep and sparse learning in speech and language: an overview || [http://wangd.cslt.org/talks/pdf/bics2016.pptx slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/2/17 || Yang Wang || Review of &amp;quot;Understanding deep learning requires rethinking generalization&amp;quot; || [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Review_understanding_deep_learning_requires_rethinking_generalization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/5 || Dong Wang || Deep speech factorization || [http://wangd.cslt.org/talks/pdf/Deep-Speech-Factorization.pdf slides] || &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2017/6/12 || Shiyue Zhang || Memory-augmented Neural Machine Translation || [ slides] || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-19</id>
		<title>NLP Status Report 2017-6-19</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-19"/>
				<updated>2017-06-19T00:44:03Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy: created page with content “{| class=&amp;quot;wikitable&amp;quot; !Date !! People !! Last Week !! This Week |- | rowspan=&amp;quot;6&amp;quot;|2017/6/19 |Jiyuan Zhang || ||  |- |Aodong LI || || |- |Shiyue Zhang ||  * finished th...”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/19&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* finished the AP17 paper&lt;br /&gt;
||&lt;br /&gt;
* share Google's new paper&lt;br /&gt;
* deliver the NMT baseline code &lt;br /&gt;
* deliver the M-NMT code &lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2017-6-19</id>
		<title>2017-6-19</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2017-6-19"/>
				<updated>2017-06-19T00:34:47Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy: created page with content “NLP Status Report 2017-6-19  ASR Status Report 2017-6-19  FIN Status Report 2017-6-19”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[NLP Status Report 2017-6-19]]&lt;br /&gt;
&lt;br /&gt;
[[ASR Status Report 2017-6-19]]&lt;br /&gt;
&lt;br /&gt;
[[FIN Status Report 2017-6-19]]&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Status_report</id>
		<title>Status report</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Status_report"/>
				<updated>2017-06-19T00:34:17Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[2017-6-19]]&lt;br /&gt;
&lt;br /&gt;
[[2017-6-12]]&lt;br /&gt;
&lt;br /&gt;
[[2017-6-5]]&lt;br /&gt;
&lt;br /&gt;
[[2017-5-31]]&lt;br /&gt;
&lt;br /&gt;
[[2017-5-22]]&lt;br /&gt;
&lt;br /&gt;
[[2017-5-15]]&lt;br /&gt;
&lt;br /&gt;
[[2017-5-8]]&lt;br /&gt;
&lt;br /&gt;
[[2017-5-2]]&lt;br /&gt;
&lt;br /&gt;
[[2017-4-24]]&lt;br /&gt;
&lt;br /&gt;
[[2017-4-17]]&lt;br /&gt;
&lt;br /&gt;
[[2017-4-10]]&lt;br /&gt;
&lt;br /&gt;
[[2017-4-5]]&lt;br /&gt;
&lt;br /&gt;
[[2017-3-27]]&lt;br /&gt;
&lt;br /&gt;
[[2017-3-20]]&lt;br /&gt;
&lt;br /&gt;
[[2017-3-13]]&lt;br /&gt;
&lt;br /&gt;
[[2017-3-6]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-27]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-20]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-13]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-6]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-30]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-23]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-16]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-10]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-3]]&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-12</id>
		<title>NLP Status Report 2017-6-12</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-12"/>
				<updated>2017-06-19T00:33:34Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy: created page with content “{| class=&amp;quot;wikitable&amp;quot; !Date !! People !! Last Week !! This Week |- | rowspan=&amp;quot;6&amp;quot;|2017/6/5 |Jiyuan Zhang || ||  |- |Aodong LI || || |- |Shiyue Zhang ||  * trained mem...”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* trained the memory model without EOS and UNK, and got better performance&lt;br /&gt;
* wrote the introduction and related work sections of the paper&lt;br /&gt;
||&lt;br /&gt;
* do more experiments and write paper&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5</id>
		<title>NLP Status Report 2017-6-5</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5"/>
				<updated>2017-06-05T06:04:02Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
* Small data:&lt;br /&gt;
  Keep only the English encoder's embedding constant -- 45.98&lt;br /&gt;
  Only initialize the English encoder's embedding, then finetune it -- 46.06&lt;br /&gt;
  Share the attention mechanism and then directly add them -- 46.20&lt;br /&gt;
* big-data baseline BLEU = '''30.83'''&lt;br /&gt;
* Model with three fixed embeddings:&lt;br /&gt;
  Shrinking the output vocab from 30000 to 20000 gives a best result of 31.53&lt;br /&gt;
  Training with batch size 40 gives a best result so far of 30.63&lt;br /&gt;
&lt;br /&gt;
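The fixed-embedding variants above come down to one mechanism: initialize an embedding matrix from pretrained vectors and exclude it from parameter updates. A minimal sketch of that idea with plain SGD; all names and sizes here are illustrative, not the project's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
params = {
    "enc_embedding": rng.normal(size=(3000, 64)),  # initialized from pretrained vectors
    "out_proj": rng.normal(size=(64, 3000)),
}
frozen = {"enc_embedding"}  # tensors listed here keep their pretrained values

def sgd_step(params, grads, lr=0.1):
    # Update every parameter except the frozen ones, in place
    for name, grad in grads.items():
        if name not in frozen:
            params[name] -= lr * grad
    return params
```

Dropping "enc_embedding" from the frozen set recovers the initialize-then-finetune variant.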
||&lt;br /&gt;
* test more checkpoints on model trained with batch = 40&lt;br /&gt;
* train model with reverse output&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* trained word2vec on big data and used it directly in NMT, which resulted in quite poor performance&lt;br /&gt;
* trained the M-NMT model and got BLEU=36.58 (+1.34 over NMT), but found the EOS entry in the memory strongly influences the result:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! NMT&lt;br /&gt;
! 35.24, 57.7/39.8/31.9/27.0 BP=0.939&lt;br /&gt;
|-&lt;br /&gt;
|MNMT (EOS=1)&lt;br /&gt;
| 35.27, 60.0/41.3/33.1/28.0 BP=0.907&lt;br /&gt;
|-&lt;br /&gt;
| MNMT (EOS=0.2)&lt;br /&gt;
| 36.40, 59.1/40.8/32.6/27.4 BP=0.951&lt;br /&gt;
|-&lt;br /&gt;
| MNMT (EOS=0)&lt;br /&gt;
| 36.58, 58.4/40.4/32.1/27.0 BP=0.968&lt;br /&gt;
|}&lt;br /&gt;
* tried to tackle UNK with the 36.58 M-NMT model: increased the vocab to 50000 and got BLEU=35.63, 58.6/40.0/31.6/26.4 BP=0.953 (not good; unclear why)&lt;br /&gt;
* training uy-zh, 50% zh-uy, 25% zh-uy&lt;br /&gt;
* training mem without EOS&lt;br /&gt;
* reviewing related papers&lt;br /&gt;
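The scores above follow the usual multi-bleu layout: overall BLEU, the 1- to 4-gram precisions, and the brevity penalty (BP). For reference, a small Python sketch of how the overall score is assembled from those parts with the standard BLEU formula; the sample numbers are the NMT baseline row from the table above:

```python
import math

def brevity_penalty(cand_len, ref_len):
    # BP = min(1, exp(1 - r/c)); only candidates shorter than the reference are penalized
    return min(1.0, math.exp(1.0 - ref_len / cand_len))

def bleu_from_precisions(precisions, bp):
    # BLEU = BP * geometric mean of the n-gram precisions
    log_mean = sum(math.log(p) for p in precisions) / len(precisions)
    return bp * math.exp(log_mean)

# NMT baseline row: 35.24, 57.7/39.8/31.9/27.0 BP=0.939
score = 100 * bleu_from_precisions([0.577, 0.398, 0.319, 0.270], 0.939)
```

With the unrounded precisions this reconstruction matches the reported 35.24; the rounded ones land within a few hundredths.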
||&lt;br /&gt;
* solve EOS problem&lt;br /&gt;
* find way to tackle UNK&lt;br /&gt;
* write paper&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5</id>
		<title>NLP Status Report 2017-6-5</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5"/>
				<updated>2017-06-05T06:01:33Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
* Small data:&lt;br /&gt;
  Keep only the English encoder's embedding constant -- 45.98&lt;br /&gt;
  Only initialize the English encoder's embedding, then finetune it -- 46.06&lt;br /&gt;
  Share the attention mechanism and then directly add them -- 46.20&lt;br /&gt;
* big-data baseline BLEU = '''30.83'''&lt;br /&gt;
* Model with three fixed embeddings:&lt;br /&gt;
  Shrinking the output vocab from 30000 to 20000 gives a best result of 31.53&lt;br /&gt;
  Training with batch size 40 gives a best result so far of 30.63&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
* test more checkpoints on model trained with batch = 40&lt;br /&gt;
* train model with reverse output&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* trained word2vec on big data and used it directly in NMT, which resulted in quite poor performance&lt;br /&gt;
* trained the M-NMT model and got BLEU=36.58 (+1.34 over NMT), but found the EOS entry in the memory strongly influences the result:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! NMT&lt;br /&gt;
! 35.24, 57.7/39.8/31.9/27.0 BP=0.939&lt;br /&gt;
|-&lt;br /&gt;
|MNMT (EOS=1)&lt;br /&gt;
| 35.27, 60.0/41.3/33.1/28.0 BP=0.907&lt;br /&gt;
|-&lt;br /&gt;
| MNMT (EOS=0.2)&lt;br /&gt;
| 36.40, 59.1/40.8/32.6/27.4 BP=0.951&lt;br /&gt;
|-&lt;br /&gt;
| MNMT (EOS=0)&lt;br /&gt;
| 36.58, 58.4/40.4/32.1/27.0 BP=0.968&lt;br /&gt;
|}&lt;br /&gt;
* tried to tackle UNK with the 36.58 M-NMT model: increased the vocab to 50000 and got BLEU=35.63, 58.6/40.0/31.6/26.4 BP=0.953 (not good; unclear why)&lt;br /&gt;
* training uy-zh, 50% zh-uy, 25% zh-uy&lt;br /&gt;
* training mem without EOS&lt;br /&gt;
* reviewing related papers&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5</id>
		<title>NLP Status Report 2017-6-5</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5"/>
				<updated>2017-06-05T05:56:30Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
* Small data:&lt;br /&gt;
  Keep only the English encoder's embedding constant -- 45.98&lt;br /&gt;
  Only initialize the English encoder's embedding, then finetune it -- 46.06&lt;br /&gt;
  Share the attention mechanism and then directly add them -- 46.20&lt;br /&gt;
* big-data baseline BLEU = '''30.83'''&lt;br /&gt;
* Model with three fixed embeddings:&lt;br /&gt;
  Shrinking the output vocab from 30000 to 20000 gives a best result of 31.53&lt;br /&gt;
  Training with batch size 40 gives a best result so far of 30.63&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
* test more checkpoints on model trained with batch = 40&lt;br /&gt;
* train model with reverse output&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* trained word2vec on big data and used it directly in NMT, which resulted in quite poor performance&lt;br /&gt;
* trained the M-NMT model and got BLEU=36.58 (+1.34 over NMT), but found the EOS entry in the memory strongly influences the result:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! NMT&lt;br /&gt;
! 35.24, 57.7/39.8/31.9/27.0 BP=0.939&lt;br /&gt;
|-&lt;br /&gt;
|MNMT (EOS=1)&lt;br /&gt;
| 35.27, 60.0/41.3/33.1/28.0 BP=0.907&lt;br /&gt;
|-&lt;br /&gt;
| MNMT (EOS=0.2)&lt;br /&gt;
| 36.40, 59.1/40.8/32.6/27.4 BP=0.951&lt;br /&gt;
|-&lt;br /&gt;
| MNMT (EOS=0)&lt;br /&gt;
| 36.58, 58.4/40.4/32.1/27.0 BP=0.968&lt;br /&gt;
|}&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5</id>
		<title>NLP Status Report 2017-6-5</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-6-5"/>
				<updated>2017-06-05T05:52:56Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/6/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
* Small data:&lt;br /&gt;
  Keep only the English encoder's embedding constant -- 45.98&lt;br /&gt;
  Only initialize the English encoder's embedding, then finetune it -- 46.06&lt;br /&gt;
  Share the attention mechanism and then directly add them -- 46.20&lt;br /&gt;
* big-data baseline BLEU = '''30.83'''&lt;br /&gt;
* Model with three fixed embeddings:&lt;br /&gt;
  Shrinking the output vocab from 30000 to 20000 gives a best result of 31.53&lt;br /&gt;
  Training with batch size 40 gives a best result so far of 30.63&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
* test more checkpoints on model trained with batch = 40&lt;br /&gt;
* train model with reverse output&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* trained word2vec on big data and used it directly in NMT, which resulted in quite poor performance&lt;br /&gt;
* trained the M-NMT model and got BLEU=36.58 (+1.34 over NMT), but found the EOS entry in the memory strongly influences the result:&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31</id>
		<title>NLP Status Report 2017-5-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31"/>
				<updated>2017-05-31T04:22:41Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/5/31&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* found a dropout bug, fixed it, and reran the baselines: baseline 35.21, baseline (outproj=emb) 35.24 &lt;br /&gt;
* tried several embedding set models, but all failed&lt;br /&gt;
* embedded other words into the model embedding space (trained on the training data, not the big data), then used them directly in baseline (outproj=emb) &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! 30000&lt;br /&gt;
! 50000&lt;br /&gt;
! 70000&lt;br /&gt;
! 90000&lt;br /&gt;
|-&lt;br /&gt;
| 35.24&lt;br /&gt;
| 34.52&lt;br /&gt;
| 33.73&lt;br /&gt;
| 33.16&lt;br /&gt;
|-&lt;br /&gt;
| 4564 (6666)&lt;br /&gt;
| 4535&lt;br /&gt;
| 4469&lt;br /&gt;
| 4426&lt;br /&gt;
|}&lt;br /&gt;
* m-nmt is running&lt;br /&gt;
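One plausible reading of the "embedded other words into the model embedding space" step above: fit a linear map from the external word2vec space onto the NMT embedding space over the shared vocabulary, then project out-of-vocabulary words through it. A hedged numpy sketch; every shape and name here is illustrative, not the project's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
w2v_shared = rng.normal(size=(1000, 100))  # word2vec vectors for words shared with the NMT vocab
nmt_shared = rng.normal(size=(1000, 256))  # the NMT model's embeddings for the same words
w2v_oov = rng.normal(size=(50, 100))       # word2vec vectors for words outside the NMT vocab

# Fit W minimizing the squared error of (w2v_shared @ W) against nmt_shared
W, _, _, _ = np.linalg.lstsq(w2v_shared, nmt_shared, rcond=None)
nmt_oov = w2v_oov @ W                      # projected embeddings for the extra words
```

The projected vectors can then be appended to the embedding/output-projection matrix, which is one way to grow the vocabulary as in the 50000/70000/90000 columns above.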
||&lt;br /&gt;
* train word2vec on the big data and compare it with word2vec from the training data&lt;br /&gt;
* test the m-nmt model; increase the vocab size and test again&lt;br /&gt;
* review related zh-uy/uy-zh work and start writing the paper&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31</id>
		<title>NLP Status Report 2017-5-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31"/>
				<updated>2017-05-31T04:21:29Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/5/22&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* found a dropout bug, fixed it, and reran the baseline: baseline 35.21, baseline(outproj=emb) 35.24 &lt;br /&gt;
* tried several embedding-set models; all failed&lt;br /&gt;
* embedded other words into the model's embedding space (trained on the training data, not the big data), then used them directly in baseline(outproj=emb) &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! 30000&lt;br /&gt;
! 50000&lt;br /&gt;
! 70000&lt;br /&gt;
! 90000&lt;br /&gt;
|-&lt;br /&gt;
| 35.24&lt;br /&gt;
| 34.52&lt;br /&gt;
| 33.73&lt;br /&gt;
| 33.16&lt;br /&gt;
|-&lt;br /&gt;
| 4564 (6666)&lt;br /&gt;
| 4535&lt;br /&gt;
| 4469&lt;br /&gt;
| 4426&lt;br /&gt;
|}&lt;br /&gt;
* m-nmt is running&lt;br /&gt;
||&lt;br /&gt;
* train word2vec on the big data and compare it with word2vec from the training data&lt;br /&gt;
* test the m-nmt model; increase the vocab size and test again&lt;br /&gt;
* review related zh-uy/uy-zh work and start writing the paper&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31</id>
		<title>NLP Status Report 2017-5-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31"/>
				<updated>2017-05-31T04:17:24Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/5/22&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* found a dropout bug, fixed it, and reran the baseline: baseline 35.21, baseline(outproj=emb) 35.24 &lt;br /&gt;
* tried several embedding-set models; all failed&lt;br /&gt;
* embedded other words into the model's embedding space (trained on the training data, not the big data), then used them directly in baseline(outproj=emb) &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! 30000&lt;br /&gt;
! 50000&lt;br /&gt;
! 70000&lt;br /&gt;
! 90000&lt;br /&gt;
|-&lt;br /&gt;
| 35.24&lt;br /&gt;
| 34.52&lt;br /&gt;
| 33.73&lt;br /&gt;
| 33.16&lt;br /&gt;
|-&lt;br /&gt;
| 4564 (6666)&lt;br /&gt;
| 4535&lt;br /&gt;
| 4469&lt;br /&gt;
| 4426&lt;br /&gt;
|}&lt;br /&gt;
* m-nmt is running&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31</id>
		<title>NLP Status Report 2017-5-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31"/>
				<updated>2017-05-31T04:16:19Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/5/22&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* found a dropout bug, fixed it, and reran the baseline: baseline 35.21, baseline(outproj=emb) 35.24 &lt;br /&gt;
* tried several embedding-set models; all failed&lt;br /&gt;
* embedded other words into the model's embedding space (trained on the training data, not the big data), then used them directly in baseline(outproj=emb) &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! 30000&lt;br /&gt;
! 50000&lt;br /&gt;
! 70000&lt;br /&gt;
! 90000&lt;br /&gt;
|-&lt;br /&gt;
| 35.24&lt;br /&gt;
| 34.52&lt;br /&gt;
| 33.73&lt;br /&gt;
| 33.16&lt;br /&gt;
|-&lt;br /&gt;
| 4564 (6666)&lt;br /&gt;
| 4535&lt;br /&gt;
| 4469&lt;br /&gt;
| 4426&lt;br /&gt;
|}&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31</id>
		<title>NLP Status Report 2017-5-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-31"/>
				<updated>2017-05-31T04:11:55Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：以“{| class=&amp;quot;wikitable&amp;quot; !Date !! People !! Last Week !! This Week |- | rowspan=&amp;quot;6&amp;quot;|2017/5/22 |Jiyuan Zhang || ||  |- |Aodong LI ||  ||  |- |Shiyue Zhang ||  * found dro...”为内容创建页面&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/5/22&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* found a dropout bug, fixed it, and reran the baseline: baseline 35.21, baseline(outproj=emb) 35.24 &lt;br /&gt;
* tried several embedding-set models; all failed&lt;br /&gt;
* embedded other words into the model's embedding space (trained on the training data, not the big data), then used them directly in baseline(outproj=emb) &lt;br /&gt;
** 30000   35.24&lt;br /&gt;
** &lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-22</id>
		<title>NLP Status Report 2017-5-22</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-22"/>
				<updated>2017-05-24T01:54:33Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/5/22&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* tried not training the embeddings, using external word vectors instead&lt;br /&gt;
* most of these attempts gave bad results; only the 3-layer RNN + no-dropout model reached 25.54 BLEU, about 2 points worse than the original baseline&lt;br /&gt;
* trained the original baseline on the new data (which fixes the reversed-sentence problem): BLEU=27.88, vs. Moses BLEU=32.47&lt;br /&gt;
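Using external word vectors without training them amounts to excluding the embedding table from the parameter update. A toy numpy sketch of that idea follows; all names and values are illustrative assumptions, not details from the report.

```python
import numpy as np

emb = np.array([[0.1, 0.2], [0.3, 0.4]])  # external word vectors, kept frozen
proj = np.ones((2, 2))                    # a trainable projection layer

ids = np.array([0, 1, 0])  # a toy token-id sequence
x = emb[ids]               # embedding lookup (no gradient will flow here)
h = x @ proj               # trainable projection applied to the embeddings

# Toy gradient step: update only the trainable projection.
grad_proj = x.T @ np.ones_like(h)  # stand-in upstream gradient of all ones
proj -= 0.01 * grad_proj
# emb is deliberately never updated: the external vectors stay fixed.
```

In a real toolkit this corresponds to marking the embedding variable as non-trainable so the optimizer skips it.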
||&lt;br /&gt;
* try more models to match the original baseline's results on the new data&lt;br /&gt;
* train the m-nmt model on the new data &lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
* learned the implementation of the seq2seq model&lt;br /&gt;
* read the tf_translate code&lt;br /&gt;
||&lt;br /&gt;
* understand the main parts of the code&lt;br /&gt;
* start writing documentation&lt;br /&gt;
|-&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-22</id>
		<title>NLP Status Report 2017-5-22</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-22"/>
				<updated>2017-05-22T03:29:19Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* tried not training the embeddings, using external word vectors instead&lt;br /&gt;
* most of these attempts gave bad results; only the 3-layer RNN + no-dropout model reached 25.54 BLEU, about 2 points worse than the original baseline&lt;br /&gt;
* trained the original baseline on the new data (which fixes the reversed-sentence problem): BLEU=27.88, vs. Moses BLEU=32.47&lt;br /&gt;
||&lt;br /&gt;
* try more models to match the original baseline's results on the new data&lt;br /&gt;
* train the m-nmt model on the new data &lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-22</id>
		<title>NLP Status Report 2017-5-22</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-22"/>
				<updated>2017-05-22T03:29:03Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：以“{| class=&amp;quot;wikitable&amp;quot; !Date !! People !! Last Week !! This Week |- | rowspan=&amp;quot;6&amp;quot;|2017/4/5 |Jiyuan Zhang || ||  |- |Aodong LI ||  || |- |Shiyue Zhang ||  * tried to no...”为内容创建页面&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* tried not training the embeddings, using external word vectors instead&lt;br /&gt;
* most of these attempts gave bad results; only the 3-layer RNN + no-dropout model reached 25.54 BLEU, about 2 points worse than the original baseline&lt;br /&gt;
* trained the original baseline on the new data (which fixes the reversed-sentence problem): BLEU=27.88, vs. Moses BLEU=32.47&lt;br /&gt;
||&lt;br /&gt;
* try more models to match the original baseline's results on the new data&lt;br /&gt;
* train the m-nmt model on the new data &lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
* configured the environment and ran the tf_translate code &lt;br /&gt;
* read machine translation papers &lt;br /&gt;
* learned the LSTM and seq2seq models&lt;br /&gt;
||&lt;br /&gt;
* learn the implementation of the seq2seq model&lt;br /&gt;
* read the tf_translate code &lt;br /&gt;
* understand the main parts of the code&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Reading_table</id>
		<title>Reading table</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Reading_table"/>
				<updated>2017-05-18T02:51:49Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker !! Materials  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/22  ||Zhang Dong Xu|| Why RNN? [[媒体文件:Why_LSTM.pdf|PPT]] [[媒体文件:Learning_Long-Term_Dependencies_with_Gradient_Descent_is_Difficult.pdf|paper 1]],[[媒体文件:LongShortTermMemory.pdf|paper  2]]&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot;| 2014/12/8 || rowspan='3'|Liu Rong || Yu Zhao, Zhiyuan Liu, Maosong Sun. Phrase Type Sensitive Tensor Indexing Model for Semantic Composition. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_tim.pdf pdf]&lt;br /&gt;
|-&lt;br /&gt;
| Yang Liu, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun. Topical Word Embeddings. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_twe.pdf pdf][https://github.com/largelymfs/topical_word_embeddings code]&lt;br /&gt;
|-&lt;br /&gt;
|  Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, Xuan Zhu. Learning Entity and Relation Embeddings for Knowledge Graph Completion. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_transr.pdf pdf][https://github.com/mrlyk423/relation_extraction code]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/10 ||Liu Rong|| &lt;br /&gt;
*Context-Dependent Translation Selection Using Convolutional Neural Network [http://arxiv.org/abs/1503.02357]&lt;br /&gt;
*Syntax-based Deep Matching of Short Texts [http://arxiv.org/abs/1503.02427]&lt;br /&gt;
*Convolutional Neural Network Architectures for Matching Natural Language Sentences[http://www.hangli-hl.com/uploads/3/1/6/8/3168008/hu-etal-nips2014.pdf]&lt;br /&gt;
*LSTM: A Search Space Odyssey [http://arxiv.org/pdf/1503.04069.pdf]&lt;br /&gt;
*A Deep Embedding Model for Co-occurrence Learning  [http://arxiv.org/abs/1504.02824]&lt;br /&gt;
*Text segmentation based on semantic word embeddings[http://arxiv.org/abs/1503.05543]&lt;br /&gt;
*Semantic Parsing via Paraphrasing [http://www.cs.tau.ac.il/research/jonathan.berant/homepage_files/publications/ACL14.pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/22 ||Dong Wang|| &lt;br /&gt;
*From Word Embeddings To Document Distances [http://jmlr.org/proceedings/papers/v37/kusnerb15.pdf pdf]&lt;br /&gt;
*[[Asr-read-icml|Reading list for ICML2015]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/29 ||Xiaoxi Wang|| &lt;br /&gt;
* Sequence to Sequence Learning with Neural Networks [http://papers.nips.cc/paper/5346-information-based-learning-by-agents-in-unbounded-state-spaces pdf]&lt;br /&gt;
* Neural Machine Translation by Jointly Learning to Align and Translate [http://arxiv.org/abs/1409.0473 pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/05 ||Tianyi Luo|| &lt;br /&gt;
* A Hierarchical Knowledge Representation for Expert Finding on Social Media(ACL 2015 short paper) [[http://aclanthology.info/papers/a-hierarchical-knowledge-representation-for-expert-finding-on-social-media pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/05 ||Dongxu Zhang||&lt;br /&gt;
* Describing Multimedia Content using Attention-based Encoder-Decoder Networks[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e0/Describing_Multimedia_Content_using_Attention-based_Encoder-Decoder_Networks.pdf]&lt;br /&gt;
* Attention-Based Models for Speech Recognition[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/58/Attention-Based_Models_for_Speech_Recognition.pdf] details in speech recognition.&lt;br /&gt;
* Neural Machine Translation by Jointly Learning to Align and Translate[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c3/Neural_Machine_Translation_by_Joint_Learning_to_Align_and_Translate.pdf] details in machine translation.&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/07 ||Chao Xing|| &lt;br /&gt;
* Neural Word Embedding as Implicit Matrix Factorization [[http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization.pdf pdf]]&lt;br /&gt;
* Matrix factorization techniques for recommender systems [[http://www.columbia.edu/~jwp2128/Teaching/W4721/papers/ieeecomputer.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/14 ||Tianyi Luo, Dongxu Zhang, Chao Xing|| &lt;br /&gt;
* MEMORY NETWORKS(ICLR 2015) [[http://arxiv.org/pdf/1410.3916v10.pdf pdf]]&lt;br /&gt;
* End-To-End Memory Networks(NIPS 2015) [[http://arxiv.org/pdf/1503.08895v4.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/20 ||Tianyi Luo, Xiaoxi Wang|| &lt;br /&gt;
* The Kendall and Mallows Kernels for Permutations (ICML 2015) [[http://jmlr.csail.mit.edu/proceedings/papers/v37/jiao15.pdf pdf]]&lt;br /&gt;
* The ordering of expression among a few genes can provide simple cancer biomarkers and signal BRCA1 mutations (BMC Bioinformatics) [[http://www.biomedcentral.com/content/pdf/1471-2105-10-256.pdf pdf]]&lt;br /&gt;
* Reasoning about Entailment with Neural Attention [[http://arxiv.org/pdf/1509.06664v1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/28 ||Lantian Li|| &lt;br /&gt;
* Binary Code Ranking with Weighted Hamming Distance [[http://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Zhang_Binary_Code_Ranking_2013_CVPR_paper.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/05 || Chao Xing, Xiaoxi Wang||&lt;br /&gt;
* Generative Image Modeling Using Spatial LSTMs [[http://arxiv.org/pdf/1506.03478v2.pdf pdf]]&lt;br /&gt;
* Character-level Convolutional Networks for Text Classification [[http://arxiv.org/pdf/1509.01626.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/20 || Qixin Wang||&lt;br /&gt;
* Are You Talking to a Machine? [[http://arxiv.org/pdf/1505.05612v3.pdf pdf]]&lt;br /&gt;
* m-RNN [[http://arxiv.org/pdf/1412.6632v5.pdf pdf]]&lt;br /&gt;
* PresentationPPT [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/93/PresentationPaper--QixinWang20151120.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/27 || Xiaoxi Wang||&lt;br /&gt;
* NEURAL PROGRAMMER-INTERPRETERS [[http://arxiv.org/pdf/1511.06279v2.pdf pdf]]&lt;br /&gt;
* Subset Selection by Pareto Optimization [[http://www.researchgate.net/profile/Yang_Yu87/publication/282632653_Subset_Selection_by_Pareto_Optimization/links/561495d908aed47facee68b5.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/27 || Chao Xing ||&lt;br /&gt;
*Random Walks and Neural Network Language Models [[http://www.aclweb.org/anthology/N15-1165 pdf]]&lt;br /&gt;
*SENSEMBED: Learning Sense Embeddings forWord and Relational Similarity[[http://wwwusers.di.uniroma1.it/~navigli/pubs/ACL_2015_Iacobaccietal.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/4 || Dongxu Zhang, Qixin Wang, Chao Xing ||&lt;br /&gt;
*Building a shared world: Mapping distributional to model-theoretic semantic spaces[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Building_a_shared_world.pdf pdf]]&lt;br /&gt;
*Playing Atari with Deep Reinforcement Learning[[http://arxiv.org/pdf/1312.5602v1.pdf pdf]]&lt;br /&gt;
*Word Embedding Revisited A New Representation Learning and Explicit Matrix Factorization Perspective [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/Report-1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/11 || Chao Xing, Yiqiao Pan ||&lt;br /&gt;
*Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/57/Report-12-11-02.pdf pdf]]&lt;br /&gt;
*SENSE2VEC - A FAST AND ACCURATE METHOD FOR WORD SENSE DISAMBIGUATION IN NEURAL WORD EMBEDDINGS [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f0/Report-12-11-03.pdf pdf]]&lt;br /&gt;
*Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space[[http://arxiv.org/pdf/1504.06654v1.pdf pdf]]&lt;br /&gt;
*Distributional Semantics in Use[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/Report-12-11-01.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/18 || Tianyi Luo, Dongxu Zhang ||&lt;br /&gt;
*Human-level concept learning through probabilistic program induction(Cognitive Science) [[http://cdn1.almosthuman.cn/wp-content/uploads/2015/12/Human-level-concept-learning-through-probabilistic-program-induction.pdf pdf]]&lt;br /&gt;
*Cluster Analysis of Heterogeneous Rank Data(ICML 2007) [[http://machinelearning.wustl.edu/mlpapers/paper_files/icml2007_BusseOB07.pdf pdf]]&lt;br /&gt;
*Building a shared world: Mapping distributional to model-theoretic semantic spaces[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Building_a_shared_world.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/25 || Dongxu zhang, Qixin Wang ||&lt;br /&gt;
*Exploiting Multiple Sources for Open-domain Hypernym Discovery[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/d1/Exploiting_Multiple_Sources_for_Open-domain_Hypernym_Discovery.pdf]]&lt;br /&gt;
*learning semantic hierarchies via word embeddings[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4f/Learning_semantic_hierarchies_via_word_embeddings_acl2014.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/31 || Xiaoxi Wang, Chao Xing||&lt;br /&gt;
* Multilingual Language Processing From Bytes [[http://arxiv.org/pdf/1512.00103v1.pdf pdf]]&lt;br /&gt;
* Towards universal neural nets: Gibbs machines and ACE. [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Report-12-31.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/8 || Qixin Wang, Tianyi Luo||&lt;br /&gt;
*Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/Unveiling_the_Dreams_of_Word_Embeddings-_Towards_Language-Driven_Image_Generation.pdf pdf]]&lt;br /&gt;
*Generating Chinese Couplets using a Statistical MT Approach[[http://aclweb.org/anthology/C/C08/C08-1048.pdf pdf]]&lt;br /&gt;
*Generating Chinese Classical Poems with Statistical Machine Translation Models[[http://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/viewFile/4753/5314 pdf]]&lt;br /&gt;
*Chinese Poetry Generation with Recurrent Neural Networks[[http://www.aclweb.org/old_anthology/D/D14/D14-1074.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/15 || Chao Xing||&lt;br /&gt;
*Learning from Chris Dyer [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cd/Learning_From_Chris_Dyer.pptx ppt]]&lt;br /&gt;
*Learning Word Representations with Hierarchical Sparse Coding [[http://arxiv.org/pdf/1406.2035v2.pdf pdf]]&lt;br /&gt;
*Non-distributional Word Vector Representations [[http://www.cs.cmu.edu/~mfaruqui/papers/acl15-nondist.pdf pdf]]&lt;br /&gt;
*Sparse Overcomplete Word Vector Representations [[http://www.cs.cmu.edu/~mfaruqui/papers/acl15-overcomplete.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/22 || Qixin Wang, Tianyi Luo||&lt;br /&gt;
*Skip_thought_vector [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/aa/Skip_thought_vector.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/29 || Dongxu Zhang||&lt;br /&gt;
*Towards Neural Network-based Reasoning[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cb/Towards_Neural_Network-based_Reasoning.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/3/25 || Jiyuan Zhang||&lt;br /&gt;
*Modeling Temporal Dependencies in High-Dimensional Sequences:Application to Polyphonic Music Generation and Transcription[[http://www-etud.iro.umontreal.ca/~boulanni/ICML2012.pdf pdf]]&lt;br /&gt;
*Composing Music With Recurrent Neural Networks[[http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/   blog]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/1 || Chao Xing||&lt;br /&gt;
*Generating Text with Deep Reinforcement Learning[[http://arxiv.org/pdf/1510.09202v1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/8 || Tianyi Luo||&lt;br /&gt;
*Generating Chinese Classical Poems with RNN[[http://nlp.hivefire.com/articles/share/56982/]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/28 || Chao Xing||&lt;br /&gt;
*Knowledge Base Completion via Search-Based Question Answering [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b1/Knowledge_Base_Completion_via_Search-Based_Question_Answering_-_Report.pdf]]&lt;br /&gt;
*Open Domain Question Answering via Semantic Enrichment [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/15/Open_Domain_Question_Answering_via_Semantic_Enrichment_-_Report.pdf]]&lt;br /&gt;
*A Neural Conversational Model [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/15/A_Neural_Conversational_Model_-_Report.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/5/11 || Chao Xing||&lt;br /&gt;
*A Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion &lt;br /&gt;
*A Neural Network Approach to Context-Sensitive Generation of Conversational Responses&lt;br /&gt;
*Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models&lt;br /&gt;
*Neural Responding Machine for Short-Text Conversation&lt;br /&gt;
*Learning from Real Users Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems&lt;br /&gt;
|-&lt;br /&gt;
|2016/7/28 || Aiting Liu ||&lt;br /&gt;
*Intrinsic Subspace Evaluation of Word Embedding Representations    [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/68/Intrinsic_Subspace_Evaluation_of_Word_Embedding_Representations.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/17/Intrinsic_Subspace_Evaluation_of_Word_Embedding_Representations.pptx slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/4 || &lt;br /&gt;
* Aodong Li&lt;br /&gt;
* Jiyuan Zhang&lt;br /&gt;
* Andi Zhang &lt;br /&gt;
||&lt;br /&gt;
*On the Role of Seed Lexicons in Learning Bilingual Word Embeddings   [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5b/On_the_Role_of_Seed_Lexicons_in_Learning_Bilingual_Word_Embeddings.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/51/2.0On_the_Role_of_Seed_Lexicons_in_Learning_Bilingual_Word_Embeddings_.pdf slides]]&lt;br /&gt;
*ABCNN- Attention-Based Convolutional Neural Network for Modeling Sentence Pairs &lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/76/ABCNN-_Attention-Based_Convolutional_Neural_Network_for_Modeling_Sentence_Pairs_.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/0d/Attention-Based_Convolutianal_Neural_Network_for_Modeling_Sentence_Pairs.pptx slides]]&lt;br /&gt;
*[[Tutorial]]: Introduction to different LMs: NNLM, RNNLM, continuous bag of words, skip-gram&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Models_for_computing_continuous_vector_representations_of_words.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/18 || &lt;br /&gt;
* Shiyao Li &lt;br /&gt;
* Aiting Liu &lt;br /&gt;
||&lt;br /&gt;
*Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a4/Multilingual_part-of-speech_tagging_with_bidirectional_long_short-term_memory_models_and_auxiliary_loss.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Multilingual_Part-of-Speech_Tagging_with.pdf slides]]&lt;br /&gt;
*A Sentence Interaction Network for Modeling Dependence between Sentences&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/04/A_Sentence_Interaction_Network_for_Modeling_Dependence_between_Sentences.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6c/A_Sentence_Interaction_Network_for_Modeling_Dependence_between_Sentences.pptx slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/25 || &lt;br /&gt;
* Ziwei Bai &lt;br /&gt;
||&lt;br /&gt;
*[[Tutorial]]: Tensorflow guidelines and some examples&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/32/Tensor_flow_bai.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/26 || &lt;br /&gt;
* Jiyuan Zhang&lt;br /&gt;
* Shiyao Li &lt;br /&gt;
||&lt;br /&gt;
*[[Tutorial]]: Introduction to GRU, LSTM, RBM [[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:An_overview_of_LSTM,GRU,RBM.pptx slides]]&lt;br /&gt;
* [[Tutorial]] : Linear Algebra, Probability and Information Basics&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f1/LinearAlgebra.pdf LinearAlgebra]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/ProbilityTheoryandInformationTheory.pdf ProbabilityAndInformationTheory]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/9 || &lt;br /&gt;
* Aodong Li&lt;br /&gt;
||&lt;br /&gt;
*Pointing the Unknown Words [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c7/Pointing_the_Unknown_Words.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/18 || &lt;br /&gt;
* Andy Zhang &lt;br /&gt;
*Shiyao Li&lt;br /&gt;
*Aodong Li&lt;br /&gt;
||&lt;br /&gt;
*Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis  [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/04/Large-Scale_Information_Extraction_from_Textual_Definitions_through_Deep_Syntactic_and_Semantic_Analysis.pdf pdf]][[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Large-scale_information_extraction_from_textual_definitions_through_deep_syntactic_and_semantic_analysis.pdf slides]]&lt;br /&gt;
*Finding the Middle Ground - A Model for Planning Satisficing Answers [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b3/Finding_the_Middle_Ground_-_A_Model_for_Planning_Satisficing_Answers.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/44/Finding_the_Middle_Ground.pdf slides]]&lt;br /&gt;
*Compressing Neural Language Models by Sparse Word Representations [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b6/Compressing_Neural_Language_Models_by_Sparse_Word_Representations.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/30 || &lt;br /&gt;
* Jiyuan Zhang &lt;br /&gt;
*Shiyue Zhang&lt;br /&gt;
||&lt;br /&gt;
*On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fa/On-line_Active_Reward_Learning_for_Policy_Optimisation_in_Spoken_Dialogue_Systems.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/On-line_Active_Reward.pdf slides]]&lt;br /&gt;
*Stack-propagation: Improved Representation Learning for Syntax&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bf/Stack-propagation-_Improved_Representation_Learning_for_Syntax.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5d/Stack_propagation.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2017/5/18 || &lt;br /&gt;
*Shiyue Zhang&lt;br /&gt;
||&lt;br /&gt;
*Convolutional Sequence to Sequence Learning [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f3/Conv_seq2seq.pptx slides]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/Cnn_seq2seq.pdf pdf]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Reading_table</id>
		<title>Reading table</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Reading_table"/>
				<updated>2017-05-18T02:50:56Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker !! Materials  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/22  ||Zhang Dong Xu|| Why RNN? [[媒体文件:Why_LSTM.pdf|PPT]] [[媒体文件:Learning_Long-Term_Dependencies_with_Gradient_Descent_is_Difficult.pdf|paper 1]],[[媒体文件:LongShortTermMemory.pdf|paper  2]]&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot;| 2014/12/8 || rowspan='3'|Liu Rong || Yu Zhao, Zhiyuan Liu, Maosong Sun. Phrase Type Sensitive Tensor Indexing Model for Semantic Composition. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_tim.pdf pdf]&lt;br /&gt;
|-&lt;br /&gt;
| Yang Liu, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun. Topical Word Embeddings. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_twe.pdf pdf][https://github.com/largelymfs/topical_word_embeddings code]&lt;br /&gt;
|-&lt;br /&gt;
|  Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, Xuan Zhu. Learning Entity and Relation Embeddings for Knowledge Graph Completion. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_transr.pdf pdf][https://github.com/mrlyk423/relation_extraction code]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/10 ||Liu Rong|| &lt;br /&gt;
*Context-Dependent Translation Selection Using Convolutional Neural Network [http://arxiv.org/abs/1503.02357]&lt;br /&gt;
*Syntax-based Deep Matching of Short Texts [http://arxiv.org/abs/1503.02427]&lt;br /&gt;
*Convolutional Neural Network Architectures for Matching Natural Language Sentences[http://www.hangli-hl.com/uploads/3/1/6/8/3168008/hu-etal-nips2014.pdf]&lt;br /&gt;
*LSTM: A Search Space Odyssey [http://arxiv.org/pdf/1503.04069.pdf]&lt;br /&gt;
*A Deep Embedding Model for Co-occurrence Learning  [http://arxiv.org/abs/1504.02824]&lt;br /&gt;
*Text segmentation based on semantic word embeddings[http://arxiv.org/abs/1503.05543]&lt;br /&gt;
*Semantic Parsing via Paraphrasing [http://www.cs.tau.ac.il/research/jonathan.berant/homepage_files/publications/ACL14.pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/22 ||Dong Wang|| &lt;br /&gt;
*From Word Embeddings To Document Distances [http://jmlr.org/proceedings/papers/v37/kusnerb15.pdf pdf]&lt;br /&gt;
*[[Asr-read-icml|Reading list for ICML2015]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/29 ||Xiaoxi Wang|| &lt;br /&gt;
* Sequence to Sequence Learning with Neural Networks [http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks pdf]&lt;br /&gt;
* Neural Machine Translation by Jointly Learning to Align and Translate [http://arxiv.org/abs/1409.0473 pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/05 ||Tianyi Luo|| &lt;br /&gt;
* A Hierarchical Knowledge Representation for Expert Finding on Social Media (ACL 2015 short paper) [[http://aclanthology.info/papers/a-hierarchical-knowledge-representation-for-expert-finding-on-social-media pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/05 ||Dongxu Zhang||&lt;br /&gt;
* Describing Multimedia Content using Attention-based Encoder-Decoder Networks[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e0/Describing_Multimedia_Content_using_Attention-based_Encoder-Decoder_Networks.pdf]&lt;br /&gt;
* Attention-Based Models for Speech Recognition[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/58/Attention-Based_Models_for_Speech_Recognition.pdf] details in speech recognition.&lt;br /&gt;
* Neural Machine Translation by Jointly Learning to Align and Translate[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c3/Neural_Machine_Translation_by_Joint_Learning_to_Align_and_Translate.pdf] details in machine translation.&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/07 ||Chao Xing|| &lt;br /&gt;
* Neural Word Embedding as Implicit Matrix Factorization [[http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization.pdf pdf]]&lt;br /&gt;
* Matrix factorization techniques for recommender systems [[http://www.columbia.edu/~jwp2128/Teaching/W4721/papers/ieeecomputer.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/14 ||Tianyi Luo, Dongxu Zhang, Chao Xing|| &lt;br /&gt;
* Memory Networks (ICLR 2015) [[http://arxiv.org/pdf/1410.3916v10.pdf pdf]]&lt;br /&gt;
* End-To-End Memory Networks (NIPS 2015) [[http://arxiv.org/pdf/1503.08895v4.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/20 ||Tianyi Luo, Xiaoxi Wang|| &lt;br /&gt;
* The Kendall and Mallows Kernels for Permutations (ICML 2015) [[http://jmlr.csail.mit.edu/proceedings/papers/v37/jiao15.pdf pdf]]&lt;br /&gt;
* The ordering of expression among a few genes can provide simple cancer biomarkers and signal BRCA1 mutations (BMC Bioinformatics) [[http://www.biomedcentral.com/content/pdf/1471-2105-10-256.pdf pdf]]&lt;br /&gt;
* Reasoning about Entailment with Neural Attention [[http://arxiv.org/pdf/1509.06664v1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/28 ||Lantian Li|| &lt;br /&gt;
* Binary Code Ranking with Weighted Hamming Distance [[http://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Zhang_Binary_Code_Ranking_2013_CVPR_paper.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/05 || Chao Xing, Xiaoxi Wang||&lt;br /&gt;
* Generative Image Modeling Using Spatial LSTMs [[http://arxiv.org/pdf/1506.03478v2.pdf pdf]]&lt;br /&gt;
* Character-level Convolutional Networks for Text Classification [[http://arxiv.org/pdf/1509.01626.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/20 || Qixin Wang||&lt;br /&gt;
* Are You Talking to a Machine? [[http://arxiv.org/pdf/1505.05612v3.pdf pdf]]&lt;br /&gt;
* m-RNN [[http://arxiv.org/pdf/1412.6632v5.pdf pdf]]&lt;br /&gt;
* PresentationPPT [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/93/PresentationPaper--QixinWang20151120.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/27 || Xiaoxi Wang||&lt;br /&gt;
* NEURAL PROGRAMMER-INTERPRETERS [[http://arxiv.org/pdf/1511.06279v2.pdf pdf]]&lt;br /&gt;
* Subset Selection by Pareto Optimization [[http://www.researchgate.net/profile/Yang_Yu87/publication/282632653_Subset_Selection_by_Pareto_Optimization/links/561495d908aed47facee68b5.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/27 || Chao Xing ||&lt;br /&gt;
*Random Walks and Neural Network Language Models [[http://www.aclweb.org/anthology/N15-1165 pdf]]&lt;br /&gt;
*SensEmbed: Learning Sense Embeddings for Word and Relational Similarity [[http://wwwusers.di.uniroma1.it/~navigli/pubs/ACL_2015_Iacobaccietal.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/4 || Dongxu Zhang, Qixin Wang, Chao Xing ||&lt;br /&gt;
*Building a shared world: Mapping distributional to model-theoretic semantic spaces[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Building_a_shared_world.pdf pdf]]&lt;br /&gt;
*Playing Atari with Deep Reinforcement Learning[[http://arxiv.org/pdf/1312.5602v1.pdf pdf]]&lt;br /&gt;
*Word Embedding Revisited A New Representation Learning and Explicit Matrix Factorization Perspective [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/Report-1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/11 || Chao Xing, Yiqiao Pan ||&lt;br /&gt;
*Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/57/Report-12-11-02.pdf pdf]]&lt;br /&gt;
*Sense2vec - A Fast and Accurate Method for Word Sense Disambiguation in Neural Word Embeddings [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f0/Report-12-11-03.pdf pdf]]&lt;br /&gt;
*Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space[[http://arxiv.org/pdf/1504.06654v1.pdf pdf]]&lt;br /&gt;
*Distributional Semantics in Use[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/Report-12-11-01.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/18 || Tianyi Luo, Dongxu Zhang ||&lt;br /&gt;
*Human-level concept learning through probabilistic program induction (Science) [[http://cdn1.almosthuman.cn/wp-content/uploads/2015/12/Human-level-concept-learning-through-probabilistic-program-induction.pdf pdf]]&lt;br /&gt;
*Cluster Analysis of Heterogeneous Rank Data (ICML 2007) [[http://machinelearning.wustl.edu/mlpapers/paper_files/icml2007_BusseOB07.pdf pdf]]&lt;br /&gt;
*Building a shared world: Mapping distributional to model-theoretic semantic spaces[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Building_a_shared_world.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/25 || Dongxu Zhang, Qixin Wang ||&lt;br /&gt;
*Exploiting Multiple Sources for Open-domain Hypernym Discovery[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/d1/Exploiting_Multiple_Sources_for_Open-domain_Hypernym_Discovery.pdf]]&lt;br /&gt;
*Learning Semantic Hierarchies via Word Embeddings [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4f/Learning_semantic_hierarchies_via_word_embeddings_acl2014.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/31 || Xiaoxi Wang, Chao Xing||&lt;br /&gt;
* Multilingual Language Processing From Bytes [[http://arxiv.org/pdf/1512.00103v1.pdf pdf]]&lt;br /&gt;
* Towards universal neural nets: Gibbs machines and ACE. [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Report-12-31.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/8 || Qixin Wang, Tianyi Luo||&lt;br /&gt;
*Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/Unveiling_the_Dreams_of_Word_Embeddings-_Towards_Language-Driven_Image_Generation.pdf pdf]]&lt;br /&gt;
*Generating Chinese Couplets using a Statistical MT Approach[[http://aclweb.org/anthology/C/C08/C08-1048.pdf pdf]]&lt;br /&gt;
*Generating Chinese Classical Poems with Statistical Machine Translation Models[[http://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/viewFile/4753/5314 pdf]]&lt;br /&gt;
*Chinese Poetry Generation with Recurrent Neural Networks[[http://www.aclweb.org/old_anthology/D/D14/D14-1074.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/15 || Chao Xing||&lt;br /&gt;
*Learning from Chris Dyer [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cd/Learning_From_Chris_Dyer.pptx ppt]]&lt;br /&gt;
*Learning Word Representations with Hierarchical Sparse Coding [[http://arxiv.org/pdf/1406.2035v2.pdf pdf]]&lt;br /&gt;
*Non-distributional Word Vector Representations [[http://www.cs.cmu.edu/~mfaruqui/papers/acl15-nondist.pdf pdf]]&lt;br /&gt;
*Sparse Overcomplete Word Vector Representations [[http://www.cs.cmu.edu/~mfaruqui/papers/acl15-overcomplete.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/22 || Qixin Wang, Tianyi Luo||&lt;br /&gt;
*Skip-Thought Vectors [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/aa/Skip_thought_vector.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/29 || Dongxu Zhang||&lt;br /&gt;
*Towards Neural Network-based Reasoning[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cb/Towards_Neural_Network-based_Reasoning.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/3/25 || Jiyuan Zhang||&lt;br /&gt;
*Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription [[http://www-etud.iro.umontreal.ca/~boulanni/ICML2012.pdf pdf]]&lt;br /&gt;
*Composing Music With Recurrent Neural Networks[[http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/   blog]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/1 || Chao Xing||&lt;br /&gt;
*Generating Text with Deep Reinforcement Learning[[http://arxiv.org/pdf/1510.09202v1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/8 || Tianyi Luo||&lt;br /&gt;
*Generating Chinese Classical Poems with RNN[[http://nlp.hivefire.com/articles/share/56982/]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/28 || Chao Xing||&lt;br /&gt;
*Knowledge Base Completion via Search-Based Question Answering [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b1/Knowledge_Base_Completion_via_Search-Based_Question_Answering_-_Report.pdf]]&lt;br /&gt;
*Open Domain Question Answering via Semantic Enrichment [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/15/Open_Domain_Question_Answering_via_Semantic_Enrichment_-_Report.pdf]]&lt;br /&gt;
*A Neural Conversational Model [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/15/A_Neural_Conversational_Model_-_Report.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/5/11 || Chao Xing||&lt;br /&gt;
*A Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion &lt;br /&gt;
*A Neural Network Approach to Context-Sensitive Generation of Conversational Responses&lt;br /&gt;
*Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models&lt;br /&gt;
*Neural Responding Machine for Short-Text Conversation&lt;br /&gt;
*Learning from Real Users Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems&lt;br /&gt;
|-&lt;br /&gt;
|2016/7/28 || Aiting Liu ||&lt;br /&gt;
*Intrinsic Subspace Evaluation of Word Embedding Representations    [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/68/Intrinsic_Subspace_Evaluation_of_Word_Embedding_Representations.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/17/Intrinsic_Subspace_Evaluation_of_Word_Embedding_Representations.pptx slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/4 || &lt;br /&gt;
* Aodong Li&lt;br /&gt;
* Jiyuan Zhang&lt;br /&gt;
* Andi Zhang &lt;br /&gt;
||&lt;br /&gt;
*On the Role of Seed Lexicons in Learning Bilingual Word Embeddings   [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5b/On_the_Role_of_Seed_Lexicons_in_Learning_Bilingual_Word_Embeddings.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/51/2.0On_the_Role_of_Seed_Lexicons_in_Learning_Bilingual_Word_Embeddings_.pdf slides]]&lt;br /&gt;
*ABCNN- Attention-Based Convolutional Neural Network for Modeling Sentence Pairs &lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/76/ABCNN-_Attention-Based_Convolutional_Neural_Network_for_Modeling_Sentence_Pairs_.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/0d/Attention-Based_Convolutianal_Neural_Network_for_Modeling_Sentence_Pairs.pptx slides]]&lt;br /&gt;
*[[Tutorial]]: Introduction to different LMs: NNLM, RNNLM, continuous bag of words, skip-gram&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Models_for_computing_continuous_vector_representations_of_words.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/18 || &lt;br /&gt;
* Shiyao Li &lt;br /&gt;
* Aiting Liu &lt;br /&gt;
||&lt;br /&gt;
*Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a4/Multilingual_part-of-speech_tagging_with_bidirectional_long_short-term_memory_models_and_auxiliary_loss.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Multilingual_Part-of-Speech_Tagging_with.pdf slides]]&lt;br /&gt;
*A Sentence Interaction Network for Modeling Dependence between Sentences&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/04/A_Sentence_Interaction_Network_for_Modeling_Dependence_between_Sentences.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6c/A_Sentence_Interaction_Network_for_Modeling_Dependence_between_Sentences.pptx slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/25 || &lt;br /&gt;
* Ziwei Bai &lt;br /&gt;
||&lt;br /&gt;
*[[Tutorial]]: Tensorflow guidelines and some examples&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/32/Tensor_flow_bai.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/26 || &lt;br /&gt;
* Jiyuan Zhang&lt;br /&gt;
* Shiyao Li &lt;br /&gt;
||&lt;br /&gt;
*[[Tutorial]]: Introduction to GRU, LSTM, RBM [[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:An_overview_of_LSTM,GRU,RBM.pptx slides]]&lt;br /&gt;
* [[Tutorial]] : Linear Algebra, Probability and Information Basics&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f1/LinearAlgebra.pdf LinearAlgebra]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/ProbilityTheoryandInformationTheory.pdf ProbabilityAndInformationTheory]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/9 || &lt;br /&gt;
* Aodong Li&lt;br /&gt;
||&lt;br /&gt;
*Pointing the Unknown Words [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c7/Pointing_the_Unknown_Words.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/18 || &lt;br /&gt;
* Andi Zhang &lt;br /&gt;
*Shiyao Li&lt;br /&gt;
*Aodong Li&lt;br /&gt;
||&lt;br /&gt;
*Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis  [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/04/Large-Scale_Information_Extraction_from_Textual_Definitions_through_Deep_Syntactic_and_Semantic_Analysis.pdf pdf]][[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Large-scale_information_extraction_from_textual_definitions_through_deep_syntactic_and_semantic_analysis.pdf slides]]&lt;br /&gt;
*Finding the Middle Ground - A Model for Planning Satisficing Answers [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b3/Finding_the_Middle_Ground_-_A_Model_for_Planning_Satisficing_Answers.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/44/Finding_the_Middle_Ground.pdf slides]]&lt;br /&gt;
*Compressing Neural Language Models by Sparse Word Representations [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b6/Compressing_Neural_Language_Models_by_Sparse_Word_Representations.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/30 || &lt;br /&gt;
* Jiyuan Zhang &lt;br /&gt;
*Shiyue Zhang&lt;br /&gt;
||&lt;br /&gt;
*On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fa/On-line_Active_Reward_Learning_for_Policy_Optimisation_in_Spoken_Dialogue_Systems.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/On-line_Active_Reward.pdf slides]]&lt;br /&gt;
*Stack-propagation: Improved Representation Learning for Syntax&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bf/Stack-propagation-_Improved_Representation_Learning_for_Syntax.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5d/Stack_propagation.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2017/5/18 || &lt;br /&gt;
*Shiyue Zhang&lt;br /&gt;
||&lt;br /&gt;
*Convolutional Sequence to Sequence Learning [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f3/Conv_seq2seq.pptx slides]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Conv_seq2seq.pptx</id>
		<title>文件:Conv seq2seq.pptx</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Conv_seq2seq.pptx"/>
				<updated>2017-05-18T02:49:02Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cnn_seq2seq.pdf</id>
		<title>文件:Cnn seq2seq.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cnn_seq2seq.pdf"/>
				<updated>2017-05-18T02:47:50Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Reading_table</id>
		<title>Reading table</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Reading_table"/>
				<updated>2017-05-18T02:46:14Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Speaker!! Materials  &lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot;|2014/10/22  ||Zhang Dong Xu|| Why RNN? [[媒体文件:Why_LSTM.pdf|PPT]] [[媒体文件:Learning_Long-Term_Dependencies_with_Gradient_Descent_is_Difficult.pdf|paper 1]],[[媒体文件:LongShortTermMemory.pdf|paper  2]]&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot;| 2014/12/8 || rowspan='3'|Liu Rong || Yu Zhao, Zhiyuan Liu, Maosong Sun. Phrase Type Sensitive Tensor Indexing Model for Semantic Composition. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_tim.pdf pdf]&lt;br /&gt;
|-&lt;br /&gt;
| Yang Liu, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun. Topical Word Embeddings. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_twe.pdf pdf][https://github.com/largelymfs/topical_word_embeddings code]&lt;br /&gt;
|-&lt;br /&gt;
|  Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, Xuan Zhu. Learning Entity and Relation Embeddings for Knowledge Graph Completion. AAAI'15. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2015_transr.pdf pdf][https://github.com/mrlyk423/relation_extraction code]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/10 ||Liu Rong|| &lt;br /&gt;
*Context-Dependent Translation Selection Using Convolutional Neural Network [http://arxiv.org/abs/1503.02357]&lt;br /&gt;
*Syntax-based Deep Matching of Short Texts [http://arxiv.org/abs/1503.02427]&lt;br /&gt;
*Convolutional Neural Network Architectures for Matching Natural Language Sentences[http://www.hangli-hl.com/uploads/3/1/6/8/3168008/hu-etal-nips2014.pdf]&lt;br /&gt;
*LSTM: A Search Space Odyssey [http://arxiv.org/pdf/1503.04069.pdf]&lt;br /&gt;
*A Deep Embedding Model for Co-occurrence Learning  [http://arxiv.org/abs/1504.02824]&lt;br /&gt;
*Text segmentation based on semantic word embeddings[http://arxiv.org/abs/1503.05543]&lt;br /&gt;
*Semantic Parsing via Paraphrasing [http://www.cs.tau.ac.il/research/jonathan.berant/homepage_files/publications/ACL14.pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/22 ||Dong Wang|| &lt;br /&gt;
*From Word Embeddings To Document Distances [http://jmlr.org/proceedings/papers/v37/kusnerb15.pdf pdf]&lt;br /&gt;
*[[Asr-read-icml|Reading list for ICML2015]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/07/29 ||Xiaoxi Wang|| &lt;br /&gt;
* Sequence to Sequence Learning with Neural Networks [http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks pdf]&lt;br /&gt;
* Neural Machine Translation by Jointly Learning to Align and Translate [http://arxiv.org/abs/1409.0473 pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/05 ||Tianyi Luo|| &lt;br /&gt;
* A Hierarchical Knowledge Representation for Expert Finding on Social Media (ACL 2015 short paper) [[http://aclanthology.info/papers/a-hierarchical-knowledge-representation-for-expert-finding-on-social-media pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/05 ||Dongxu Zhang||&lt;br /&gt;
* Describing Multimedia Content using Attention-based Encoder-Decoder Networks[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e0/Describing_Multimedia_Content_using_Attention-based_Encoder-Decoder_Networks.pdf]&lt;br /&gt;
* Attention-Based Models for Speech Recognition[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/58/Attention-Based_Models_for_Speech_Recognition.pdf] details in speech recognition.&lt;br /&gt;
* Neural Machine Translation by Jointly Learning to Align and Translate[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c3/Neural_Machine_Translation_by_Joint_Learning_to_Align_and_Translate.pdf] details in machine translation.&lt;br /&gt;
|-&lt;br /&gt;
|2015/08/07 ||Chao Xing|| &lt;br /&gt;
* Neural Word Embedding as Implicit Matrix Factorization [[http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization.pdf pdf]]&lt;br /&gt;
* Matrix factorization techniques for recommender systems [[http://www.columbia.edu/~jwp2128/Teaching/W4721/papers/ieeecomputer.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/14 ||Tianyi Luo, Dongxu Zhang, Chao Xing|| &lt;br /&gt;
* Memory Networks (ICLR 2015) [[http://arxiv.org/pdf/1410.3916v10.pdf pdf]]&lt;br /&gt;
* End-To-End Memory Networks (NIPS 2015) [[http://arxiv.org/pdf/1503.08895v4.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/20 ||Tianyi Luo, Xiaoxi Wang|| &lt;br /&gt;
* The Kendall and Mallows Kernels for Permutations (ICML 2015) [[http://jmlr.csail.mit.edu/proceedings/papers/v37/jiao15.pdf pdf]]&lt;br /&gt;
* The ordering of expression among a few genes can provide simple cancer biomarkers and signal BRCA1 mutations (BMC Bioinformatics) [[http://www.biomedcentral.com/content/pdf/1471-2105-10-256.pdf pdf]]&lt;br /&gt;
* Reasoning about Entailment with Neural Attention [[http://arxiv.org/pdf/1509.06664v1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/10/28 ||Lantian Li|| &lt;br /&gt;
* Binary Code Ranking with Weighted Hamming Distance [[http://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Zhang_Binary_Code_Ranking_2013_CVPR_paper.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/05 || Chao Xing, Xiaoxi Wang||&lt;br /&gt;
* Generative Image Modeling Using Spatial LSTMs [[http://arxiv.org/pdf/1506.03478v2.pdf pdf]]&lt;br /&gt;
* Character-level Convolutional Networks for Text Classification [[http://arxiv.org/pdf/1509.01626.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/20 || Qixin Wang||&lt;br /&gt;
* Are You Talking to a Machine? [[http://arxiv.org/pdf/1505.05612v3.pdf pdf]]&lt;br /&gt;
* m-RNN [[http://arxiv.org/pdf/1412.6632v5.pdf pdf]]&lt;br /&gt;
* PresentationPPT [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/93/PresentationPaper--QixinWang20151120.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/27 || Xiaoxi Wang||&lt;br /&gt;
* NEURAL PROGRAMMER-INTERPRETERS [[http://arxiv.org/pdf/1511.06279v2.pdf pdf]]&lt;br /&gt;
* Subset Selection by Pareto Optimization [[http://www.researchgate.net/profile/Yang_Yu87/publication/282632653_Subset_Selection_by_Pareto_Optimization/links/561495d908aed47facee68b5.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/11/27 || Chao Xing ||&lt;br /&gt;
*Random Walks and Neural Network Language Models [[http://www.aclweb.org/anthology/N15-1165 pdf]]&lt;br /&gt;
*SensEmbed: Learning Sense Embeddings for Word and Relational Similarity [[http://wwwusers.di.uniroma1.it/~navigli/pubs/ACL_2015_Iacobaccietal.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/4 || Dongxu Zhang, Qixin Wang, Chao Xing ||&lt;br /&gt;
*Building a shared world: Mapping distributional to model-theoretic semantic spaces[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Building_a_shared_world.pdf pdf]]&lt;br /&gt;
*Playing Atari with Deep Reinforcement Learning[[http://arxiv.org/pdf/1312.5602v1.pdf pdf]]&lt;br /&gt;
*Word Embedding Revisited A New Representation Learning and Explicit Matrix Factorization Perspective [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/45/Report-1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/11 || Chao Xing, Yiqiao Pan ||&lt;br /&gt;
*Semi-Supervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/57/Report-12-11-02.pdf pdf]]&lt;br /&gt;
*Sense2vec - A Fast and Accurate Method for Word Sense Disambiguation in Neural Word Embeddings [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f0/Report-12-11-03.pdf pdf]]&lt;br /&gt;
*Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space[[http://arxiv.org/pdf/1504.06654v1.pdf pdf]]&lt;br /&gt;
*Distributional Semantics in Use[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/Report-12-11-01.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/18 || Tianyi Luo, Dongxu Zhang ||&lt;br /&gt;
*Human-level concept learning through probabilistic program induction (Science) [[http://cdn1.almosthuman.cn/wp-content/uploads/2015/12/Human-level-concept-learning-through-probabilistic-program-induction.pdf pdf]]&lt;br /&gt;
*Cluster Analysis of Heterogeneous Rank Data (ICML 2007) [[http://machinelearning.wustl.edu/mlpapers/paper_files/icml2007_BusseOB07.pdf pdf]]&lt;br /&gt;
*Building a shared world: Mapping distributional to model-theoretic semantic spaces[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Building_a_shared_world.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/25 || Dongxu Zhang, Qixin Wang ||&lt;br /&gt;
*Exploiting Multiple Sources for Open-domain Hypernym Discovery[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/d1/Exploiting_Multiple_Sources_for_Open-domain_Hypernym_Discovery.pdf]]&lt;br /&gt;
*Learning Semantic Hierarchies via Word Embeddings [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4f/Learning_semantic_hierarchies_via_word_embeddings_acl2014.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2015/12/31 || Xiaoxi Wang, Chao Xing||&lt;br /&gt;
* Multilingual Language Processing From Bytes [[http://arxiv.org/pdf/1512.00103v1.pdf pdf]]&lt;br /&gt;
* Towards universal neural nets: Gibbs machines and ACE. [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3b/Report-12-31.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/8 || Qixin Wang, Tianyi Luo||&lt;br /&gt;
*Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/Unveiling_the_Dreams_of_Word_Embeddings-_Towards_Language-Driven_Image_Generation.pdf pdf]]&lt;br /&gt;
*Generating Chinese Couplets using a Statistical MT Approach[[http://aclweb.org/anthology/C/C08/C08-1048.pdf pdf]]&lt;br /&gt;
*Generating Chinese Classical Poems with Statistical Machine Translation Models[[http://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/viewFile/4753/5314 pdf]]&lt;br /&gt;
*Chinese Poetry Generation with Recurrent Neural Networks[[http://www.aclweb.org/old_anthology/D/D14/D14-1074.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/15 || Chao Xing||&lt;br /&gt;
*Learning from Chris Dyer [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cd/Learning_From_Chris_Dyer.pptx ppt]]&lt;br /&gt;
*Learning Word Representations with Hierarchical Sparse Coding [[http://arxiv.org/pdf/1406.2035v2.pdf pdf]]&lt;br /&gt;
*Non-distributional Word Vector Representations [[http://www.cs.cmu.edu/~mfaruqui/papers/acl15-nondist.pdf pdf]]&lt;br /&gt;
*Sparse Overcomplete Word Vector Representations [[http://www.cs.cmu.edu/~mfaruqui/papers/acl15-overcomplete.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/22 || Qixin Wang, Tianyi Luo||&lt;br /&gt;
*Skip-Thought Vectors [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/aa/Skip_thought_vector.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/1/29 || Dongxu Zhang||&lt;br /&gt;
*Towards Neural Network-based Reasoning[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cb/Towards_Neural_Network-based_Reasoning.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/3/25 || Jiyuan Zhang||&lt;br /&gt;
*Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription [[http://www-etud.iro.umontreal.ca/~boulanni/ICML2012.pdf pdf]]&lt;br /&gt;
*Composing Music With Recurrent Neural Networks[[http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/   blog]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/1 || Chao Xing||&lt;br /&gt;
*Generating Text with Deep Reinforcement Learning[[http://arxiv.org/pdf/1510.09202v1.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/8 || Tianyi Luo||&lt;br /&gt;
*Generating Chinese Classical Poems with RNN[[http://nlp.hivefire.com/articles/share/56982/]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/4/28 || Chao Xing||&lt;br /&gt;
*Knowledge Base Completion via Search-Based Question Answering [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b1/Knowledge_Base_Completion_via_Search-Based_Question_Answering_-_Report.pdf]]&lt;br /&gt;
*Open Domain Question Answering via Semantic Enrichment [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/15/Open_Domain_Question_Answering_via_Semantic_Enrichment_-_Report.pdf]]&lt;br /&gt;
*A Neural Conversational Model [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/15/A_Neural_Conversational_Model_-_Report.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/5/11 || Chao Xing||&lt;br /&gt;
*A Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion &lt;br /&gt;
*A Neural Network Approach to Context-Sensitive Generation of Conversational Responses&lt;br /&gt;
*Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models&lt;br /&gt;
*Neural Responding Machine for Short-Text Conversation&lt;br /&gt;
*Learning from Real Users Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems&lt;br /&gt;
|-&lt;br /&gt;
|2016/7/28 || Aiting Liu ||&lt;br /&gt;
*Intrinsic Subspace Evaluation of Word Embedding Representations    [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/68/Intrinsic_Subspace_Evaluation_of_Word_Embedding_Representations.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/1/17/Intrinsic_Subspace_Evaluation_of_Word_Embedding_Representations.pptx slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/4 || &lt;br /&gt;
* Aodong Li&lt;br /&gt;
* Jiyuan Zhang&lt;br /&gt;
* Andi Zhang &lt;br /&gt;
||&lt;br /&gt;
*On the Role of Seed Lexicons in Learning Bilingual Word Embeddings   [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5b/On_the_Role_of_Seed_Lexicons_in_Learning_Bilingual_Word_Embeddings.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/51/2.0On_the_Role_of_Seed_Lexicons_in_Learning_Bilingual_Word_Embeddings_.pdf slides]]&lt;br /&gt;
*ABCNN- Attention-Based Convolutional Neural Network for Modeling Sentence Pairs &lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/76/ABCNN-_Attention-Based_Convolutional_Neural_Network_for_Modeling_Sentence_Pairs_.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/0d/Attention-Based_Convolutianal_Neural_Network_for_Modeling_Sentence_Pairs.pptx slides]]&lt;br /&gt;
*[[Tutorial]]: Introduction to different LMs: NNLM, RNNLM, continuous bag of words, skip-gram&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/d/da/Models_for_computing_continuous_vector_representations_of_words.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/18 || &lt;br /&gt;
* Shiyao Li &lt;br /&gt;
* Aiting Liu &lt;br /&gt;
||&lt;br /&gt;
*Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a4/Multilingual_part-of-speech_tagging_with_bidirectional_long_short-term_memory_models_and_auxiliary_loss.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Multilingual_Part-of-Speech_Tagging_with.pdf slides]]&lt;br /&gt;
*A Sentence Interaction Network for Modeling Dependence between Sentences&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/04/A_Sentence_Interaction_Network_for_Modeling_Dependence_between_Sentences.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6c/A_Sentence_Interaction_Network_for_Modeling_Dependence_between_Sentences.pptx slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/25 || &lt;br /&gt;
* Ziwei Bai &lt;br /&gt;
||&lt;br /&gt;
*[[Tutorial]]: Tensorflow guidelines and some examples&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/32/Tensor_flow_bai.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/8/26 || &lt;br /&gt;
* Jiyuan Zhang&lt;br /&gt;
* Shiyao Li &lt;br /&gt;
||&lt;br /&gt;
*[[Tutorial]]: Introduction to GRU, LSTM, RBM [[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:An_overview_of_LSTM,GRU,RBM.pptx slides]]&lt;br /&gt;
* [[Tutorial]] : Linear Algebra, Probability and Information Basics&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f1/LinearAlgebra.pdf LinearAlgebra]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/ProbilityTheoryandInformationTheory.pdf ProbabilityAndInformationTheory]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/9 || &lt;br /&gt;
* Aodong Li&lt;br /&gt;
||&lt;br /&gt;
*Pointing the Unknown Words [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c7/Pointing_the_Unknown_Words.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/18 || &lt;br /&gt;
* Andy Zhang &lt;br /&gt;
*Shiyao Li&lt;br /&gt;
*Aodong Li&lt;br /&gt;
||&lt;br /&gt;
*Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis  [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/04/Large-Scale_Information_Extraction_from_Textual_Definitions_through_Deep_Syntactic_and_Semantic_Analysis.pdf pdf]][[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Large-scale_information_extraction_from_textual_definitions_through_deep_syntactic_and_semantic_analysis.pdf slides]]&lt;br /&gt;
*Finding the Middle Ground - A Model for Planning Satisficing Answers [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b3/Finding_the_Middle_Ground_-_A_Model_for_Planning_Satisficing_Answers.pdf pdf]]&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/44/Finding_the_Middle_Ground.pdf slides]]&lt;br /&gt;
*Compressing Neural Language Models by Sparse Word Representations [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b6/Compressing_Neural_Language_Models_by_Sparse_Word_Representations.pdf pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|2016/9/30 || &lt;br /&gt;
* Jiyuan Zhang &lt;br /&gt;
*Shiyue Zhang&lt;br /&gt;
||&lt;br /&gt;
*On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fa/On-line_Active_Reward_Learning_for_Policy_Optimisation_in_Spoken_Dialogue_Systems.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bb/On-line_Active_Reward.pdf slides]]&lt;br /&gt;
*Stack-propagation: Improved Representation Learning for Syntax&lt;br /&gt;
[[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bf/Stack-propagation-_Improved_Representation_Learning_for_Syntax.pdf pdf]] [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5d/Stack_propagation.pdf slides]]&lt;br /&gt;
|-&lt;br /&gt;
|2017/5/18 || &lt;br /&gt;
*Shiyue Zhang&lt;br /&gt;
||&lt;br /&gt;
*Convolutional Sequence to Sequence Learning&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-15</id>
		<title>NLP Status Report 2017-5-15</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-15"/>
				<updated>2017-05-15T04:37:49Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* got the M-NMT result: 28.92 BLEU (+2.2 over the 26.73 baseline) &lt;br /&gt;
* trained word2vec on the big zh-uy data&lt;br /&gt;
* tested the NMT baseline on sentences whose references contain UNK: BLEU=34.10 (better than MOSES=33.10), which suggests UNK is the biggest problem in NMT&lt;br /&gt;
* found a problem in the dataset: some sentences are reversed &lt;br /&gt;
||&lt;br /&gt;
* test the embedding-untrained model&lt;br /&gt;
* refine the embedding-untrained model&lt;br /&gt;
* fix the reversed-sentence problem and rerun MOSES, NMT, and M-NMT&lt;br /&gt;
* implement the UNK model &lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
* configured the environment and ran the tf_translate code &lt;br /&gt;
* read machine translation papers &lt;br /&gt;
* studied the LSTM and seq2seq models&lt;br /&gt;
||&lt;br /&gt;
* learn the implementation of the seq2seq model&lt;br /&gt;
* read the tf_translate code &lt;br /&gt;
* understand the main parts of the code&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-15</id>
		<title>NLP Status Report 2017-5-15</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-15"/>
				<updated>2017-05-15T01:38:00Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* got the M-NMT result: 28.92 BLEU (+2.2 over the 26.73 baseline) &lt;br /&gt;
* trained word2vec on the big zh-uy data&lt;br /&gt;
* tested the NMT baseline on sentences whose references contain UNK: BLEU=34.10 (better than MOSES=33.10), which suggests UNK is the biggest problem in NMT&lt;br /&gt;
* found a problem in the dataset: some sentences are reversed &lt;br /&gt;
||&lt;br /&gt;
* test the embedding-untrained model&lt;br /&gt;
* fix the reversed-sentence problem and rerun MOSES, NMT, and M-NMT&lt;br /&gt;
* implement the UNK model &lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-15</id>
		<title>NLP Status Report 2017-5-15</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-5-15"/>
				<updated>2017-05-15T01:30:45Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy: Created page with "{| class=&amp;quot;wikitable&amp;quot; !Date !! People !! Last Week !! This Week |- | rowspan=&amp;quot;6&amp;quot;|2017/4/5 |Jiyuan Zhang || ||  |- |Aodong LI ||  || |- |Shiyue Zhang ||  || |- |Shipan..."&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Aodong LI ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shipan Ren ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Bi-monthly-2017-04-language</id>
		<title>Bi-monthly-2017-04-language</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Bi-monthly-2017-04-language"/>
				<updated>2017-05-09T04:46:01Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Team [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a8/Nlp_team_bi_monthly_report.pdf]&lt;br /&gt;
&lt;br /&gt;
Jiyuan Zhang [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e4/Bi-monthly_report_zhangjy.pdf]&lt;br /&gt;
&lt;br /&gt;
Shiyue Zhang [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/68/Bi-monthly_report_17.01-04.pdf]&lt;br /&gt;
&lt;br /&gt;
Aodong Li&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Bi-monthly_report_17.01-04.pdf</id>
		<title>文件:Bi-monthly report 17.01-04.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Bi-monthly_report_17.01-04.pdf"/>
				<updated>2017-05-09T04:43:47Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-17</id>
		<title>NLP Status Report 2017-4-17</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-17"/>
				<updated>2017-05-03T02:55:00Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
*ran the ppg model with different datasets&lt;br /&gt;
*checked the EMNLP paper&lt;br /&gt;
|| &lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* finished the EMNLP paper by the deadline&lt;br /&gt;
||&lt;br /&gt;
* on leave&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-10</id>
		<title>NLP Status Report 2017-4-10</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-10"/>
				<updated>2017-05-03T02:52:28Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/4/5&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
* Got the sampled 1M (100w) sentences of good data and ran Moses (BLEU: 30.6)&lt;br /&gt;
* Reimplemented the ACL idea (adding some optimization to the previous code) and checked the performance in the following gradual steps: 1. use s_i-1 as the memory query; 2. use s_i-1+c_i as the memory query; 3. use y as the memory states for attention; 4. use y + smt_attentions * h as the memory states for attention.&lt;br /&gt;
* ran experiments for the above steps, but the loss was inf; looking into the cause.&lt;br /&gt;
||&lt;br /&gt;
*do experiments and write the paper&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
*converted the paper to the EMNLP style&lt;br /&gt;
*contacted the ppg's author to get the code&lt;br /&gt;
|| &lt;br /&gt;
*improve the performance of qx's model&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
*revised the original OOV model so that it can automatically detect OOV words and translate them &lt;br /&gt;
*dealt first with the case where the source word is OOV but the target word is not&lt;br /&gt;
*it didn't predict correctly yet&lt;br /&gt;
||&lt;br /&gt;
*make the model work as intended&lt;br /&gt;
*handle the case where both the source and target words are OOV, then the other cases&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* working on a paper for EMNLP &lt;br /&gt;
||&lt;br /&gt;
* working on a paper for EMNLP &lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-5</id>
		<title>NLP Status Report 2017-4-5</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-5"/>
				<updated>2017-04-05T02:02:27Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/3/27&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
*tested the baseline but could not get a reasonable result.&lt;br /&gt;
*debugged the baseline to try to reproduce the good result, but failed.&lt;br /&gt;
*fixed the nan problem in the alpha-gamma method, but the result is not good.&lt;br /&gt;
*changed the probability calculation for the alpha-gamma method, but the result is not good either.&lt;br /&gt;
*ran Moses for cwmt zh-en translation, but the training data is case-sensitive, so it needs to be rerun.&lt;br /&gt;
||&lt;br /&gt;
*rerun Moses for cwmt zh-en and cs-en&lt;br /&gt;
*decide whether to use TensorFlow or Theano&lt;br /&gt;
*run experiments on the chosen platform&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
* did nothing (my ACL score was borderline, so I wasn't in the mood to work)&lt;br /&gt;
|| &lt;br /&gt;
*improve the performance of qx's model&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
*fixed the bug; it turned out to arise from unfamiliarity with the numpy.resize() function&lt;br /&gt;
*the demo model can handle the OOV problem (both the source and target words are OOV)&lt;br /&gt;
||&lt;br /&gt;
*some paperwork for the graduation design&lt;br /&gt;
*run some experiments with Theano on the old dataset and the new zh2en data from lihang&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* got a reasonable baseline on big zhen data&lt;br /&gt;
||&lt;br /&gt;
* implement the mem model on this baseline and test it on big data&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-5</id>
		<title>NLP Status Report 2017-4-5</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-4-5"/>
				<updated>2017-04-05T02:01:54Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/3/27&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
*tested the baseline but could not get a reasonable result.&lt;br /&gt;
*debugged the baseline to try to reproduce the good result, but failed.&lt;br /&gt;
*fixed the nan problem in the alpha-gamma method, but the result is not good.&lt;br /&gt;
*changed the probability calculation for the alpha-gamma method, but the result is not good either.&lt;br /&gt;
*ran Moses for cwmt zh-en translation, but the training data is case-sensitive, so it needs to be rerun.&lt;br /&gt;
||&lt;br /&gt;
*rerun Moses for cwmt zh-en and cs-en&lt;br /&gt;
*decide whether to use TensorFlow or Theano&lt;br /&gt;
*run experiments on the chosen platform&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
* did nothing (my ACL score was borderline, so I wasn't in the mood to work)&lt;br /&gt;
|| &lt;br /&gt;
*improve the performance of qx's model&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
*fixed the bug; it turned out to arise from unfamiliarity with the numpy.resize() function&lt;br /&gt;
*the demo model can handle the OOV problem (both the source and target words are OOV)&lt;br /&gt;
||&lt;br /&gt;
*some paperwork for the graduation design&lt;br /&gt;
*run some experiments with Theano on the old dataset and the new zh2en data from lihang&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* got a reasonable baseline on big zhen data&lt;br /&gt;
||&lt;br /&gt;
* implement the mem model on this baseline and test it on big data&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-3-27</id>
		<title>NLP Status Report 2017-3-27</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-3-27"/>
				<updated>2017-04-05T00:41:40Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/3/27&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
*tested for the baseline but cannot get the reasonable result.&lt;br /&gt;
*debug the baseline to try to reproduce the good result but failed.&lt;br /&gt;
*fixed the problem of nan in alpha-gamma method but the result is not good.&lt;br /&gt;
*changed the calculation of probability for alpha-gamma method but the result is neither good.&lt;br /&gt;
*ran Moses for cwmt zh-en translation, but the training data is case-sensitive, so need to rerun.&lt;br /&gt;
||&lt;br /&gt;
*rerun Moses for cwmt zh-en and cs-en&lt;br /&gt;
*decide to use tensorflow or theano&lt;br /&gt;
*run experiments based on the chosen platform&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* got good results from gnmt&lt;br /&gt;
* but haven't found a way to implement our model on gnmt&lt;br /&gt;
* trying to modify our code to make it work on big data&lt;br /&gt;
||&lt;br /&gt;
* keep trying to modify our code to make it work on big data&lt;br /&gt;
* keep looking into the gnmt code&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-3-20</id>
		<title>NLP Status Report 2017-3-20</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-3-20"/>
				<updated>2017-04-05T00:35:05Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/3/20&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
*went through the code and made different attempts, and managed to produce the good result (47--&gt;50 on the small zh-en data set)&lt;br /&gt;
*wrote the cross-entropy function for the alpha-gamma method, but found it differs from the built-in method.&lt;br /&gt;
*switched to the built-in soft-cross-entropy method and ran experiments&lt;br /&gt;
||&lt;br /&gt;
*get the result for alpha-gamma method&lt;br /&gt;
*run experiments on the big data&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* learned to use Google's new seq2seq code (gnmt)&lt;br /&gt;
* ran gnmt on en-de, small zh-en, cs-en, and big zh-en&lt;br /&gt;
||&lt;br /&gt;
* run gnmt on new big zh-en&lt;br /&gt;
* try to figure out how to implement our model on gnmt&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-3-13</id>
		<title>NLP Status Report 2017-3-13</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-3-13"/>
				<updated>2017-03-13T08:01:36Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/1/3&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
* tested and analyzed the results on the cs-en data set (30.4 on the heldout training set and 7.3 on the dev set);&lt;br /&gt;
* added masks to the baseline (44.4 on cn-en);&lt;br /&gt;
* added encoder masks and memory masks to the alpha-gamma method and fixed the bugs; got an improvement of 0.5 against the masked baseline [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b8/Nmt_mn_report_continue.pdf report]];&lt;br /&gt;
* to avoid doing softmax twice, rewrote the softmax_cross_entropy function myself (still training).&lt;br /&gt;
||&lt;br /&gt;
* analyze and improve the alpha-gamma method.&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
*finished reproducing the planning neural network&lt;br /&gt;
*chose the best attention_memory model for huilian and ran it on the big training dataset (about 370k) [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b9/Model_with_different_dataset.pdf  result]&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
*Keyword expansion model&lt;br /&gt;
*collect more poems from the Internet&lt;br /&gt;
*recruiting&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* added the trained memory-attention model to the neural model (43.0) and got a 2+ BLEU gain (45.19), but it needs more validation and improvement&lt;br /&gt;
* ran the baseline model on the cs-en data and found it was good on the train set but poor on the test set.&lt;br /&gt;
* ran the baseline model on the en-fr data and found an 'inf' problem.&lt;br /&gt;
* fixed the 'inf' problem by debugging the code of the mask-added baseline model.&lt;br /&gt;
* running on the cs-en and en-fr data again.&lt;br /&gt;
||&lt;br /&gt;
* go on with the baseline on big data: get results on the cs-en and en-fr data, train on the zh-en data from [http://www.statmt.org/wmt17/translation-task.html#download WMT17]&lt;br /&gt;
* go on refining the memory attention model: retrain to find out whether the 2+ gain is just by chance, try more memory attention structures (relu, a(t-1), y(t-1)...)&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2017-3-6</id>
		<title>2017-3-6</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2017-3-6"/>
				<updated>2017-03-06T03:03:26Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/1/3&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
* added source masks in attention_decoder, where attention is calculated, and in gru_cell, where new states are calculated.&lt;br /&gt;
* found the sentence_length attribute; perhaps it works better than my code&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* figured out the problem with attention: the initial value of V should be around 0&lt;br /&gt;
* tested different modifications, such as adding masks and initializing b with 0. &lt;br /&gt;
* Compared the results and concluded that changing only the initial value of V is best.&lt;br /&gt;
||&lt;br /&gt;
* try to get right attention on memory&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2017-3-6</id>
		<title>2017-3-6</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2017-3-6"/>
				<updated>2017-03-06T03:02:45Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy: Created page with "{| class=&amp;quot;wikitable&amp;quot; !Date !! People !! Last Week !! This Week |- | rowspan=&amp;quot;6&amp;quot;|2017/1/3 |Yang Feng || || |- |Jiyuan Zhang ||  ||   |- |Andi Zhang ||  ||  |- |Shiyue..."&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/1/3&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* figured out the problem with attention: the initial value of V should be around 0&lt;br /&gt;
* tested different modifications, such as adding masks and initializing b with 0; compared the results and concluded that changing only the initial value of V is best.&lt;br /&gt;
||&lt;br /&gt;
* try to get right attention on memory&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Status_report</id>
		<title>Status report</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Status_report"/>
				<updated>2017-03-06T03:01:24Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[2017-3-6]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-27]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-20]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-13]]&lt;br /&gt;
&lt;br /&gt;
[[2017-2-6]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-30]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-23]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-16]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-10]]&lt;br /&gt;
&lt;br /&gt;
[[2017-1-3]]&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-2-20</id>
		<title>NLP Status Report 2017-2-20</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-2-20"/>
				<updated>2017-03-06T02:56:28Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/1/3&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
* prepare for IJCAI.&lt;br /&gt;
||&lt;br /&gt;
* improve baseline.&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* prepare for IJCAI.&lt;br /&gt;
||&lt;br /&gt;
* try to figure out the attention problem&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-2-13</id>
		<title>NLP Status Report 2017-2-13</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/NLP_Status_Report_2017-2-13"/>
				<updated>2017-03-06T01:13:15Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date !! People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot;|2017/1/3&lt;br /&gt;
|Yang Feng ||&lt;br /&gt;
* prepare for ACL and IJCAI.&lt;br /&gt;
||&lt;br /&gt;
*prepare for IJCAI&lt;br /&gt;
|-&lt;br /&gt;
|Jiyuan Zhang ||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Andi Zhang ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || &lt;br /&gt;
* prepare for ACL and IJCAI.&lt;br /&gt;
||&lt;br /&gt;
*prepare for IJCAI&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao ||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2017-03-02T00:29:48Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：/* Daily Report */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=NLP Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
&lt;br /&gt;
* Yang Feng (冯洋)&lt;br /&gt;
* Jiyuan Zhang （张记袁）&lt;br /&gt;
* Aodong Li (李傲冬)&lt;br /&gt;
* Andi Zhang (张安迪)&lt;br /&gt;
* Shiyue Zhang (张诗悦)&lt;br /&gt;
* Li Gu (古丽)&lt;br /&gt;
* Peilun Xiao (肖培伦)&lt;br /&gt;
&lt;br /&gt;
===Former Members===&lt;br /&gt;
* '''Chao Xing (邢超)'''     :  FreeNeb&lt;br /&gt;
* '''Rong Liu (刘荣)'''      :  Youku&lt;br /&gt;
* '''Xiaoxi Wang (王晓曦)''' :  Turing Robot&lt;br /&gt;
* '''Xi Ma (马习)'''         :  graduate student at Tsinghua University&lt;br /&gt;
* '''Tianyi Luo (骆天一)'''  :  PhD candidate at the University of California, Santa Cruz&lt;br /&gt;
* '''Qixin Wang (王琪鑫)'''  :  MA candidate at the University of California&lt;br /&gt;
* '''DongXu Zhang (张东旭)''': --&lt;br /&gt;
* '''Yiqiao Pan (潘一桥)'''  :  MA candidate at the University of Sydney &lt;br /&gt;
* '''Shiyao Li （李诗瑶）''' :  BUPT&lt;br /&gt;
* '''Aiting Liu (刘艾婷)'''  :  BUPT&lt;br /&gt;
&lt;br /&gt;
==Work Progress==&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/5&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/6&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/7&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/8&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||11:30 || 20:00|| 6+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/9&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/10&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/11&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||14:30 || 20:00|| 5+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/13&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/14&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/15&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/16&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/17&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/18&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/19&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/21&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* found the reason the loss does not go down&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/22&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:00 || 20:00|| 9+||&lt;br /&gt;
* use cos to compute alignments, and the loss can go down&lt;br /&gt;
* replace the original attention with cos attention, with partial and full training&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot;|2017/2/27&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || 9:30  || 19:00|| 8+ ||&lt;br /&gt;
* find the tanh linearity problem&lt;br /&gt;
* use 20*cos to replace tanh, get 44.8 BLEU&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot;|2017/2/28&lt;br /&gt;
|Andy Zhang|| 12:00||19:00 || 7|| &lt;br /&gt;
*read the Theano NMT code to find out how it generates encoder input masks&lt;br /&gt;
*did some coding of input masks on baseline_beam; needs further testing&lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang || 9:30 || 19:00 || 8 +||&lt;br /&gt;
* add encoder and attention masks&lt;br /&gt;
* try to find the problem with the mem attention&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
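The cos-attention notes in the table above (computing alignments with cosine similarity, and replacing tanh scoring with 20*cos) can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's actual code; the function names and the fixed scale of 20 (from the "use 20*cos to replace tanh" entry) are only for demonstration.

```python
import math

def cos_score(query, key, scale=20.0):
    """Scaled cosine similarity between a query and a key vector.

    Replaces the usual tanh-based additive score; the scale factor
    sharpens the softmax, since raw cosine values lie in [-1, 1].
    """
    dot = sum(q * k for q, k in zip(query, key))
    nq = math.sqrt(sum(q * q for q in query))
    nk = math.sqrt(sum(k * k for k in key))
    return scale * dot / (nq * nk + 1e-8)

def attention_weights(query, keys):
    """Softmax over cosine scores, i.e. a soft alignment over source positions."""
    scores = [cos_score(query, k) for k in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With a query identical to one key and orthogonal to another, almost all attention mass lands on the matching position, which is the sharpening effect the scale factor is meant to provide.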
===Time Off Table===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Yang Feng !! Jiyuan Zhang &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Past progress==&lt;br /&gt;
[[nlp-progress 2017/01]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/12]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/11]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/10]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/09]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/08]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/05/01 -- 08/16 | nlp-progress 2016/05-07]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/04]]&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Aligns_cos.txt</id>
		<title>文件:Aligns cos.txt</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Aligns_cos.txt"/>
				<updated>2017-02-23T09:10:58Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2017-02-23T04:29:07Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：/* Daily Report */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=NLP Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
&lt;br /&gt;
* Yang Feng (冯洋)&lt;br /&gt;
* Jiyuan Zhang （张记袁）&lt;br /&gt;
* Aodong Li (李傲冬)&lt;br /&gt;
* Andi Zhang (张安迪)&lt;br /&gt;
* Shiyue Zhang (张诗悦)&lt;br /&gt;
* Li Gu (古丽)&lt;br /&gt;
* Peilun Xiao (肖培伦)&lt;br /&gt;
&lt;br /&gt;
===Former Members===&lt;br /&gt;
* '''Chao Xing (邢超)'''     :  FreeNeb&lt;br /&gt;
* '''Rong Liu (刘荣)'''      :  Youku (优酷)&lt;br /&gt;
* '''Xiaoxi Wang (王晓曦)''' :  Turing Robot (图灵机器人)&lt;br /&gt;
* '''Xi Ma (马习)'''         :  graduate student at Tsinghua University&lt;br /&gt;
* '''Tianyi Luo (骆天一)'''  :  PhD candidate at the University of California, Santa Cruz&lt;br /&gt;
* '''Qixin Wang (王琪鑫)'''  :  MA candidate at the University of California&lt;br /&gt;
* '''DongXu Zhang (张东旭)''': --&lt;br /&gt;
* '''Yiqiao Pan (潘一桥)'''  :  MA candidate at the University of Sydney&lt;br /&gt;
* '''Shiyao Li （李诗瑶）''' :  BUPT&lt;br /&gt;
* '''Aiting Liu (刘艾婷)'''  :  BUPT&lt;br /&gt;
&lt;br /&gt;
==Work Progress==&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person !! Start !! Leave !! Hours !! Status&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/5&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/6&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/7&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/8&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||11:30 || 20:00|| 6+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/9&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/10&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/11&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||14:30 || 20:00|| 5+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/13&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/14&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/15&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/16&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/17&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/18&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/19&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* prepare for IJCAI paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/21&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:30 || 20:00|| 9+||&lt;br /&gt;
* found the reason the loss does not go down&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/22&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:00 || 20:00|| 9+||&lt;br /&gt;
* use cos to compute alignments, and the loss can go down&lt;br /&gt;
* replace the original attention with cos attention, with partial and full training&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Yang Feng !! Jiyuan Zhang &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Past progress==&lt;br /&gt;
[[nlp-progress 2017/01]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/12]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/11]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/10]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/09]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/08]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/05/01 -- 08/16 | nlp-progress 2016/05-07]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/04]]&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Schedule</id>
		<title>Schedule</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Schedule"/>
				<updated>2017-02-23T04:20:56Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangsy：/* Daily Report */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=NLP Schedule=&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
&lt;br /&gt;
===Current Members===&lt;br /&gt;
&lt;br /&gt;
* Yang Feng (冯洋)&lt;br /&gt;
* Jiyuan Zhang （张记袁）&lt;br /&gt;
* Aodong Li (李傲冬)&lt;br /&gt;
* Andi Zhang (张安迪)&lt;br /&gt;
* Shiyue Zhang (张诗悦)&lt;br /&gt;
* Li Gu (古丽)&lt;br /&gt;
* Peilun Xiao (肖培伦)&lt;br /&gt;
&lt;br /&gt;
===Former Members===&lt;br /&gt;
* '''Chao Xing (邢超)'''     :  FreeNeb&lt;br /&gt;
* '''Rong Liu (刘荣)'''      :  Youku (优酷)&lt;br /&gt;
* '''Xiaoxi Wang (王晓曦)''' :  Turing Robot (图灵机器人)&lt;br /&gt;
* '''Xi Ma (马习)'''         :  graduate student at Tsinghua University&lt;br /&gt;
* '''Tianyi Luo (骆天一)'''  :  PhD candidate at the University of California, Santa Cruz&lt;br /&gt;
* '''Qixin Wang (王琪鑫)'''  :  MA candidate at the University of California&lt;br /&gt;
* '''DongXu Zhang (张东旭)''': --&lt;br /&gt;
* '''Yiqiao Pan (潘一桥)'''  :  MA candidate at the University of Sydney&lt;br /&gt;
* '''Shiyao Li （李诗瑶）''' :  BUPT&lt;br /&gt;
* '''Aiting Liu (刘艾婷)'''  :  BUPT&lt;br /&gt;
&lt;br /&gt;
==Work Progress==&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person !! Start !! Leave !! Hours !! Status&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/5&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:00 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017/2/5&lt;br /&gt;
|Andy Zhang|| || || || &lt;br /&gt;
|-&lt;br /&gt;
|Shiyue Zhang ||9:00 || 20:00|| 9+||&lt;br /&gt;
* prepare for ACL paper&lt;br /&gt;
|-&lt;br /&gt;
|Peilun Xiao || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|Guli || || || ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Yang Feng !! Jiyuan Zhang &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Past progress==&lt;br /&gt;
[[nlp-progress 2017/01]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/12]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/11]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/10]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/09]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/08]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/05/01 -- 08/16 | nlp-progress 2016/05-07]]&lt;br /&gt;
&lt;br /&gt;
[[nlp-progress 2016/04]]&lt;/div&gt;</summary>
		<author><name>Zhangsy</name></author>	</entry>

	</feed>