<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://index.cslt.org/mediawiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="zh-cn">
		<id>http://index.cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Yujiawei</id>
		<title>cslt Wiki - User contributions [zh-cn]</title>
		<link rel="self" type="application/atom+xml" href="http://index.cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Yujiawei"/>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E7%89%B9%E6%AE%8A:%E7%94%A8%E6%88%B7%E8%B4%A1%E7%8C%AE/Yujiawei"/>
		<updated>2026-05-06T19:29:30Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.23.3</generator>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2018</id>
		<title>2018</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2018"/>
				<updated>2019-10-04T12:19:46Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：/* VPR */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;2018-2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ASR ==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:语音识别系统.pdf  | 181107-吴嘉瑶-Overview of ASR]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Unsupervised_pre-training_for_speech_recognition.pdf 190515-董文伟-Unsupervised_pre-training_for_speech_recognition]&lt;br /&gt;
&lt;br /&gt;
==VPR==&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4a/Yjw_SRE_pre.pdf 181107-于嘉威-Overview of VPR]&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1808.00158.pdf 181114-VPR from raw waveform]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:190306-zy-report.pptx | 190306-张阳 experiments report]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/I-vector_representation_based_on_GMM_and_DNN.pdf 190418-齐诏娣-I-vector_representation_based_on_GMM_and_DNN]&lt;br /&gt;
&lt;br /&gt;
==LRE==&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Zero-resource_LID.pdf 190529-于嘉威-Zero-resource-LID]&lt;br /&gt;
&lt;br /&gt;
==Scoring==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5c/190117-DWW-Scoring.pptx 190117-董文伟-Overview of Scoring]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bc/Kandeng-English-scoring.pdf 190425-邓侃-English Evaluation techniques]&lt;br /&gt;
&lt;br /&gt;
==Text generation==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1803.07133.pdf Overview-2018-Neural Text Generation: Past, Present and Beyond]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conversational system==&lt;br /&gt;
*[https://arxiv.org/pdf/1809.08267.pdf Overview-2018-Neural Approaches to Conversational AI: Question Answering, Task-Oriented Dialogue and Chatbots: A Unified View] [https://www.microsoft.com/en-us/research/uploads/prod/2018/07/neural-approaches-to-conversational-AI.pdf slides]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Deep architecture and mechanism==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1510.00149.pdf 181114-deep compression]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Tensor factorization neural net.pdf | 181212-何丹-Tensor factorization neural net]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Ensemble_2019.5.8.pdf | 190508-吴嘉瑶-ensemble of NN]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Knowledge_distillation_19.5.29.pdf | 190529-吴嘉瑶-knowledge distillation]]&lt;br /&gt;
&lt;br /&gt;
==Learning theory==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181205 Meta-Learning and Zero-Shot Learning JXQ.pdf | 181205 姜修齐 Meta-Learning and Zero-Shot Learning]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Platform and tool==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181116-张阳-Conda_&amp;amp;_Python.pdf | 181116-张阳-Conda &amp;amp; Python]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181117-张阳-Linux.pdf | 181117-张阳-Linux]]&lt;br /&gt;
&lt;br /&gt;
*[https://pan.baidu.com/s/13qf-GqOSE4DK7q5VjbtWNA    PyTorch 1.0 - Bringing research and production together Presentation]&lt;br /&gt;
&lt;br /&gt;
==NLP language model==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/07/Bert%E7%AE%80%E4%BB%8B.pdf Introduction to the BERT model]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6f/Punc_prediction_%E6%80%BB%E7%BB%93.pdf BERT-based punctuation prediction experiment summary]&lt;br /&gt;
&lt;br /&gt;
==Medical Image==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/b1/%E7%AD%94%E8%BE%A92.pdf 190522-刘逸博-AI-based breast cancer diagnosis]&lt;br /&gt;
&lt;br /&gt;
==User Guide of Fault Detection==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/32/How_to_let_the_fault_detection_work_on_your_Raspberry_Pi.pdf 190802-孙浩然-Raspberry Pi setup guide for fault detection]&lt;br /&gt;
&lt;br /&gt;
==User Guide of Cry Detection==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/8c/Cry_detection_guide.pdf 190918-武烜宇-User guide for SVM-based cry detection]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Yjw_SRE_pre.pdf</id>
		<title>文件:Yjw SRE pre.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Yjw_SRE_pre.pdf"/>
				<updated>2019-10-04T12:18:04Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-31</id>
		<title>2019-05-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-31"/>
				<updated>2019-05-31T00:29:16Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*prepared for the weekly report&lt;br /&gt;
*researched different sampling methods&lt;br /&gt;
||&lt;br /&gt;
* evaluate the model based on &amp;quot;frame accuracy&amp;quot;&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* prepared for the weekly report&lt;br /&gt;
* ran some experiments on domain adaptation&lt;br /&gt;
|| &lt;br /&gt;
* continue exploring domain adaptation&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* prepared for the weekly report.&lt;br /&gt;
* completed the in-set experiments for different test utterance lengths (1s, 3s).&lt;br /&gt;
|| &lt;br /&gt;
* test the verification results.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* tried speaker-level scoring&lt;br /&gt;
* trained InfoGAN on a large dataset&lt;br /&gt;
|| &lt;br /&gt;
* test InfoGAN and train it with different native-background data&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haoran Sun&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-31</id>
		<title>2019-05-31</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-31"/>
				<updated>2019-05-30T23:59:04Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*prepared for the weekly report&lt;br /&gt;
*researched different sampling methods&lt;br /&gt;
||&lt;br /&gt;
* evaluate the model based on &amp;quot;frame accuracy&amp;quot;&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* prepared for the weekly report.&lt;br /&gt;
* completed the in-set experiments for different test utterance lengths (1s, 3s).&lt;br /&gt;
|| &lt;br /&gt;
* test performance for different enrollment utterance lengths and verification results.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haoran Sun&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2018</id>
		<title>2018</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2018"/>
				<updated>2019-05-29T13:20:48Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;2018-2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ASR ==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-ASR-WJY.pptx | 181107-吴嘉瑶-Overview of ASR]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Unsupervised_pre-training_for_speech_recognition.pdf 190515-董文伟-Unsupervised_pre-training_for_speech_recognition]&lt;br /&gt;
&lt;br /&gt;
==VPR==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-SRE-YJW.pptx | 181107-于嘉威-Overview of VPR]]&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1808.00158.pdf 181114-VPR from raw waveform]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:190306-zy-report.pptx | 190306-张阳 experiments report]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/I-vector_representation_based_on_GMM_and_DNN.pdf 190418-齐诏娣-I-vector_representation_based_on_GMM_and_DNN]&lt;br /&gt;
&lt;br /&gt;
==LRE==&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Zero-resource_LID.pdf 190529-于嘉威-Zero-resource-LID]&lt;br /&gt;
&lt;br /&gt;
==Scoring==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5c/190117-DWW-Scoring.pptx 190117-董文伟-Overview of Scoring]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bc/Kandeng-English-scoring.pdf 190425-邓侃-English Evaluation techniques]&lt;br /&gt;
&lt;br /&gt;
==Text generation==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1803.07133.pdf Overview-2018-Neural Text Generation: Past, Present and Beyond]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conversational system==&lt;br /&gt;
*[https://arxiv.org/pdf/1809.08267.pdf Overview-2018-Neural Approaches to Conversational AI: Question Answering, Task-Oriented Dialogue and Chatbots: A Unified View] [https://www.microsoft.com/en-us/research/uploads/prod/2018/07/neural-approaches-to-conversational-AI.pdf slides]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Deep architecture and mechanism==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1510.00149.pdf 181114-deep compression]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Tensor factorization neural net.pdf | 181212-何丹-Tensor factorization neural net]]&lt;br /&gt;
&lt;br /&gt;
==Learning theory==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181205 Meta-Learning and Zero-Shot Learning JXQ.pdf | 181205 姜修齐 Meta-Learning and Zero-Shot Learning]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Platform and tool==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181116-张阳-Conda_&amp;amp;_Python.pdf | 181116-张阳-Conda &amp;amp; Python]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181117-张阳-Linux.pdf | 181117-张阳-Linux]]&lt;br /&gt;
&lt;br /&gt;
*[https://pan.baidu.com/s/13qf-GqOSE4DK7q5VjbtWNA    PyTorch 1.0 - Bringing research and production together Presentation]&lt;br /&gt;
&lt;br /&gt;
==NLP language model==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/07/Bert%E7%AE%80%E4%BB%8B.pdf Introduction to the BERT model]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6f/Punc_prediction_%E6%80%BB%E7%BB%93.pdf BERT-based punctuation prediction experiment summary]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2018</id>
		<title>2018</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2018"/>
				<updated>2019-05-29T13:20:30Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;2018-2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ASR ==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-ASR-WJY.pptx | 181107-吴嘉瑶-Overview of ASR]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Unsupervised_pre-training_for_speech_recognition.pdf 190515-董文伟-Unsupervised_pre-training_for_speech_recognition]&lt;br /&gt;
&lt;br /&gt;
==VPR==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-SRE-YJW.pptx | 181107-于嘉威-Overview of VPR]]&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1808.00158.pdf 181114-VPR from raw waveform]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:190306-zy-report.pptx | 190306-张阳 experiments report]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/I-vector_representation_based_on_GMM_and_DNN.pdf 190418-齐诏娣-I-vector_representation_based_on_GMM_and_DNN]&lt;br /&gt;
&lt;br /&gt;
==LRE==&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Zero-resource_LID.pdf 190529-于嘉威-Zero-resource-LID]&lt;br /&gt;
&lt;br /&gt;
==Scoring==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5c/190117-DWW-Scoring.pptx 190117-董文伟-Overview of Scoring]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bc/Kandeng-English-scoring.pdf 190425-邓侃-English Evaluation techniques]&lt;br /&gt;
&lt;br /&gt;
==Text generation==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1803.07133.pdf Overview-2018-Neural Text Generation: Past, Present and Beyond]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conversational system==&lt;br /&gt;
*[https://arxiv.org/pdf/1809.08267.pdf Overview-2018-Neural Approaches to Conversational AI: Question Answering, Task-Oriented Dialogue and Chatbots: A Unified View] [https://www.microsoft.com/en-us/research/uploads/prod/2018/07/neural-approaches-to-conversational-AI.pdf slides]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Deep architecture and mechanism==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1510.00149.pdf 181114-deep compression]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Tensor factorization neural net.pdf | 181212-何丹-Tensor factorization neural net]]&lt;br /&gt;
&lt;br /&gt;
==Learning theory==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181205 Meta-Learning and Zero-Shot Learning JXQ.pdf | 181205 姜修齐 Meta-Learning and Zero-Shot Learning]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Platform and tool==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181116-张阳-Conda_&amp;amp;_Python.pdf | 181116-张阳-Conda &amp;amp; Python]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181117-张阳-Linux.pdf | 181117-张阳-Linux]]&lt;br /&gt;
&lt;br /&gt;
*[https://pan.baidu.com/s/13qf-GqOSE4DK7q5VjbtWNA    PyTorch 1.0 - Bringing research and production together Presentation]&lt;br /&gt;
&lt;br /&gt;
==NLP language model==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/07/Bert%E7%AE%80%E4%BB%8B.pdf Introduction to the BERT model]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6f/Punc_prediction_%E6%80%BB%E7%BB%93.pdf BERT-based punctuation prediction experiment summary]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2018</id>
		<title>2018</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2018"/>
				<updated>2019-05-29T13:20:12Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;2018-2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ASR ==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-ASR-WJY.pptx | 181107-吴嘉瑶-Overview of ASR]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Unsupervised_pre-training_for_speech_recognition.pdf 190515-董文伟-Unsupervised_pre-training_for_speech_recognition]&lt;br /&gt;
&lt;br /&gt;
==VPR==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-SRE-YJW.pptx | 181107-于嘉威-Overview of VPR]]&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1808.00158.pdf 181114-VPR from raw waveform]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:190306-zy-report.pptx | 190306-张阳 experiments report]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/I-vector_representation_based_on_GMM_and_DNN.pdf 190418-齐诏娣-I-vector_representation_based_on_GMM_and_DNN]&lt;br /&gt;
&lt;br /&gt;
==LRE==&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Zero-resource_LID.pdf 190529-于嘉威-Zero-resource-LID]&lt;br /&gt;
&lt;br /&gt;
==Scoring==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5c/190117-DWW-Scoring.pptx 190117-董文伟-Overview of Scoring]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bc/Kandeng-English-scoring.pdf 190425-邓侃-English Evaluation techniques]&lt;br /&gt;
&lt;br /&gt;
==Text generation==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1803.07133.pdf Overview-2018-Neural Text Generation: Past, Present and Beyond]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conversational system==&lt;br /&gt;
*[https://arxiv.org/pdf/1809.08267.pdf Overview-2018-Neural Approaches to Conversational AI: Question Answering, Task-Oriented Dialogue and Chatbots: A Unified View] [https://www.microsoft.com/en-us/research/uploads/prod/2018/07/neural-approaches-to-conversational-AI.pdf slides]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Deep architecture and mechanism==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1510.00149.pdf 181114-deep compression]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Tensor factorization neural net.pdf | 181212-何丹-Tensor factorization neural net]]&lt;br /&gt;
&lt;br /&gt;
==Learning theory==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181205 Meta-Learning and Zero-Shot Learning JXQ.pdf | 181205 姜修齐 Meta-Learning and Zero-Shot Learning]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Platform and tool==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181116-张阳-Conda_&amp;amp;_Python.pdf | 181116-张阳-Conda &amp;amp; Python]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181117-张阳-Linux.pdf | 181117-张阳-Linux]]&lt;br /&gt;
&lt;br /&gt;
*[https://pan.baidu.com/s/13qf-GqOSE4DK7q5VjbtWNA    PyTorch 1.0 - Bringing research and production together Presentation]&lt;br /&gt;
&lt;br /&gt;
==NLP language model==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/07/Bert%E7%AE%80%E4%BB%8B.pdf Introduction to the BERT model]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6f/Punc_prediction_%E6%80%BB%E7%BB%93.pdf BERT-based punctuation prediction experiment summary]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2018</id>
		<title>2018</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2018"/>
				<updated>2019-05-29T13:19:47Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;2018-2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ASR ==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-ASR-WJY.pptx | 181107-吴嘉瑶-Overview of ASR]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Unsupervised_pre-training_for_speech_recognition.pdf 190515-董文伟-Unsupervised_pre-training_for_speech_recognition]&lt;br /&gt;
&lt;br /&gt;
==VPR==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-SRE-YJW.pptx | 181107-于嘉威-Overview of VPR]]&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1808.00158.pdf 181114-VPR from raw waveform]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:190306-zy-report.pptx | 190306-张阳 experiments report]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/I-vector_representation_based_on_GMM_and_DNN.pdf 190418-齐诏娣-I-vector_representation_based_on_GMM_and_DNN]&lt;br /&gt;
&lt;br /&gt;
==LRE==&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/80/Zero-resource_LID.pdf 190529-于嘉威-Zero-resource-LID]&lt;br /&gt;
&lt;br /&gt;
==Scoring==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5c/190117-DWW-Scoring.pptx 190117-董文伟-Overview of Scoring]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bc/Kandeng-English-scoring.pdf 190425-邓侃-English Evaluation techniques]&lt;br /&gt;
&lt;br /&gt;
==Text generation==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1803.07133.pdf Overview-2018-Neural Text Generation: Past, Present and Beyond]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conversational system==&lt;br /&gt;
*[https://arxiv.org/pdf/1809.08267.pdf Overview-2018-Neural Approaches to Conversational AI: Question Answering, Task-Oriented Dialogue and Chatbots: A Unified View] [https://www.microsoft.com/en-us/research/uploads/prod/2018/07/neural-approaches-to-conversational-AI.pdf slides]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Deep architecture and mechanism==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1510.00149.pdf 181114-deep compression]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Tensor factorization neural net.pdf | 181212-何丹-Tensor factorization neural net]]&lt;br /&gt;
&lt;br /&gt;
==Learning theory==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181205 Meta-Learning and Zero-Shot Learning JXQ.pdf | 181205 姜修齐 Meta-Learning and Zero-Shot Learning]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Platform and tool==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181116-张阳-Conda_&amp;amp;_Python.pdf | 181116-张阳-Conda &amp;amp; Python]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181117-张阳-Linux.pdf | 181117-张阳-Linux]]&lt;br /&gt;
&lt;br /&gt;
*[https://pan.baidu.com/s/13qf-GqOSE4DK7q5VjbtWNA    PyTorch 1.0 - Bringing research and production together Presentation]&lt;br /&gt;
&lt;br /&gt;
==NLP language model==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/07/Bert%E7%AE%80%E4%BB%8B.pdf Introduction to the BERT model]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6f/Punc_prediction_%E6%80%BB%E7%BB%93.pdf BERT-based punctuation prediction experiment summary]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Zero-resource_LID.pdf</id>
		<title>文件:Zero-resource LID.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Zero-resource_LID.pdf"/>
				<updated>2019-05-29T13:16:42Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Zero-resource_LID_5.29.pptx</id>
		<title>文件:Zero-resource LID 5.29.pptx</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Zero-resource_LID_5.29.pptx"/>
				<updated>2019-05-29T13:15:34Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-24</id>
		<title>2019-05-24</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-24"/>
				<updated>2019-05-23T12:31:45Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Compiled the planning and training set data, unifying data formats&lt;br /&gt;
|| &lt;br /&gt;
* Re-plan training set using planning results&lt;br /&gt;
* Doing summary work&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Completed part of the i-vector and d-vector in-set LID experiments.&lt;br /&gt;
* Experimented with different test utterance lengths (1s, 3s, full length) for zero-resource LID.&lt;br /&gt;
|| &lt;br /&gt;
* Test the in-set languages for different test utterance lengths (1s, 3s).&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* -&lt;br /&gt;
|| &lt;br /&gt;
* Try speaker-level correlation.&lt;br /&gt;
* Collect English data from speakers with different native-language backgrounds to train InfoGAN.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
* Maths: statistical inference, integer programming&lt;br /&gt;
|| &lt;br /&gt;
* Continue learning statistics&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haoran Sun&lt;br /&gt;
||&lt;br /&gt;
* Completed the testing table for the ASR engine&lt;br /&gt;
|| &lt;br /&gt;
* Test the concurrency of the ASR engine&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-24</id>
		<title>2019-05-24</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-24"/>
				<updated>2019-05-23T12:29:36Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Compiled the planning and training set data, unifying data formats&lt;br /&gt;
|| &lt;br /&gt;
* Re-plan training set using planning results&lt;br /&gt;
* Doing summary work&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Tested part of the i-vector and d-vector in-set LID experiments.&lt;br /&gt;
* Experimented with different test utterance lengths (1s, 3s, full length) for zero-resource LID.&lt;br /&gt;
|| &lt;br /&gt;
* Test the in-set languages for different test utterance lengths (1s, 3s).&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* -&lt;br /&gt;
|| &lt;br /&gt;
* Try speaker-level correlation.&lt;br /&gt;
* Collect English data from speakers with different native-language backgrounds to train InfoGAN.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
* Maths: statistical inference, integer programming&lt;br /&gt;
|| &lt;br /&gt;
* Continue learning statistics&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haoran Sun&lt;br /&gt;
||&lt;br /&gt;
* Completed the testing table for the ASR engine&lt;br /&gt;
|| &lt;br /&gt;
* Test the concurrency of the ASR engine&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-17</id>
		<title>2019-05-17</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-17"/>
				<updated>2019-05-16T23:57:39Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Recompiled the 1031k dataset and retrained the Seq2seq model.&lt;br /&gt;
|| &lt;br /&gt;
* Justify the loss and fix the rhyme part.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*Ran some AdaBoost-on-ASR experiments and obtained some basic results&lt;br /&gt;
*Worked on the speech book&lt;br /&gt;
||&lt;br /&gt;
*Keep doing experiments on a larger dataset.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Ran experiments on multi-scale information.&lt;br /&gt;
|| &lt;br /&gt;
* Run experiments on multi-language BN&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Did data preparation for zero-resource language recognition&lt;br /&gt;
|| &lt;br /&gt;
* Continue improving the zero-resource recognition experiment&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
* Revised the speech book&lt;br /&gt;
* Made a list of 1,000 celebrities and downloaded videos of over 30 of them.&lt;br /&gt;
||&lt;br /&gt;
* Continue downloading and editing videos of another 150 celebrities.  &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haoran Sun&lt;br /&gt;
||&lt;br /&gt;
* Ran some tests on the Fn ASR engine&lt;br /&gt;
* Corrected some errors in the speech book&lt;br /&gt;
|| &lt;br /&gt;
* Continue testing the robustness of the Fn ASR engine&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-17</id>
		<title>2019-05-17</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-17"/>
				<updated>2019-05-16T23:57:21Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Recompiled the 1031k dataset and retrained the Seq2seq model.&lt;br /&gt;
|| &lt;br /&gt;
* Justify the loss and fix the rhyme part.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*Ran some AdaBoost-on-ASR experiments and obtained some basic results&lt;br /&gt;
*Worked on the speech book&lt;br /&gt;
||&lt;br /&gt;
*Keep doing experiments on a larger dataset.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Ran experiments on multi-scale information.&lt;br /&gt;
|| &lt;br /&gt;
* Run experiments on multi-language BN&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Did data preparation for zero-resource language recognition&lt;br /&gt;
|| &lt;br /&gt;
* Continue improving the zero-resource recognition experiment&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
* Revised the speech book&lt;br /&gt;
* Made a list of 1,000 celebrities and downloaded videos of over 30 of them.&lt;br /&gt;
||&lt;br /&gt;
* Continue downloading and editing videos of another 150 celebrities.  &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haoran Sun&lt;br /&gt;
||&lt;br /&gt;
* Ran some tests on the Fn ASR engine&lt;br /&gt;
* Corrected some errors in the speech book&lt;br /&gt;
|| &lt;br /&gt;
* Continue testing the robustness of the Fn ASR engine&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2018</id>
		<title>2018</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2018"/>
				<updated>2019-05-15T11:46:43Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;2018-2019&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ASR ==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-ASR-WJY.pptx | 181107-吴嘉瑶-Overview of ASR]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/38/Unsupervised_pre-training_for_speech_recognition.pdf 190515-董文伟-Unsupervised_pre-training_for_speech_recognition]&lt;br /&gt;
&lt;br /&gt;
==VPR==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181107-SRE-YJW.pptx | 181107-于嘉威-Overview of VPR]]&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1808.00158.pdf 181114-VPR from raw waveform]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:190306-zy-report.pptx | 190306-张阳 experiments report]]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/I-vector_representation_based_on_GMM_and_DNN.pdf 190418-齐诏娣-I-vector_representation_based_on_GMM_and_DNN]&lt;br /&gt;
&lt;br /&gt;
==Scoring==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/5c/190117-DWW-Scoring.pptx 190117-董文伟-Overview of Scoring]&lt;br /&gt;
&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/b/bc/Kandeng-English-scoring.pdf 190425-邓侃-English Evaluation techniques]&lt;br /&gt;
&lt;br /&gt;
==Text generation==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1803.07133.pdf Overview-2018-Neural Text Generation: Past, Present and Beyond]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conversational system==&lt;br /&gt;
*[https://arxiv.org/pdf/1809.08267.pdf Overview-2018-Neural Approaches to Conversational AI: Question Answering, Task-Oriented Dialogue and Chatbots: A Unified View] [https://www.microsoft.com/en-us/research/uploads/prod/2018/07/neural-approaches-to-conversational-AI.pdf slides]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Deep architecture and mechanism==&lt;br /&gt;
&lt;br /&gt;
*[https://arxiv.org/pdf/1510.00149.pdf 181114-deep compression]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:Tensor factorization neural net.pdf | 181212-何丹-Tensor factorization neural net]]&lt;br /&gt;
&lt;br /&gt;
==Learning theory==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181205 Meta-Learning and Zero-Shot Learning JXQ.pdf | 181205 姜修齐 Meta-Learning and Zero-Shot Learning]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Platform and tool==&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181116-张阳-Conda_&amp;amp;_Python.pdf | 181116-张阳-Conda &amp;amp; Python]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:181117-张阳-Linux.pdf | 181117-张阳-Linux]]&lt;br /&gt;
&lt;br /&gt;
*[https://pan.baidu.com/s/13qf-GqOSE4DK7q5VjbtWNA    PyTorch 1.0 - Bringing research and production together Presentation]&lt;br /&gt;
&lt;br /&gt;
==NLP language model==&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/07/Bert%E7%AE%80%E4%BB%8B.pdf   Bert模型简介]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6f/Punc_prediction_%E6%80%BB%E7%BB%93.pdf  bert based punctuation_prediction 实验总结]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Unsupervised_pre-training_for_speech_recognition.pdf</id>
		<title>文件:Unsupervised pre-training for speech recognition.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Unsupervised_pre-training_for_speech_recognition.pdf"/>
				<updated>2019-05-15T11:43:10Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Unsupervised_pre-training_for_speech_recognition.pptx</id>
		<title>文件:Unsupervised pre-training for speech recognition.pptx</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Unsupervised_pre-training_for_speech_recognition.pptx"/>
				<updated>2019-05-15T11:41:09Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.pdf</id>
		<title>文件:Phonetic attention.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.pdf"/>
				<updated>2019-05-10T06:25:54Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：Yujiawei上传“文件:Phonetic attention.pdf”的新版本&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-10</id>
		<title>2019-05-10</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-10"/>
				<updated>2019-05-10T06:19:19Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Merged code with Yibo and cleaned up the current Vivi&lt;br /&gt;
|| &lt;br /&gt;
* Polish up baselines and add extra regularizations to seq2seq loss&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*Made a summary of the formal experiments&lt;br /&gt;
*Prepared the weekly report&lt;br /&gt;
*Made a further attempt at AdaBoost on ASR with the chain model (to remove the influence of priors; the chain model has no priors)&lt;br /&gt;
||&lt;br /&gt;
*Try more AdaBoost-on-ASR experiments&lt;br /&gt;
*Revise the speech book&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Summarized the previous experiments and explored new methods.&lt;br /&gt;
|| &lt;br /&gt;
* Complete the multi-scale information experiment and revise the speech book&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Continued the phonetic attention experiment and wrote the experiment report.&lt;br /&gt;
|| &lt;br /&gt;
* ... &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* Used frame-level fbank features to train the InfoGAN and drew t-SNE plots&lt;br /&gt;
|| &lt;br /&gt;
*Use a GMM to learn the distributions of the different datasets&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
*   &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-05-10</id>
		<title>2019-05-10</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-05-10"/>
				<updated>2019-05-10T06:18:54Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! This Week !! Next Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Merged code with Yibo and cleaned up the current Vivi&lt;br /&gt;
|| &lt;br /&gt;
* Polish up baselines and add extra regularizations to seq2seq loss&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*Made a summary of the formal experiments&lt;br /&gt;
*Prepared the weekly report&lt;br /&gt;
*Made a further attempt at AdaBoost on ASR with the chain model (to remove the influence of priors; the chain model has no priors)&lt;br /&gt;
||&lt;br /&gt;
*Try more AdaBoost-on-ASR experiments&lt;br /&gt;
*Revise the speech book&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Summarized the previous experiments and explored new methods.&lt;br /&gt;
|| &lt;br /&gt;
* Complete the multi-scale information experiment and revise the speech book&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Continued the phonetic attention experiment and wrote the experiment report.&lt;br /&gt;
|| &lt;br /&gt;
* ... &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
* Used frame-level fbank features to train the InfoGAN and drew t-SNE plots&lt;br /&gt;
|| &lt;br /&gt;
*Use a GMM to learn the distributions of the different datasets&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xueyi Wang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ziya Zhou&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
*   &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Kaicheng Li&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Haolin Chen&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.pdf</id>
		<title>文件:Phonetic attention.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.pdf"/>
				<updated>2019-05-10T05:52:49Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-02-20</id>
		<title>2019-02-20</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-02-20"/>
				<updated>2019-02-20T04:35:15Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* Reconstructed the model in PyTorch.&lt;br /&gt;
||&lt;br /&gt;
* 1. Tune the parameters and train a model with good results.&lt;br /&gt;
* 2. Add the planning and polishing procedures.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*Finished several node-sparseness experiments&lt;br /&gt;
||&lt;br /&gt;
*Continue the experiments&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Finished the x-vector system&lt;br /&gt;
|| &lt;br /&gt;
* Improve the system, because the results are poor.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Got familiar with the code of the attention experiment.&lt;br /&gt;
* Modified the code to implement phonetic attention.&lt;br /&gt;
|| &lt;br /&gt;
* Finish the phonetic attention experiment.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*1. Used the ASR test set's ref.txt to test the original BERT and then compared it with the fine-tuned BERT.&lt;br /&gt;
*2. Compared the differences in hyp.text, which comes from the ASR test results, and investigated how to mask the errors in the ASR output sentences.&lt;br /&gt;
*3. Set a rule over all the results (hyp.text) to find the errors to be masked.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
*Continue to study the time complexity of TT-decomposition&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
*Compared i-vector+fbank features with fbank alone in GOP&lt;br /&gt;
|| &lt;br /&gt;
*Read papers to find new speaker adaptation methods&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-02-20</id>
		<title>2019-02-20</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-02-20"/>
				<updated>2019-02-20T02:40:33Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-02-04</id>
		<title>2019-02-04</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-02-04"/>
				<updated>2019-01-25T02:49:58Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;$ Happy new year :) $&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Day home !! Day back !! Life Tracking&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
January 25&lt;br /&gt;
||&lt;br /&gt;
February 13&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
January 31&lt;br /&gt;
|| &lt;br /&gt;
Expected February 13 (still trying to get a ticket)&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
January 30&lt;br /&gt;
|| &lt;br /&gt;
Expected February 11 (but no ticket secured yet)&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
January 30&lt;br /&gt;
|| &lt;br /&gt;
Expected February 14&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-02-04</id>
		<title>2019-02-04</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-02-04"/>
				<updated>2019-01-24T04:30:53Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;$ Happy new year :) $&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Day home !! Day back !! Life Tracking&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
January 30&lt;br /&gt;
|| &lt;br /&gt;
Expected February 10 (but no ticket secured yet)&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenwei Dong&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-23</id>
		<title>2019-01-23</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-23"/>
				<updated>2019-01-23T04:13:37Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Designed a better code structure for further experiments.&lt;br /&gt;
* Improved vivi2.0 and made some adjustments to the .sh script.&lt;br /&gt;
|| &lt;br /&gt;
* Build the code under the new structure.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* Ran node-sparseness experiments and updated the results on cvss&lt;br /&gt;
* Re-labeled some data&lt;br /&gt;
||&lt;br /&gt;
* Continue the pruning experiments&lt;br /&gt;
* Get familiar with PyTorch &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Wrote a TensorFlow learning document (not yet complete).&lt;br /&gt;
* Read some papers about attention and found some attention code on GitHub.&lt;br /&gt;
|| &lt;br /&gt;
* Try to run the attention code and figure out how it works.&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*1. Figured out how the BERT model creates the pretraining data and performs the pretraining.&lt;br /&gt;
*2. Tried to use BERT for error correction of a text sentence.&lt;br /&gt;
*3. Re-labeled some ASR data&lt;br /&gt;
*4. Tested the vivi2.0 model&lt;br /&gt;
|| &lt;br /&gt;
*Construct a text sentence error correction model&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*Did experiments comparing test time and updated the results on cvss&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-23</id>
		<title>2019-01-23</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-23"/>
				<updated>2019-01-23T04:13:11Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Designed a better code structure for further experiments.&lt;br /&gt;
* Improved vivi2.0 and made some adjustments to the .sh script.&lt;br /&gt;
|| &lt;br /&gt;
* Build code under the new structure.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* Did experiments on node_sparseness and updated the results on cvss&lt;br /&gt;
* Re-labeled some data&lt;br /&gt;
||&lt;br /&gt;
* Continue the pruning experiments&lt;br /&gt;
* Get familiar with PyTorch &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Wrote a TensorFlow learning document (not yet complete).&lt;br /&gt;
* Read some papers about attention and found some attention code on GitHub.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* Try to run the attention code and figure out how it works.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*1. Figured out how the BERT model creates the pretraining data and performs the pretraining.&lt;br /&gt;
*2. Tried to use BERT for error correction of a text sentence.&lt;br /&gt;
*3. Re-labeled some ASR data&lt;br /&gt;
*4. Tested the vivi2.0 model&lt;br /&gt;
|| &lt;br /&gt;
*Construct a text sentence error correction model&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*Did experiments comparing test time and updated the results on cvss&lt;br /&gt;
&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow%E5%AD%A6%E4%B9%A0%E6%96%87%E6%A1%A3-ing........pdf</id>
		<title>文件:Tensorflow学习文档-ing........pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Tensorflow%E5%AD%A6%E4%B9%A0%E6%96%87%E6%A1%A3-ing........pdf"/>
				<updated>2019-01-23T02:36:30Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Speech_book</id>
		<title>Speech book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Speech_book"/>
				<updated>2019-01-22T12:25:43Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：/* 情绪识别（嘉威） */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''《语音识别基本法》'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113a speech book.pdf | Version 20190113a]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113 speech book.pdf | Version 20190113]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181226 speech book.pdf | Version 20181226]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181225 speech book.pdf | Version 20181225]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181224 speech book.pdf | Version 20181224]]&lt;br /&gt;
&lt;br /&gt;
*Tex on [https://gitlab.com/tzyll/speech_book GitLab]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=语音识别基础 (阿汤)=&lt;br /&gt;
==语音是什么==&lt;br /&gt;
==语音识别方法==&lt;br /&gt;
==语音识别工具==&lt;br /&gt;
&lt;br /&gt;
=语音识别基本流程（阿汤）=&lt;br /&gt;
==实验先行==&lt;br /&gt;
==前端处理==&lt;br /&gt;
==训练与解码==&lt;br /&gt;
&lt;br /&gt;
=语音识别实际问题=&lt;br /&gt;
==说话人自适应（启明）==&lt;br /&gt;
[[媒体文件:Spk.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/spk-adapt.rar latex]&lt;br /&gt;
&lt;br /&gt;
==噪声对抗与环境鲁棒性（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Noise.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/noise-robust.rar latex]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==新词处理与领域泛化（文强）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fd/Domain.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3d/Domain_adaptation.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==小语种识别（石颖）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f4/Minority_20190109.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/02/Minority_asr.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==关键词唤醒与嵌入式系统（嘉瑶）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/70/Kws.rar Latex]&lt;br /&gt;
&lt;br /&gt;
=前沿课题=&lt;br /&gt;
==说话人识别（蓝天）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/Spk.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==语种识别（诏娣）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Lid.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9e/LID-speekbook.rar latex]&lt;br /&gt;
&lt;br /&gt;
==情绪识别（嘉威）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/28/语音情绪识别.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/78/Ser.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==语音合成（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Tts.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/tts.rar latex]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Ser.rar</id>
		<title>文件:Ser.rar</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Ser.rar"/>
				<updated>2019-01-22T12:24:18Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：Yujiawei uploaded a new version of “文件:Ser.rar”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Speech_book</id>
		<title>Speech book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Speech_book"/>
				<updated>2019-01-22T12:23:02Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：/* 情绪识别（嘉威） */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''《语音识别基本法》'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113a speech book.pdf | Version 20190113a]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113 speech book.pdf | Version 20190113]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181226 speech book.pdf | Version 20181226]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181225 speech book.pdf | Version 20181225]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181224 speech book.pdf | Version 20181224]]&lt;br /&gt;
&lt;br /&gt;
*Tex on [https://gitlab.com/tzyll/speech_book GitLab]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=语音识别基础 (阿汤)=&lt;br /&gt;
==语音是什么==&lt;br /&gt;
==语音识别方法==&lt;br /&gt;
==语音识别工具==&lt;br /&gt;
&lt;br /&gt;
=语音识别基本流程（阿汤）=&lt;br /&gt;
==实验先行==&lt;br /&gt;
==前端处理==&lt;br /&gt;
==训练与解码==&lt;br /&gt;
&lt;br /&gt;
=语音识别实际问题=&lt;br /&gt;
==说话人自适应（启明）==&lt;br /&gt;
[[媒体文件:Spk.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/spk-adapt.rar latex]&lt;br /&gt;
&lt;br /&gt;
==噪声对抗与环境鲁棒性（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Noise.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/noise-robust.rar latex]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==新词处理与领域泛化（文强）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fd/Domain.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3d/Domain_adaptation.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==小语种识别（石颖）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f4/Minority_20190109.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/02/Minority_asr.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==关键词唤醒与嵌入式系统（嘉瑶）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/70/Kws.rar Latex]&lt;br /&gt;
&lt;br /&gt;
=前沿课题=&lt;br /&gt;
==说话人识别（蓝天）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/Spk.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==语种识别（诏娣）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Lid.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9e/LID-speekbook.rar latex]&lt;br /&gt;
&lt;br /&gt;
==情绪识别（嘉威）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/28/语音情绪识别.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Ser.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==语音合成（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Tts.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/tts.rar latex]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Speech_book</id>
		<title>Speech book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Speech_book"/>
				<updated>2019-01-22T12:22:19Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：/* 情绪识别（嘉威） */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''《语音识别基本法》'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113a speech book.pdf | Version 20190113a]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113 speech book.pdf | Version 20190113]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181226 speech book.pdf | Version 20181226]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181225 speech book.pdf | Version 20181225]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181224 speech book.pdf | Version 20181224]]&lt;br /&gt;
&lt;br /&gt;
*Tex on [https://gitlab.com/tzyll/speech_book GitLab]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=语音识别基础 (阿汤)=&lt;br /&gt;
==语音是什么==&lt;br /&gt;
==语音识别方法==&lt;br /&gt;
==语音识别工具==&lt;br /&gt;
&lt;br /&gt;
=语音识别基本流程（阿汤）=&lt;br /&gt;
==实验先行==&lt;br /&gt;
==前端处理==&lt;br /&gt;
==训练与解码==&lt;br /&gt;
&lt;br /&gt;
=语音识别实际问题=&lt;br /&gt;
==说话人自适应（启明）==&lt;br /&gt;
[[媒体文件:Spk.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/spk-adapt.rar latex]&lt;br /&gt;
&lt;br /&gt;
==噪声对抗与环境鲁棒性（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Noise.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/noise-robust.rar latex]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==新词处理与领域泛化（文强）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/fd/Domain.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/3/3d/Domain_adaptation.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==小语种识别（石颖）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f4/Minority_20190109.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/02/Minority_asr.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==关键词唤醒与嵌入式系统（嘉瑶）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/70/Kws.rar Latex]&lt;br /&gt;
&lt;br /&gt;
=前沿课题=&lt;br /&gt;
==说话人识别（蓝天）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/Spk.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==语种识别（诏娣）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Lid.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/9e/LID-speekbook.rar latex]&lt;br /&gt;
&lt;br /&gt;
==情绪识别（嘉威）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/28/语音情绪识别.pdf pdf]&lt;br /&gt;
http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Ser.rar&lt;br /&gt;
&lt;br /&gt;
==语音合成（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Tts.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/tts.rar latex]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Ser.rar</id>
		<title>文件:Ser.rar</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Ser.rar"/>
				<updated>2019-01-22T12:21:09Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%AD%E9%9F%B3%E6%83%85%E7%BB%AA%E8%AF%86%E5%88%AB.pdf</id>
		<title>文件:语音情绪识别.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%AD%E9%9F%B3%E6%83%85%E7%BB%AA%E8%AF%86%E5%88%AB.pdf"/>
				<updated>2019-01-21T05:33:02Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：Yujiawei uploaded a new version of “文件:语音情绪识别.pdf”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-16</id>
		<title>2019-01-16</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-16"/>
				<updated>2019-01-16T04:27:56Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Focused back on quatrain generation, thinking about the drawbacks of the current model.&lt;br /&gt;
* Tried to weaken the attention mechanism between sentences.&lt;br /&gt;
|| &lt;br /&gt;
* Add a VAE into the model and try generating instead of predicting.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
*Continued the sparse-node experiments using the mask method&lt;br /&gt;
||&lt;br /&gt;
*Run through the first experiment&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Finished the speech emotion recognition chapter of the speech book.&lt;br /&gt;
* Worked through the TensorFlow tutorial.&lt;br /&gt;
|| &lt;br /&gt;
* Keep learning TensorFlow and do some experiments. &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*1. Ran through the BERT model. 2. Studied the details of the BERT model.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*Did some comparative experiments on TT-decomposition.&lt;br /&gt;
|| &lt;br /&gt;
*Summarize the results of the comparative experiments and initially complete the relevant research on TT-decomposition.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
||&lt;br /&gt;
* Exam week.&lt;br /&gt;
|| &lt;br /&gt;
* (I will finish all my exams on Friday night)&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%AD%E9%9F%B3%E6%83%85%E7%BB%AA%E8%AF%86%E5%88%AB.pdf</id>
		<title>文件:语音情绪识别.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%AD%E9%9F%B3%E6%83%85%E7%BB%AA%E8%AF%86%E5%88%AB.pdf"/>
				<updated>2019-01-13T15:55:24Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：Yujiawei uploaded a new version of “文件:语音情绪识别.pdf”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Speech_book</id>
		<title>Speech book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Speech_book"/>
				<updated>2019-01-13T15:50:55Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：/* 情绪识别（嘉威） */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''《语音识别基本法》'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113a speech book.pdf | Version 20190113a]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113 speech book.pdf | Version 20190113]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181226 speech book.pdf | Version 20181226]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181225 speech book.pdf | Version 20181225]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181224 speech book.pdf | Version 20181224]]&lt;br /&gt;
&lt;br /&gt;
*Tex on [https://gitlab.com/tzyll/speech_book GitLab]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=语音识别基础 (阿汤)=&lt;br /&gt;
==语音是什么==&lt;br /&gt;
==语音识别方法==&lt;br /&gt;
==语音识别工具==&lt;br /&gt;
&lt;br /&gt;
=语音识别基本流程（阿汤）=&lt;br /&gt;
==实验先行==&lt;br /&gt;
==前端处理==&lt;br /&gt;
==训练与解码==&lt;br /&gt;
&lt;br /&gt;
=语音识别实际问题=&lt;br /&gt;
==说话人自适应（启明）==&lt;br /&gt;
[[媒体文件:Spk.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/spk-adapt.rar latex]&lt;br /&gt;
&lt;br /&gt;
==噪声对抗与环境鲁棒性（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Noise.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/noise-robust.rar latex]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==新词处理与领域泛化（文强）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/ef/New_word_du.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==小语种识别（石颖）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f4/Minority_20190109.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/02/Minority_asr.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==关键词唤醒与嵌入式系统（嘉瑶）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/70/Kws.rar Latex]&lt;br /&gt;
&lt;br /&gt;
=前沿课题=&lt;br /&gt;
==说话人识别（蓝天）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/Sre.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==语种识别（诏娣）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Lid.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==情绪识别（嘉威）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/28/语音情绪识别.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==语音合成（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Tts.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/tts.rar latex]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/Speech_book</id>
		<title>Speech book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/Speech_book"/>
				<updated>2019-01-13T15:49:51Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：/* 情绪识别（嘉威） */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''《语音识别基本法》'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113a speech book.pdf | Version 20190113a]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20190113 speech book.pdf | Version 20190113]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181226 speech book.pdf | Version 20181226]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181225 speech book.pdf | Version 20181225]]&lt;br /&gt;
&lt;br /&gt;
*[[媒体文件:20181224 speech book.pdf | Version 20181224]]&lt;br /&gt;
&lt;br /&gt;
*Tex on [https://gitlab.com/tzyll/speech_book GitLab]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=语音识别基础 (阿汤)=&lt;br /&gt;
==语音是什么==&lt;br /&gt;
==语音识别方法==&lt;br /&gt;
==语音识别工具==&lt;br /&gt;
&lt;br /&gt;
=语音识别基本流程（阿汤）=&lt;br /&gt;
==实验先行==&lt;br /&gt;
==前端处理==&lt;br /&gt;
==训练与解码==&lt;br /&gt;
&lt;br /&gt;
=语音识别实际问题=&lt;br /&gt;
==说话人自适应（启明）==&lt;br /&gt;
[[媒体文件:Spk.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/spk-adapt.rar latex]&lt;br /&gt;
&lt;br /&gt;
==噪声对抗与环境鲁棒性（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Noise.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/noise-robust.rar latex]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==新词处理与领域泛化（文强）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/ef/New_word_du.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==小语种识别（石颖）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f4/Minority_20190109.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/02/Minority_asr.rar LaTex]&lt;br /&gt;
&lt;br /&gt;
==关键词唤醒与嵌入式系统（嘉瑶）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/7/70/Kws.rar Latex]&lt;br /&gt;
&lt;br /&gt;
=前沿课题=&lt;br /&gt;
==说话人识别（蓝天）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/ca/Sre.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==语种识别（诏娣）==&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/%E6%96%87%E4%BB%B6:Lid.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
==情绪识别（嘉威）==&lt;br /&gt;
http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/28/语音情绪识别.pdf&lt;br /&gt;
&lt;br /&gt;
==语音合成（启明）==&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Tts.pdf|pdf]]&lt;br /&gt;
&lt;br /&gt;
[http://wangd.cslt.org/book/kaldi/tts.rar latex]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%AD%E9%9F%B3%E6%83%85%E7%BB%AA%E8%AF%86%E5%88%AB.pdf</id>
		<title>文件:语音情绪识别.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%AD%E9%9F%B3%E6%83%85%E7%BB%AA%E8%AF%86%E5%88%AB.pdf"/>
				<updated>2019-01-13T15:47:55Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.jpg</id>
		<title>文件:Phonetic attention.jpg</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.jpg"/>
				<updated>2019-01-10T07:49:15Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：Yujiawei uploaded a new version of “文件:Phonetic attention.jpg”&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.jpg</id>
		<title>文件:Phonetic attention.jpg</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Phonetic_attention.jpg"/>
				<updated>2019-01-10T07:28:22Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-09</id>
		<title>2019-01-09</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-09"/>
				<updated>2019-01-09T02:57:25Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
*  &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* Finished the speech book chapter -- [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf kws]&lt;br /&gt;
* Did node-pruning experiments on the WSJ chain model&lt;br /&gt;
||&lt;br /&gt;
* Continue the node-pruning research &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Finished the max-margin recipe.&lt;br /&gt;
* Wrote the speech book chapter.&lt;br /&gt;
* Learned to use TensorFlow for the attention experiments.&lt;br /&gt;
||&lt;br /&gt;
* Finish the emotion recognition chapter of the speech book.&lt;br /&gt;
* Keep learning TensorFlow and move the max-margin experiment to this platform.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
* Compared the inference time, but the results were not good.&lt;br /&gt;
*After TT-decomposing the two fully connected layers, found that the test accuracy is very low.&lt;br /&gt;
||&lt;br /&gt;
*Based on the problems found in the experiments, continue the comparative experiments and analyze the causes.&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Wrote a brief [https://github.com/zyzisyz/VPR-wx-client document] &lt;br /&gt;
* Submitted the source code to [https://gitlab.com/zyzisyz/nebula-listen GitLab].&lt;br /&gt;
||&lt;br /&gt;
* Revise my school subjects and prepare for the final exams. &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-02</id>
		<title>2019-01-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-02"/>
				<updated>2019-01-03T04:47:06Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Made further adjustments to the code, deleting unnecessary files and uploading generated samples to the 'predict/results' directory.&lt;br /&gt;
* Trained and compared more models.&lt;br /&gt;
* Collated notes on the ML book and uploaded them to the wiki.&lt;br /&gt;
||&lt;br /&gt;
* Try to train a model that generates variable-length texts.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* Read some papers about KWS; will write the chapter before this weekend.&lt;br /&gt;
* Sorted out the TDNN-F structure, counted the weight rows to prune, and started with value pruning.&lt;br /&gt;
||&lt;br /&gt;
* Finish the assigned chapter of the speech book&lt;br /&gt;
* Finish the value-pruning experiment&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Read some papers about DID and related topics&lt;br /&gt;
* Ran the language recognition task&lt;br /&gt;
||&lt;br /&gt;
* Continue the language recognition task&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Did a literature survey on spoken emotion recognition for the Speech book.&lt;br /&gt;
* Finished extracting d-vectors and phone vectors for max-margin training.&lt;br /&gt;
||&lt;br /&gt;
* Run the max-margin recipe on the extracted vectors.&lt;br /&gt;
* Consider how to implement the network structure.&lt;br /&gt;
* Finish the spoken-emotion-recognition chapter of the Speech book.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*Ran through the THCHS-30 data and thought about the research plan.&lt;br /&gt;
||&lt;br /&gt;
*Get to know every step of the ASR process, and investigate the RNN LM.&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*Experimentally compared the size of a fully connected layer before and after TT decomposition, along with the training loss, validation loss, and test accuracy of the two cases.&lt;br /&gt;
||&lt;br /&gt;
*Further compare the inference speed and the accuracy after retraining.&lt;br /&gt;
*Try to decompose more of the fully connected layers.&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Released and published our WeChat app (星云听).&lt;br /&gt;
* Cleaned up my code.&lt;br /&gt;
* Wrote documentation for this project (not finished yet).&lt;br /&gt;
||&lt;br /&gt;
* Continue writing the documentation.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-02</id>
		<title>2019-01-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-02"/>
				<updated>2019-01-03T04:45:49Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Made further adjustments to the code, deleting unnecessary files and uploading generated samples to the dir 'predict/results'.&lt;br /&gt;
* Trained and compared more models.&lt;br /&gt;
* Collated notes on the ML book and uploaded them to the wiki.&lt;br /&gt;
||&lt;br /&gt;
* Try to train a model that generates variable-length texts.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* Read some papers about KWS; will write the chapter before this weekend.&lt;br /&gt;
* Worked through the TDNN-F structure, counted the rows of weights to prune, and started with value-based pruning.&lt;br /&gt;
||&lt;br /&gt;
* Finish the assigned chapter of the Speech book.&lt;br /&gt;
* Finish the value-pruning experiment.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Read some papers about DID and related topics.&lt;br /&gt;
* Ran the language recognition task.&lt;br /&gt;
||&lt;br /&gt;
* Continue working on the language recognition task.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Did a literature survey on spoken emotion recognition for the Speech book.&lt;br /&gt;
* Finished extracting d-vectors and phone vectors for max-margin training.&lt;br /&gt;
||&lt;br /&gt;
* Run the max-margin recipe on the extracted vectors.&lt;br /&gt;
* Consider how to implement the network structure.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*Ran through the THCHS-30 data and thought about the research plan.&lt;br /&gt;
||&lt;br /&gt;
*Get to know every step of the ASR process, and investigate the RNN LM.&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*Experimentally compared the size of a fully connected layer before and after TT decomposition, along with the training loss, validation loss, and test accuracy of the two cases.&lt;br /&gt;
||&lt;br /&gt;
*Further compare the inference speed and the accuracy after retraining.&lt;br /&gt;
*Try to decompose more of the fully connected layers.&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Released and published our WeChat app (星云听).&lt;br /&gt;
* Cleaned up my code.&lt;br /&gt;
* Wrote documentation for this project (not finished yet).&lt;br /&gt;
||&lt;br /&gt;
* Continue writing the documentation.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/2019-01-02</id>
		<title>2019-01-02</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/2019-01-02"/>
				<updated>2019-01-03T04:45:19Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;DeadLine&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yibo Liu&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiuqi Jiang&lt;br /&gt;
|| &lt;br /&gt;
* Made further adjustments to the code, deleting unnecessary files and uploading generated samples to the dir 'predict/results'.&lt;br /&gt;
* Trained and compared more models.&lt;br /&gt;
* Collated notes on the ML book and uploaded them to the wiki.&lt;br /&gt;
||&lt;br /&gt;
* Try to train a model that generates variable-length texts.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiayao Wu&lt;br /&gt;
|| &lt;br /&gt;
* Read some papers about KWS; will write the chapter before this weekend.&lt;br /&gt;
* Worked through the TDNN-F structure, counted the rows of weights to prune, and started with value-based pruning.&lt;br /&gt;
||&lt;br /&gt;
* Finish the assigned chapter of the Speech book.&lt;br /&gt;
* Finish the value-pruning experiment.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhaodi Qi&lt;br /&gt;
|| &lt;br /&gt;
* Read some papers about DID and related topics.&lt;br /&gt;
* Ran the language recognition task.&lt;br /&gt;
||&lt;br /&gt;
* Continue working on the language recognition task.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Jiawei Yu&lt;br /&gt;
|| &lt;br /&gt;
* Did a literature survey on spoken emotion recognition for the Speech book.&lt;br /&gt;
* Finished extracting d-vectors and phone vectors for max-margin training.&lt;br /&gt;
||&lt;br /&gt;
* Run the max-margin recipe on the extracted vectors.&lt;br /&gt;
* Consider how to implement the network structure.&lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yunqi Cai&lt;br /&gt;
|| &lt;br /&gt;
*Ran through the THCHS-30 data and thought about the research plan.&lt;br /&gt;
||&lt;br /&gt;
*Get to know every step of the ASR process, and investigate the RNN LM.&lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Dan He&lt;br /&gt;
|| &lt;br /&gt;
*Experimentally compared the size of a fully connected layer before and after TT decomposition, along with the training loss, validation loss, and test accuracy of the two cases.&lt;br /&gt;
||&lt;br /&gt;
*Further compare the inference speed and the accuracy after retraining.&lt;br /&gt;
*Try to decompose more of the fully connected layers.&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yang Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Released and published our WeChat app (星云听).&lt;br /&gt;
* Cleaned up my code.&lt;br /&gt;
* Wrote documentation for this project (not finished yet).&lt;br /&gt;
||&lt;br /&gt;
* Continue writing the documentation.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%BB%E4%B9%A6%E7%AC%94%E8%AE%B0%E2%80%94%E2%80%94%E4%BA%8E%E5%98%89%E5%A8%81.pdf</id>
		<title>文件:读书笔记——于嘉威.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E8%AF%BB%E4%B9%A6%E7%AC%94%E8%AE%B0%E2%80%94%E2%80%94%E4%BA%8E%E5%98%89%E5%A8%81.pdf"/>
				<updated>2019-01-01T12:08:49Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：Yujiawei上传“文件:读书笔记——于嘉威.pdf”的新版本&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Chapter6ppt_yujiawei.pdf</id>
		<title>文件:Chapter6ppt yujiawei.pdf</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Chapter6ppt_yujiawei.pdf"/>
				<updated>2019-01-01T12:01:26Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/ML_book</id>
		<title>ML book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/ML_book"/>
				<updated>2019-01-01T11:59:33Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Introduction to Modern Machine Learning Technology, by Dong Wang, [http://mlbook.cslt.org Official website].&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Reading notes of interns&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a6/CSLTBOOK读书笔记.pdf   Jiayao Wu (吴嘉瑶)]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/ec/读书笔记——于嘉威.pdf  Jiawei Yu (于嘉威)]&lt;br /&gt;
*[[媒体文件:OUTRAGEOUSLYLARGENEURALNETWORKSTHESPARSELY-GATEDMIXTURE-OF-EXPERTSLAYER.pdf  | San Zhang (张三)]]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/ML_book</id>
		<title>ML book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/ML_book"/>
				<updated>2019-01-01T11:58:54Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Introduction to Modern Machine Learning Technology, by Dong Wang, [http://mlbook.cslt.org Official website].&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Reading notes of interns&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a6/CSLTBOOK读书笔记.pdf   Jiayao Wu (吴嘉瑶)]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/文件:读书笔记——于嘉威.pdf  Jiawei Yu (于嘉威)]&lt;br /&gt;
*[[媒体文件:OUTRAGEOUSLYLARGENEURALNETWORKSTHESPARSELY-GATEDMIXTURE-OF-EXPERTSLAYER.pdf  | San Zhang (张三)]]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	<entry>
		<id>http://index.cslt.org/mediawiki/index.php/ML_book</id>
		<title>ML book</title>
		<link rel="alternate" type="text/html" href="http://index.cslt.org/mediawiki/index.php/ML_book"/>
				<updated>2019-01-01T11:57:57Z</updated>
		
		<summary type="html">&lt;p&gt;Yujiawei：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Introduction to Modern Machine Learning Technology, by Dong Wang, [http://mlbook.cslt.org Official website].&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Reading notes of interns&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a6/CSLTBOOK读书笔记.pdf   Jiayao Wu (吴嘉瑶)]&lt;br /&gt;
*[http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/文件:读书笔记——于嘉威.pdf | Jiawei Yu（于嘉威）]&lt;br /&gt;
*[[媒体文件:OUTRAGEOUSLYLARGENEURALNETWORKSTHESPARSELY-GATEDMIXTURE-OF-EXPERTSLAYER.pdf  | San Zhang (张三)]]&lt;/div&gt;</summary>
		<author><name>Yujiawei</name></author>	</entry>

	</feed>