Difference between revisions of "Schedule"
From cslt Wiki
Revision as of 01:55, 22 April 2016 (Fri)
Contents
- 1 Text Processing Team Schedule
- 1.1 Members
- 1.2 Work Process
- 1.2.1 Reproduce DSSM Baseline (Chao Xing)
- 1.2.2 Deep Poem Processing With Image (Ziwei Bai)
- 1.2.3 RNN Music Processing for lyric (Shiyao Li)
- 1.2.4 RNN Key word Poem Processing (Yi Xiong)
- 1.2.5 RNN Piano Processing (Jiyuan Zhang)
- 1.2.6 Recommendation System (Tong Liu)
- 1.2.7 Question & Answering (Aiting Liu)
Text Processing Team Schedule
Members
Former Members
- Rong Liu (刘荣) : Youku
- Xiaoxi Wang (王晓曦) : Turing Robot
- Xi Ma (马习) : graduate student, Tsinghua University
- DongXu Zhang (张东旭) : --
Current Members
- Tianyi Luo (骆天一)
- Chao Xing (邢超)
- Qixin Wang (王琪鑫)
- Yiqiao Pan (潘一桥)
Work Process
Reproduce DSSM Baseline (Chao Xing)
- 2016-04-20 : Found a bug in the reproduced DSSM model and fixed it.
- 2016-04-19 : Finished coding the mixture-data model with lower memory usage; tested its performance.
- 2016-04-18 : Coded the mixture-data model.
- 2016-04-16 : Coded the mixture-data model but ran into a memory error; Dr. Wang helped me fix it.
- 2016-04-15 : Shared papers. Investigated a series of DSSM papers for future work, and showed our intern students how to do research.
: Original DSSM model : Learning Deep Structured Semantic Models for Web Search using Clickthrough Data (pdf)
: CNN-based DSSM model : A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval (pdf)
: DSSM applied to a new area : Modeling Interestingness with Deep Neural Networks (pdf)
: Latest LSTM-RNN DSSM approach : Semantic Modelling with Long-Short-Term Memory for Information Retrieval (pdf)
- 2016-04-14 : Tested the DSSM-DNN model; coded the DSSM-CNN model. Continued investigating deep neural question answering systems.
- 2016-04-13 : Tested the DSSM model; investigated deep neural question answering systems.
: Shared Theano slides (theano)
: Shared TensorFlow slides (tensorflow)
- 2016-04-12 : Finished writing the TensorFlow version of DSSM.
- 2016-04-11 : Wrote TensorFlow toolkit slides for the intern students.
- 2016-04-10 : Learned the TensorFlow toolkit.
- 2016-04-09 : Learned the TensorFlow toolkit.
- 2016-04-08 : Finished the Theano version.
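The model being reproduced above scores a document against a query by cosine similarity between their learned semantic vectors, softmax-normalized with a smoothing factor gamma (as in the original DSSM paper listed above). A minimal numpy sketch of just this scoring layer (the word hashing and DNN layers that produce the vectors are omitted, and `gamma=10.0` is an illustrative setting, not necessarily the one used in this reproduction):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two semantic vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def dssm_scores(query_vec, doc_vecs, gamma=10.0):
    """Posterior P(d|q) = softmax over gamma-smoothed cosine similarities,
    the relevance score used on top of the DSSM semantic vectors."""
    sims = np.array([cosine(query_vec, d) for d in doc_vecs])
    e = np.exp(gamma * (sims - sims.max()))  # subtract max for numerical stability
    return e / e.sum()
```

During training the first document is the clicked (positive) one and the rest are sampled negatives, and the loss is the negative log of the positive document's probability.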
Deep Poem Processing With Image (Ziwei Bai)
- 2016-04-20 : Combined my program with Qixin Wang's.
- 2016-04-10 : Wrote a web spider to crawl a thousand images.
- 2016-04-13 : 1. Downloaded Theano for Python 2.7. 2. Debugged cnn.py.
- 2016-04-15 : Extended the web spider to crawl 30 thousand images and store them in a matrix.
- 2016-04-16 : Modified the CNN and spider code.
- 2016-04-17 : Trained the convolutional neural network.
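The image spiders logged above reduce to fetching pages and collecting `<img>` links to download. A standard-library sketch of the parsing half (the actual site URLs, request loop, and download code are not recorded in this log and are omitted):

```python
from html.parser import HTMLParser

class ImageLinkParser(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.image_urls.append(value)

def extract_image_urls(html):
    """Return all image URLs found in one page of HTML."""
    parser = ImageLinkParser()
    parser.feed(html)
    return parser.image_urls
```

The spider would call this on each fetched page, resolve relative URLs against the page URL, and download each image.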
RNN Music Processing for lyric (Shiyao Li)
- 2016-04-20 : Learned about LSTMs.
- 2016-04-09 : Wrote a web spider to crawl a thousand lyrics.
- 2016-04-10 : Extracted keywords from the lyrics.
- 2016-04-13 : Read the Memory Networks paper.
- 2016-04-15 : Read the Memory Networks paper and started studying its code.
- 2016-04-17 : Read the End-to-End Memory Networks paper.
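The keyword-extraction step logged above (2016-04-10) can be as simple as frequency counting over non-stopwords. A toy sketch, assuming English lyrics and a made-up stopword list (the actual extractor and word list are not recorded in this log):

```python
import re
from collections import Counter

# toy stopword list; a real extractor would use a fuller list
STOPWORDS = {"the", "a", "and", "i", "you", "to", "of", "in", "on", "my"}

def extract_keywords(lyric, top_k=3):
    """Rank words by raw frequency after dropping stopwords."""
    words = re.findall(r"[a-z']+", lyric.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_k)]
```

A TF-IDF weighting over the whole lyric collection would be the natural next refinement, down-weighting words that appear in every song.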
RNN Key word Poem Processing (Yi Xiong)
- 2016-04-20 : Learned how to write a web spider.
- 2016-04-09 : Set up a database for storing N-gram data.
- 2016-04-10 : Stored the dictionary in the database; implemented dictionary-based segmentation and a simple bigram segmentation.
- 2016-04-13 : Analyzed the segmentation results.
- 2016-04-15 : Improved the simple bigram segmentation.
- 2016-04-16 : Compared the bigram segmentation results with dictionary-based segmentation.
- 2016-04-17 : Learned Python (Head First Python, 50% done).
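The dictionary-plus-bigram segmentation logged above can be sketched as a dynamic program over cut positions, scoring each candidate word sequence by smoothed bigram probabilities. The dictionary and counts below are toy stand-ins for the database tables mentioned in the log:

```python
import math

# toy unigram and bigram counts (the real system reads these from the database)
UNIGRAMS = {"研究": 5, "生命": 4, "研究生": 3, "命": 2, "生": 1}
BIGRAMS = {("研究", "生命"): 3, ("研究生", "命"): 1}

def bigram_prob(prev, word):
    # MLE bigram probability with add-one smoothing over the toy vocabulary
    v = len(UNIGRAMS)
    return (BIGRAMS.get((prev, word), 0) + 1) / (UNIGRAMS.get(prev, 0) + v)

def segment(text, max_len=3):
    """best[i] = (log-prob, best segmentation of text[:i]); extend by any
    dictionary word ending at position i."""
    best = {0: (0.0, [])}
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_len), i):
            word = text[j:i]
            if j in best and word in UNIGRAMS:
                prev = best[j][1][-1] if best[j][1] else "<s>"
                score = best[j][0] + math.log(bigram_prob(prev, word))
                if i not in best or score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best.get(len(text), (0.0, []))[1]
```

On the classic ambiguous string 研究生命, the bigram score prefers 研究|生命 over 研究生|命, which plain greedy longest-match dictionary segmentation would get wrong.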
RNN Piano Processing (Jiyuan Zhang)
- 2016-04-12 : Selected appropriate MIDI files and ran the RNN-RBM model.
- 2016-04-13 : Read through the RNN-RBM model's code.
Recommendation System (Tong Liu)
- 2016-04-09 : 1. Read a review: Machine Learning: Trends, Perspectives, and Prospects. 2. Learned Python; can now work with dicts and sets.
- 2016-04-12 : 1. Read the paper Collaborative Deep Learning for Recommender Systems and took notes. 2. Learned the concept of the stacked denoising autoencoder (SDAE).
- 2016-04-17 : 1. Set up PuTTY and Xming. 2. Learned Python; can now work with slices and iterators. 3. Studied the code release and datasets of the paper Collaborative Deep Learning for Recommender Systems.
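The SDAE building block studied above corrupts its input and is trained to reconstruct the clean version; layers are then stacked. A toy numpy sketch of one tied-weight denoising-autoencoder layer (hyperparameters are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dae(X, n_hidden=8, noise=0.3, lr=0.5, epochs=200):
    """Train one denoising-autoencoder layer: zero out a `noise` fraction of
    each input, encode, decode with tied weights, and reconstruct the CLEAN X."""
    n_vis = X.shape[1]
    W = rng.normal(0, 0.1, (n_vis, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_vis)
    for _ in range(epochs):
        mask = rng.random(X.shape) > noise     # masking-noise corruption
        X_tilde = X * mask
        H = sigmoid(X_tilde @ W + b_h)         # encode corrupted input
        Z = sigmoid(H @ W.T + b_v)             # decode with tied weights
        err = Z - X                            # compare against the clean input
        # backprop of squared-error loss through the tied-weight autoencoder
        dZ = err * Z * (1 - Z)
        dH = (dZ @ W) * H * (1 - H)
        gW = X_tilde.T @ dH + dZ.T @ H         # encoder + decoder contributions
        W -= lr * gW / len(X)
        b_h -= lr * dH.sum(0) / len(X)
        b_v -= lr * dZ.sum(0) / len(X)
    return W, b_h, b_v

def reconstruction_error(X, W, b_h, b_v):
    Z = sigmoid(sigmoid(X @ W + b_h) @ W.T + b_v)
    return float(((Z - X) ** 2).mean())
```

Stacking means feeding each layer's hidden codes as the next layer's input; Collaborative Deep Learning then couples the middle code with the matrix-factorization item vectors.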
Question & Answering (Aiting Liu)
- 2016-04-20 : Read Fader's (2013) paper.
- 2016-04-15 : Learned about DSSM and sent2vec.
- 2016-04-16 : Tried to figure out how the PARALEX dataset is constructed.
- 2016-04-17 : Downloaded the PARALEX dataset and converted it into the format we need.