Text Processing Team Schedule

Members

Former Members

  • Rong Liu (刘荣): Youku
  • Xiaoxi Wang (王晓曦): Turing Robot
  • Xi Ma (马习): graduate student at Tsinghua University
  • DongXu Zhang (张东旭): --

Current Members

  • Tianyi Luo (骆天一)
  • Chao Xing (邢超)
  • Qixin Wang (王琪鑫)
  • Yiqiao Pan (潘一桥)
  • Aodong Li (李傲冬)
  • Ziwei Bai (白子薇)
  • Aiting Liu (刘艾婷)

Work Progress

Question Answering System

Chao Xing

2016-05-11:

            1. Prepare for paper sharing.
            2. Finish the CDSSM model in the chatting process (see the sketch below).
            3. Start setting up the model & experiments in the dialogue system.
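
The CDSSM model itself is not described anywhere in this log. As a rough, hedged illustration of a CDSSM-style matcher for chatting (two towers of embedding, convolution, max-pooling and a semantic layer, scored by cosine similarity), here is a minimal tf.keras sketch; every size and layer choice below is an assumption, not the team's actual model.

    import tensorflow as tf

    # Hypothetical sizes; the real CDSSM configuration is not recorded in this log.
    VOCAB, EMB, MAXLEN, FILTERS, SEM = 20000, 128, 30, 300, 128

    def tower():
        """CDSSM-style encoder: embed -> 1-D convolution -> max-pool -> semantic layer."""
        inp = tf.keras.Input(shape=(MAXLEN,), dtype="int32")
        x = tf.keras.layers.Embedding(VOCAB, EMB)(inp)
        x = tf.keras.layers.Conv1D(FILTERS, 3, activation="tanh")(x)
        x = tf.keras.layers.GlobalMaxPooling1D()(x)
        x = tf.keras.layers.Dense(SEM, activation="tanh")(x)
        return tf.keras.Model(inp, x)

    encoder = tower()                                       # shared by both sides
    query = tf.keras.Input(shape=(MAXLEN,), dtype="int32")  # user utterance
    reply = tf.keras.Input(shape=(MAXLEN,), dtype="int32")  # candidate response
    # Cosine similarity between the two semantic vectors is the matching score.
    score = tf.keras.layers.Dot(axes=1, normalize=True)([encoder(query), encoder(reply)])
    model = tf.keras.Model([query, reply], score)

Training would typically contrast each true reply against sampled negative replies (e.g. a softmax over cosine scores), as in the original DSSM/CDSSM papers.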

2016-05-10:

            1. Finish testing the CDSSM model in chatting; find that the original data has some problems.
            2. Read papers:
                   A Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion
                   A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
                   Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
                   Neural Responding Machine for Short-Text Conversation

2016-05-09:

            1. Test the CDSSM model in chatting.
            2. Read papers:
                   Learning from Real Users: Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems
                   SimpleDS: A Simple Deep Reinforcement Learning Dialogue System
            3. Code an RNN by myself in TensorFlow (the recurrence it computes is sketched below).
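
Item 3 only notes that an RNN was coded by hand; for reference, the plain-numpy sketch below spells out the vanilla RNN recurrence h_t = tanh(x_t W_xh + h_{t-1} W_hh + b_h). It is generic textbook code, not the author's TensorFlow implementation.

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
        """One vanilla RNN step: h_t = tanh(x_t W_xh + h_{t-1} W_hh + b_h)."""
        return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

    # Toy dimensions, chosen arbitrarily for the example.
    d_in, d_hid = 4, 8
    rng = np.random.default_rng(0)
    W_xh = 0.1 * rng.normal(size=(d_in, d_hid))
    W_hh = 0.1 * rng.normal(size=(d_hid, d_hid))
    b_h = np.zeros(d_hid)

    h = np.zeros(d_hid)
    for x_t in rng.normal(size=(5, d_in)):   # a length-5 input sequence
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)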

2016-05-08:

            Fix some problems in the dialogue system team, and continue reading papers on dialogue systems.

2016-05-07:

            Read some papers on dialogue systems.

2016-05-06:

            Try to fix the RNN-DSSM model in TensorFlow. Failed.

2016-05-05:

            Code the RNN-DSSM model in TensorFlow. Hit an error when running the RNN-DSSM model on CPU: memory keeps increasing.
            The TensorFlow version on huilan is 0.7.0, installed by pip, which causes an error when creating the GPU graph;
            one possible solution is to build TensorFlow from source.
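
A common cause of steadily growing memory in graph-mode TensorFlow of that era is accidentally adding new ops to the graph inside the training loop. The sketch below shows the usual guard, building the whole graph up front and calling finalize() so any in-loop op creation raises immediately; it is written against the tf.compat.v1 API (TensorFlow 0.7.0 is long unavailable) and is a general debugging pattern, not a record of what actually fixed this issue.

    import numpy as np
    import tensorflow.compat.v1 as tf

    tf.disable_eager_execution()

    graph = tf.Graph()
    with graph.as_default():
        # Build every op once, before the training loop starts.
        x = tf.placeholder(tf.float32, [None, 10])
        w = tf.Variable(tf.zeros([10, 1]))
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
        init = tf.global_variables_initializer()

    graph.finalize()  # any op created inside the loop now raises instead of leaking memory

    with tf.Session(graph=graph) as sess:
        sess.run(init)
        for _ in range(100):
            batch = np.random.randn(32, 10).astype("float32")
            sess.run(train_op, feed_dict={x: batch})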


Aiting Liu

2016-05-08: Fetch data from 'http://news.ifeng.com/' and 'http://www.xinhuanet.com/' (13.4M)

2016-05-07: Fetch data from 'http://fangtan.china.com.cn/' and interview books (10M)
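
The exact pages and extraction rules behind these crawls are not recorded here. The snippet below is only a generic sketch of fetching a page and keeping its visible text with requests and BeautifulSoup; the seed URLs are the site roots mentioned above, and the output file naming is made up for the example.

    import requests
    from bs4 import BeautifulSoup

    # Seed URLs taken from the log entries above; the real crawl paths are not listed.
    urls = ["http://news.ifeng.com/", "http://www.xinhuanet.com/",
            "http://fangtan.china.com.cn/"]

    for url in urls:
        resp = requests.get(url, timeout=10)
        resp.encoding = resp.apparent_encoding          # these sites mix GBK and UTF-8
        soup = BeautifulSoup(resp.text, "html.parser")
        for tag in soup(["script", "style"]):           # drop non-text elements
            tag.decompose()
        text = soup.get_text(separator="\n", strip=True)
        out_name = "raw_" + url.split("//")[1].strip("/").replace("/", "_") + ".txt"
        with open(out_name, "w", encoding="utf-8") as f:
            f.write(text)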

2016-05-04: Establish the overall framework of our chat robot, and continue building the database

Ziwei Bai

2016-05-11:

            1. Read the paper "Movie-DiC: a Movie Dialogue Corpus for Research and Development".
            2. Reconstruct a new film script into our expected format.

2016-05-08: Convert the PDF we found yesterday into txt, and reconstruct the data into our expected format.
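
The log does not say which tool performed the PDF conversion or what the "expected format" is. The sketch below assumes poppler's pdftotext command for the conversion and, purely as a placeholder, keeps one non-empty line per row; the file names and target layout are hypothetical.

    import subprocess

    def pdf_to_clean_txt(pdf_path, txt_path):
        """Convert a script PDF to text with poppler's pdftotext, then drop blank lines."""
        subprocess.run(["pdftotext", "-layout", pdf_path, txt_path], check=True)
        with open(txt_path, encoding="utf-8", errors="ignore") as f:
            lines = [line.strip() for line in f if line.strip()]
        # Hypothetical target layout: one non-empty line per row; the team's real
        # "expected format" for the QA data is not specified in this log.
        with open(txt_path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines))

    pdf_to_clean_txt("film_script.pdf", "film_script.txt")   # placeholder file names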

2016-05-07: Finding 9 drama scripts and 20 film scripts

2016-05-04: Finding and processing the data for the QA system

Generation Model (Aodong Li)

2016-05-10: Complete a sequence-to-sequence LSTM-based model in Theano.

2016-05-09: Try to code the sequence-to-sequence model.

2016-05-08:

   denoise and train word vectors on Lijun Deng's lyrics (110+ pieces)
   decide on using a raw sequence-to-sequence model
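
The Theano implementation itself is not shown in this log. As a reference for the "raw" (attention-free) sequence-to-sequence structure it describes, here is a minimal encoder-decoder sketch, written in tf.keras rather than Theano for brevity; the vocabulary and hidden sizes are placeholders.

    import tensorflow as tf

    # Placeholder sizes; the lyric model's real vocabulary and hidden size are not given.
    num_tokens, latent_dim = 5000, 256

    # Encoder: consume the source sequence and keep only the final LSTM states.
    encoder_inputs = tf.keras.Input(shape=(None, num_tokens))
    _, state_h, state_c = tf.keras.layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

    # Decoder: generate the target sequence conditioned on the encoder states.
    decoder_inputs = tf.keras.Input(shape=(None, num_tokens))
    decoder_seq, _, _ = tf.keras.layers.LSTM(
        latent_dim, return_sequences=True, return_state=True)(
        decoder_inputs, initial_state=[state_h, state_c])
    decoder_outputs = tf.keras.layers.Dense(num_tokens, activation="softmax")(decoder_seq)

    model = tf.keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy")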

2016-05-07:

   study the attention-based model
   learn some details about the poem generation model
   shift my focus to the lyrics generation model

2016-05-06: Read the paper about poem generation and learn about LSTM.

2016-05-05: Check in and get an overview of the generation model.


Past progress

nlp-progress-2016-05

nlp-progress-2016-04