Tianyi Luo 2016-05-09
Latest revision as of 10:46, 9 May 2016 (Monday)

Plan to do this week

  • To implement a TensorFlow version of RNN/LSTM max-margin vector training.
  • To implement an attention chatting model with the Xiaobing corpus.

Work done in this week


2016-05-02~05-04
  • Labor Day holiday.

2016-05-05
  • Finished preprocessing the Xiaobing corpus.
  • Implemented part of the code for the qqa max-margin Theano version (anchor sample is q1; positive sample is a1; negative sample is q2).
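The qqa triplet scheme above (anchor q1, positive a1, negative q2) is commonly trained with a hinge loss that pushes the anchor closer to the positive than to the negative by some margin. Below is a minimal NumPy sketch of that idea using cosine similarity; this is an illustration only, not the author's Theano code, and all names (`max_margin_loss`, the margin value, the toy vectors) are assumptions.

```python
import numpy as np

def max_margin_loss(q1, a1, q2, margin=1.0):
    """Hinge loss for one (anchor, positive, negative) triplet.

    The anchor q1 should be more similar to its positive sample a1
    than to the negative sample q2, by at least `margin`.
    Inputs are fixed-size embedding vectors.
    """
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    # Loss is zero once the positive outscores the negative by the margin.
    return max(0.0, margin - cos(q1, a1) + cos(q1, q2))

# Toy embeddings: a1 points almost the same way as q1, q2 is orthogonal.
q1 = np.array([1.0, 0.0])
a1 = np.array([1.0, 0.1])
q2 = np.array([0.0, 1.0])
loss = max_margin_loss(q1, a1, q2)  # small: the triplet is nearly separated
```

In the actual model the embeddings would come from the RNN/LSTM encoder and the loss would be minimized over sampled triplets from the corpus.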

2016-05-06
  • Finished implementing the code for the qqa max-margin Theano version (anchor sample is q1; positive sample is a1; negative sample is q2).

2016-05-07
  • Waiting for the experiment results.
  • Prepared to travel to Silicon Valley.

2016-05-08
  • Arrived in Silicon Valley.

Plan to do next week

  • To implement a TensorFlow version of RNN/LSTM max-margin vector training.
  • To implement an attention chatting model with the Xiaobing corpus.

Papers of interest

  • Cascading Bandits: Learning to Rank in the Cascade Model (ICML 2015) [pdf]