Schedule

From cslt Wiki
Revision as of 01:07, 26 December 2016 by Zhangsy (talk | contribs)


NLP Schedule

Members

Current Members

  • Yang Feng (冯洋)
  • Jiyuan Zhang (张记袁)
  • Aodong Li (李傲冬)
  • Andi Zhang (张安迪)
  • Shiyue Zhang (张诗悦)
  • Li Gu (古丽)
  • Peilun Xiao (肖培伦)

Former Members

  • Chao Xing (邢超)  : FreeNeb
  • Rong Liu (刘荣)  : Youku
  • Xiaoxi Wang (王晓曦) : Turing Robot
  • Xi Ma (马习)  : graduate student at Tsinghua University
  • Tianyi Luo (骆天一) : PhD candidate at University of California, Santa Cruz
  • Qixin Wang (王琪鑫)  : MA candidate at University of California
  • DongXu Zhang (张东旭): --
  • Yiqiao Pan (潘一桥) : MA candidate at the University of Sydney
  • Shiyao Li (李诗瑶) : BUPT
  • Aiting Liu (刘艾婷)  : BUPT

Work Progress

Daily Report

Date Person Start Leave Hours Status
2016/12/1 Andy Zhang 9:30 18:30 8
  • read source code
  • thought about ways to store encoder output
Shiyue Zhang 9:10 19:00 8+
  • draw t-SNE pictures
Guli 9:10 9:50 12+
  • prepare data and run code
  • prepare for Thesis Report
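The t-SNE pictures mentioned in the 2016/12/1 entry can be drawn roughly as follows. This is a minimal sketch assuming scikit-learn and matplotlib are available; the random vectors and labels stand in for the real encoder states, which are not shown in this log.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Random 16-dim vectors stand in for real hidden states (an assumption).
rng = np.random.RandomState(0)
states = rng.randn(60, 16)
labels = rng.randint(0, 3, size=60)  # hypothetical class labels for coloring

# Project to 2-D with t-SNE; perplexity must be smaller than the sample count.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(states)

plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="viridis", s=12)
plt.title("t-SNE of hidden states")
plt.savefig("tsne.png", dpi=120)
print(emb.shape)  # (60, 2)
```

With real data, coloring points by word class or by source sentence is what makes the picture worth discussing with teachers.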
2016/12/2 Andy Zhang 9:30 18:30 8
  • finish the code to output the encoder output, but it has not been tested yet
  • prepared a small data set for testing
Shiyue Zhang
Guli 9:05 18:05 8
  • read code and check data
  • think about how to improve the BLEU score
Peilun Xiao 10:00 12:30 2+
  • learn the basics of the work
2016/12/5 Andy Zhang 9:30 19:00 8+
  • group meeting
  • prepare the same data set as in Theano for Tensorflow
Shiyue Zhang 9:30 19:00 8+
  • discuss the t-SNE pictures with teachers
  • group meeting
Guli 9:00 22:35 13+
  • install Moses
  • weekly report
Peilun Xiao 9:30 18:30 8
  • learn how to use Weka
2016/12/6 Andy Zhang 9:30 19:00 8+
  • run fr-en (single encoder)
  • prepare the paraphrase data set and enumerate the pairs, excluding filler particles such as 呀, 啊, 哈
Shiyue Zhang
Guli 9:05 22:30 12+
  • run Moses
  • write Thesis Report
Peilun Xiao 9:40 18:30 8
  • read a paper
  • use Weka
2016/12/7 Andy Zhang 9:30 19:00 8+
  • prepare the paraphrase data set and enumerate the pairs, excluding filler particles such as 呀, 啊, 哈
  • ran into problems with excessive memory use; had to restrict the number of enumerated items to 15,000
Shiyue Zhang 9:30 19:00 8+
  • change mem to 28-dim states
  • retry previous experiments
Guli 9:15 22:00 12+
  • run Moses
  • write Thesis Report
Peilun Xiao
2016/12/8 Andy Zhang 9:30 19:00 8+
  • finished preparing the data set
  • start working on bidirectional model
Shiyue Zhang 9:30 19:00 8+
  • try more experiments, but none are good; they always converge to the baseline
  • draw the t-SNE picture of the test data
  • discuss with Teacher Wang
Guli 9:15 21:15 12
  • run Moses
  • write Thesis Report
Peilun Xiao 9:30 17:30 7
  • learn GibbsLda++
2016/12/9 Andy Zhang 9:30 19:00 8+
  • finished coding, added masks for encoder inputs
  • made some breakthrough with Jiyuan's help, but still hit the NaN problem
Shiyue Zhang 9:30 19:00 8+
  • return to 384-dim mem, which gives a 0.07 improvement over the baseline
  • replace cos with a one-layer neural network, which is also slightly better than baseline
  • discuss with teachers, and find the problem of scale
Guli 9:15 21:30 12+
  • prepared Thesis Report
Peilun Xiao
2016/12/12 Andy Zhang
Shiyue Zhang
Guli 10:00 19:00 9
  • read papers on solving OOV
Peilun Xiao 9:30 19:00 9+
  • use LDA to generate document vectors
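Generating document vectors with LDA, as in Peilun's entry, can be sketched along these lines; this assumes scikit-learn, and the toy corpus stands in for the real documents.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the real document collection (an assumption).
docs = [
    "machine translation with neural networks",
    "neural networks learn word vectors",
    "topic models describe document collections",
    "document collections and topic vectors",
]

# Bag-of-words counts, then a 2-topic LDA model.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)  # one topic distribution per document

print(doc_topic.shape)  # (4, 2); each row is a distribution summing to 1
```

Each row of `doc_topic` is the document's topic distribution and can be used directly as a fixed-length document vector.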
2016/12/13 Andy Zhang 9:30 18:30 8
  • tried to modify the wrong softmax
Shiyue Zhang 9:30 19:00 8+
  • change mem scale to (0, -10000.0...)
  • try previous experiments
Guli 09:30 21:00 11+
  • read papers
Peilun Xiao 9:50 19:00 9
  • use LDA to generate document vectors
2016/12/14 Andy Zhang 9:30 18:30 8
  • tried to modify the wrong softmax
  • decided to abandon it for now
Shiyue Zhang
Guli 09:20 20:30 11+
  • read papers
  • writing a survey
Peilun Xiao
2016/12/15 Andy Zhang 9:30 18:30 8
  • tried to add BLEU scoring into the code, but ran into an out-of-memory problem
Shiyue Zhang 9:30 19:00 8+
  • the changed scale didn't lead to better results
  • try a 1-dim gate, but it converges to the baseline
Guli 09:20 20:30 11+
  • read papers
  • writing a survey
Peilun Xiao 13:30 19:00 5+
  • use LDA to generate document vectors
2016/12/16 Andy Zhang 9:20 18:20 8
  • finished adding BLEU scoring into the code
Shiyue Zhang 14:00 19:00 5
  • try to only train gate
  • try model similar to attention
Guli 09:30 20:30 11
  • check the data and fix the errors
  • writing a survey
Peilun Xiao
2016/12/19 Andy Zhang 9:30 18:30 8
  • group meeting
  • made some revisions to the BLEU scoring
Shiyue Zhang 9:00 19:00 9
  • review last week work
  • write the report and talk with teachers
  • decide what to do next and meeting
  • try to train gate with true action info
Guli 9:15 20:00 10+
  • weekly meeting
  • conduct comparative test
Peilun Xiao 9:30 19:00 9+
  • weekly meeting
  • write code for tf-idf
2016/12/20 Andy Zhang 9:30 19:00 8+
  • dealt with some bugs on bidirectional model
  • coding on getting encoder outputs
Shiyue Zhang 9:30 19:00 8+
  • try to train the gate with true actions, which gets better results than without true actions, but still not very good
  • try changing cos to inner product, which performs better than cos
Guli 9:00 21:50 12+
  • voice tagging
  • write survey
Peilun Xiao 9:50 18:30 8
  • debug the code
  • prepare a small test set
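The cos-versus-inner-product comparison in Shiyue's 2016/12/20 entry comes down to whether the similarity is normalized by vector length; the inner product keeps magnitude information that cosine throws away. A small numpy sketch with made-up vectors:

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity: inner product normalized by the vector norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = 2.0 * a  # same direction, twice the magnitude

# Cosine ignores magnitude; the inner product does not.
print(cos_sim(a, b))  # ~1.0
print(np.dot(a, b))   # 28.0
```

When the learned vectors' norms carry useful information (e.g. confidence), the unnormalized inner product can score better, which may explain the improvement noted above.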
2016/12/21 Andy Zhang 9:20 18:30 8+
  • finished testing on getting encoder outputs
  • wrote code to output the corresponding source and target sentences, but have not tested it yet
Shiyue Zhang
Guli 9:10 21:10 12
  • voice tagging
  • write survey
Peilun Xiao
2016/12/22 Andy Zhang 9:20 18:30 8+
  • decided to make some revisions to the BLEU scoring part
Shiyue Zhang 9:30 19:00 8+
  • try to change the scales of one-hot vec, and find >=-5000 is good
Guli 9:20 21:10 11+
  • voice tagging
  • write survey
Peilun Xiao 9:20 19:00 9
  • read paper
  • try to solve the bug
2016/12/23 Andy Zhang 9:30 18:10 7+
  • finished the BLEU scoring part
  • made it process by batches & test several checkpoints, saving the best one
Shiyue Zhang 14:30 19:00 5.5
  • review the work of the whole week and discuss with teachers
  • try joint training
Guli 9:20 21:30 12+
  • voice tagging
  • write survey
Peilun Xiao
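The batched BLEU scoring and best-checkpoint selection in Andy's 2016/12/23 entry follows a common pattern: translate a held-out set with each checkpoint, score the output, and keep the highest-scoring checkpoint. A simplified sketch, using clipped unigram precision as a stand-in for full BLEU (the checkpoint names and sentences are hypothetical):

```python
from collections import Counter

def unigram_precision(hypothesis, reference):
    """Clipped unigram precision -- a stand-in for full BLEU."""
    hyp, ref = hypothesis.split(), reference.split()
    overlap = Counter(hyp) & Counter(ref)  # counts clipped by the reference
    return sum(overlap.values()) / len(hyp)

def score_checkpoint(translations, references):
    scores = [unigram_precision(h, r) for h, r in zip(translations, references)]
    return sum(scores) / len(scores)

# Hypothetical outputs of two checkpoints on a two-sentence dev set.
references = ["the cat sat on the mat", "a dog barked loudly"]
checkpoints = {
    "ckpt-1000": ["the cat sat the mat on", "a dog barked"],
    "ckpt-2000": ["cat on a mat", "the dog barked loudly"],
}

best = max(checkpoints,
           key=lambda name: score_checkpoint(checkpoints[name], references))
print(best)  # ckpt-1000
```

Note that unigram precision ignores word order (the scrambled ckpt-1000 output still scores 1.0), which is why real BLEU combines higher-order n-grams with a brevity penalty.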

Time Off Table

Date Yang Feng Jiyuan Zhang

Past progress

nlp-progress 2016/11

nlp-progress 2016/10

nlp-progress 2016/09

nlp-progress 2016/08

nlp-progress 2016/05-07

nlp-progress 2016/04