NLP Status Report 2016-12-12

Revision as of 11:36, 14 December 2016

Date People Last Week This Week
2016/12/05 Yang Feng
  • s2smn: installed TensorFlow and ran a toy example (solved two problems: a version conflict and an out-of-memory error; see the sketch after this entry)
  • wrote the code for the memory network part
  • Huilan: prepared the periodical report and the system submission.
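
On the out-of-memory point above: if the error was GPU memory, one common cause at the time was TensorFlow pre-allocating the whole card by default. The sketch below shows the usual workaround against the 0.12 / 1.x-era session API; the toy graph is only a stand-in, not the actual s2smn code.

  # Minimal sketch (TensorFlow 0.12 / 1.x-era API): let the session grow GPU
  # memory on demand instead of grabbing all of it up front.
  import tensorflow as tf

  config = tf.ConfigProto()
  config.gpu_options.allow_growth = True  # allocate GPU memory lazily
  # config.gpu_options.per_process_gpu_memory_fraction = 0.5  # or cap the share

  with tf.Session(config=config) as sess:
      # toy graph standing in for the real model
      a = tf.constant([[1.0, 2.0]])
      b = tf.constant([[3.0], [4.0]])
      print(sess.run(tf.matmul(a, b)))  # [[11.]]
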
Jiyuan Zhang
  • attempted to use the memory model to improve the attention model, which had performed poorly
  • generated poems with the local attention model, using vernacular text as input [1]
  • modified the read mechanism of the memory model from top-1 to averaging (see the sketch after this entry)
  • help Andi
  • improve the poem model
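
The "top-1 to averaging" change above can be read as replacing a hard argmax read of the memory with a soft, weighted average over all slots. The NumPy sketch below illustrates the difference; the memory contents, shapes, and scoring are assumptions for illustration, not the actual poem-model code.

  # Illustrative NumPy sketch of a top-1 memory read vs. an averaged read.
  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  rng = np.random.RandomState(0)
  memory = rng.randn(5, 8)   # 5 memory slots, 8-dim entries (assumed shapes)
  query = rng.randn(8)

  scores = memory @ query    # one relevance score per slot

  # top-1: hard selection of the single best-matching slot
  top1_read = memory[np.argmax(scores)]

  # average: soft, attention-weighted mixture over all slots
  weights = softmax(scores)
  avg_read = weights @ memory

  print(top1_read.shape, avg_read.shape)   # (8,) (8,)
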
Andi Zhang
  • prepared a paraphrase data set enumerated from a previous one (ignoring interjections such as "啊呀哈")
  • worked on implementing the bidirectional model in TensorFlow, but ran into NaN losses (see the sketch after this entry)
  • set the NaN problem aside for now and run the model on the same data set used with Theano
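
On the NaN issue above: in a freshly ported model, NaN or inf losses often come from computing -log(softmax(logits)) naively (the softmax under/overflows and the log blows up) or from exploding gradients. This is only a guess at the cause, not a diagnosis of the actual code; the NumPy sketch below shows the first failure mode and the standard log-sum-exp fix.

  # NumPy sketch of one common source of NaN/inf losses and the standard fix.
  import numpy as np

  logits = np.array([1000.0, -1000.0, 0.0])   # extreme logits, e.g. early in training
  target = 1                                  # index of the correct class

  # naive cross-entropy: the softmax over/underflows, and log(0) gives inf
  probs = np.exp(logits) / np.exp(logits).sum()
  naive_loss = -np.log(probs[target])
  print(naive_loss)                           # inf (plus overflow warnings)

  # stable version via the log-sum-exp trick: shift logits by their max first
  shifted = logits - logits.max()
  log_probs = shifted - np.log(np.exp(shifted).sum())
  stable_loss = -log_probs[target]
  print(stable_loss)                          # 2000.0, finite

In TensorFlow the same stabilization comes for free from tf.nn.softmax_cross_entropy_with_logits instead of a hand-written log of a softmax, and tf.clip_by_global_norm is the usual remedy for the exploding-gradient case.
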
Shiyue Zhang
  • finished the t-SNE pictures and discussed them with the teachers
  • ran experiments with the 28-dim mem, but found that almost all of them converged to the baseline
  • returned to the 384-dim mem, which is still slightly better than the baseline
  • found the problem with the action mem: a one-hot vector is not appropriate
  • change the one-hot vector to (0, -10000.0, -10000.0, ...) (see the sketch after this entry)
  • try a 1-dim gate
  • try max cos (cosine similarity)
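
The (0, -10000.0, -10000.0, ...) change above reads like an additive pre-softmax mask: a large negative value drives the softmax weight of the unwanted slots to essentially zero, whereas a plain one-hot (1, 0, 0, ...) fed through a softmax still leaves the non-selected slots with substantial weight. The NumPy sketch below illustrates that, plus a simple reading of the "max cos" idea; shapes and names are assumptions, not the actual action-memory code.

  # NumPy sketch: one-hot vs. large-negative masking under a softmax,
  # and a cosine-similarity ("max cos") memory read.
  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  one_hot = np.array([1.0, 0.0, 0.0, 0.0])
  masked = np.array([0.0, -10000.0, -10000.0, -10000.0])

  print(softmax(one_hot))   # ~[0.48 0.17 0.17 0.17] -> selection is diluted
  print(softmax(masked))    # [1. 0. 0. 0.]          -> effectively hard selection

  # "max cos": score memory slots by cosine similarity and take the best one
  def max_cos_read(memory, query):
      sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8)
      return memory[np.argmax(sims)], sims

  memory = np.random.RandomState(0).randn(4, 6)   # assumed 4 slots, 6-dim entries
  query = np.random.RandomState(1).randn(6)
  read_vec, sims = max_cos_read(memory, query)
  print(sims.round(3))
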
Guli
  • install and run Moses
  • prepare the thesis report
  • read papers on transfer learning and handling OOV words