NLP Status Report 2016-12-12

Latest revision as of 00:14, 19 December 2016

Date: 2016/12/12

Yang Feng
  Last week:
  • s2smn: installed TensorFlow and ran a toy example (solved problems: a version conflict and memory exhaustion)
  • wrote the code of the memory-network part
  • Huilan: prepared the periodical report and the system submission
  This week:
  • s2smn: finish the manual for NMT in TensorFlow
  • Huilan: system submission
Jiyuan Zhang
  Last week:
  • attempted to use the memory model to improve the poorly performing attention model
  • generated poems with the local attention model, taking vernacular text as input [1]
  • modified the working mechanism of the memory model (from top-1 to averaging)
  This week:
  • help Andi
  • improve the poem model
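The change from a top-1 read to averaging can be sketched as below; the dot-product scoring and the function names are illustrative assumptions, not the actual poem model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def read_memory(query, memory, mode="average"):
    """Read a memory matrix (slots x dim) addressed by a query vector.

    mode="top1"    returns the single best-matching slot (hard read);
    mode="average" returns a softmax-weighted average of all slots.
    """
    scores = memory @ query              # dot-product relevance per slot
    if mode == "top1":
        return memory[np.argmax(scores)]
    return softmax(scores) @ memory      # soft, differentiable read
```

The soft average keeps the read differentiable end to end, which is the usual motivation for moving away from a hard top-1 selection.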
Andi Zhang
  Last week:
  • prepared a paraphrase data set enumerated from a previous one (ignoring interjections such as "啊呀哈")
  • worked on coding the bidirectional model under TensorFlow; ran into NaN problems
  This week:
  • ignore the NaN problem for now and run the model on the same data set used with Theano
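A common guard against NaN losses like the one above is clipping gradients by their global norm before each update; this is a generic NumPy sketch of the idea, not the actual training code:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so their joint L2 norm is at
    most max_norm -- a standard guard against the exploding gradients
    that often surface as NaN losses in RNN training."""
    global_norm = np.sqrt(sum(np.sum(g * g) for g in grads))
    scale = min(1.0, max_norm / (global_norm + 1e-12))
    return [g * scale for g in grads], global_norm
```

Checking the loss for NaN and skipping (or clipping) the offending batch is a cheap first diagnostic before digging into the model itself.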
Shiyue Zhang
  Last week:
  • finished the t-SNE pictures and discussed them with the teachers
  • tried experiments with a 28-dim memory, but found almost all of them converged to the baseline
  • returned to the 384-dim memory, which is still slightly better than the baseline
  • found the problem with the action memory: a one-hot vector is not appropriate
  • report: http://cslt.riit.tsinghua.edu.cn/mediawiki/images/2/2f/RNNG%2Bmm%E5%AE%9E%E9%AA%8C%E6%8A%A5%E5%91%8A.pdf
  This week:
  • change the one-hot vector to (0, -10000.0, -10000.0, ...)
  • try a 1-dim gate
  • try max cosine similarity
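The planned replacement of the one-hot vector by (0, -10000.0, -10000.0, ...) reads like an additive log-space mask: added to the logits before a softmax, it drives the non-target probabilities to (almost) zero while keeping the operation differentiable. A minimal NumPy illustration, with made-up logit values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])          # hypothetical action scores
mask = np.array([0.0, -10000.0, -10000.0])  # 0 on the target, -1e4 elsewhere

# ~ (1, 0, 0), but produced by the softmax rather than imposed as a
# hard one-hot target, so gradients still flow through the logits.
probs = softmax(logits + mask)
```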
Guli
  Last week:
  • installed and ran Moses
  • prepared the thesis report
  This week:
  • read papers about transfer learning and about handling OOV words
Peilun Xiao
  Last week:
  • read a paper about document classification with GMM distributions of word vectors and tried to code it in Python
  • used LDA to reduce the dimensionality of the text in R52 and R8 and compared classification performance
  This week:
  • use LDA to reduce the dimensionality of the text in 20news and WebKB
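The LDA dimensionality-reduction step can be sketched with scikit-learn; the toy corpus and the topic count below are placeholders for the real R52/R8/20news data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock markets fell sharply today",
    "investors sold their shares",
]

# LDA operates on raw term counts, not tf-idf.
counts = CountVectorizer().fit_transform(docs)

# Each document becomes a low-dimensional topic-proportion vector,
# which can then be fed to any standard classifier.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape: (n_docs, n_components)
```

Each row of `doc_topics` is a normalized topic distribution, so the reduced representation is directly comparable across documents.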