Difference between revisions of "RNN test"

From cslt Wiki
Lr (talk | contribs)
RNNLM Rescore
Test

(26 intermediate revisions by the same user not shown)
The earlier revision contained the WSJ test notes:

==wsj Test==

===wsj_data===
*Data
:* size: 200M, npdata
*parameter
      rand_seed=1
      nwords=10000 # This is how many words we're putting in the vocab of the RNNLM.
      hidden=320
      class=300 # Num-classes... should be somewhat larger than sqrt of nwords.
      direct=2000 # Number of weights that are used for "direct" connections, in millions.
      rnnlm_ver=rnnlm-0.3e # version of RNNLM to use
      threads=1 # for RNNLM-HS
      bptt=2 # length of BPTT unfolding in RNNLM
      bptt_block=20 # block length of BPTT unfolding in RNNLM
*Train RNNLM set

{| border="2px"
|+ Train Set Environment
|-
! Parameters !! hidden !! class !! direct !! bptt !! bptt_block !! threads !! direct-order !! rand_seed !! nwords !! time(min)
|-
! set1
| 320 || 300 || 2000 || 2 || 20 || 1 || 4 || 1 || 10000 || 3380 (56h)
|-
|}

===RNNLM Rescore===
*Acoustic Model
** location: /nfs/disk/work/users/zhangzy/work/train_wsj_eng_new/exp/tri4b_dnn_org/decode_eval92_tri4b_dnn_org
*Result

The latest revision reorganizes the page:

==tool==
* LSTM/RNN training, GPU & deep networks supported [http://sourceforge.net/projects/currennt/]
* RNNLM: RNN LM toolkit [http://www.fit.vutbr.cz/~imikolov/rnnlm/]
* RWTHLM: RNN/LSTM LM toolkit [http://www-i6.informatik.rwth-aachen.de/web/Software/rwthlm.php]
* nplm: NN LM for large-scale data [http://nlg.isi.edu/software/nplm/]
* RNN toolkit from Microsoft [http://research.microsoft.com/en-us/projects/rnn/]
* cslm [http://www-lium.univ-lemans.fr/~cslm/]

==paper==
*[[14-9-30]]
*[[2014-10-9]]

==Steps==
[[process dict and data]]

==Test==
*[[wsj_data]]
*[[chinese_data_gigword]]
*[[jt-chinese]]
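The parameter settings above match the command-line flags of Mikolov's rnnlm toolkit (the rnnlm_ver=rnnlm-0.3e setting). A hypothetical invocation is sketched below; the train/valid/model paths are placeholders, not taken from the page, and the exact flag set should be checked against the toolkit version in use:

```shell
# Hypothetical rnnlm-0.3e training run with the settings listed above.
# train.txt / valid.txt / wsj.rnn are placeholder paths.
rnnlm -train train.txt -valid valid.txt -rnnlm wsj.rnn \
      -hidden 320 -class 300 -direct 2000 -direct-order 4 \
      -bptt 2 -bptt-block 20 -rand-seed 1
```

The -class flag enables the class-based output factorization that makes the softmax over the 10000-word vocabulary tractable; -direct and -direct-order add the hash-based maximum-entropy ("direct connection") features.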

Latest revision as of 06:26, 3 November 2014

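The RNNLM Rescore step above re-ranks recognizer hypotheses by combining the acoustic score with an interpolated n-gram/RNNLM language-model score. A minimal sketch, assuming an n-best list and using log-linear interpolation for simplicity (the actual setup rescores Kaldi decode lattices, and the weights here are hypothetical, not from the page):

```python
# Hypothetical n-best rescoring sketch: each hypothesis carries an acoustic
# log-score plus n-gram and RNNLM log-probabilities; the two LM scores are
# interpolated, scaled by the LM weight, and added to the acoustic score.
def rescore(nbest, lm_weight=10.0, interp=0.5):
    """Return the hypothesis text with the best combined score.

    Each entry: (text, acoustic_logprob, ngram_logprob, rnnlm_logprob).
    """
    def combined(hyp):
        text, am, ngram, rnn = hyp
        lm = interp * rnn + (1.0 - interp) * ngram
        return am + lm_weight * lm
    return max(nbest, key=combined)[0]

# Toy example with made-up scores: the second hypothesis has the better
# acoustic score, but the LM scores pull the first one ahead.
nbest = [
    ("the cat sat", -120.0, -8.0, -7.0),
    ("the cat sad", -118.0, -11.0, -12.0),
]
print(rescore(nbest))
```

The LM weight plays the same role as the LM scale swept during Kaldi lattice rescoring; interp=0.5 weights the RNNLM and the baseline n-gram model equally.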