Zhiyuan Tang 2015-08-31

Latest revision as of 14:04, 31 August 2015 (Monday)

Last week:

1. got some results from experiments on WSJ (bidirectional, more layers); it seemed that neither more layers nor the basic bidirectional net helped, while the pre-trained bidirectional one looked better;

2. got a glimpse of the capability of end-to-end ASR with B-LSTM on bigger data (1,000+ hours); more results to come;

3. revised the Chinese paper on Pronunciation Vector (following Language Vector).
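The bidirectional nets above combine a forward-in-time and a backward-in-time pass so each frame sees both past and future context. A minimal NumPy sketch of that structure, using a plain tanh recurrence rather than the actual LSTM cells and with all dimensions chosen arbitrarily for illustration:

```python
import numpy as np

def rnn_pass(xs, W_x, W_h, b):
    """One-directional pass of a simple tanh recurrence over a sequence."""
    h = np.zeros(W_h.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)
        hs.append(h)
    return hs

def bidirectional(xs, params_fwd, params_bwd):
    """Run the sequence forward and reversed, re-align the backward states,
    and concatenate, so frame t's representation covers both directions."""
    fwd = rnn_pass(xs, *params_fwd)
    bwd = rnn_pass(xs[::-1], *params_bwd)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
D, H, T = 4, 3, 5  # feature dim, hidden dim, sequence length (toy values)
xs = [rng.standard_normal(D) for _ in range(T)]
make_params = lambda: (rng.standard_normal((H, D)),
                       rng.standard_normal((H, H)),
                       np.zeros(H))
out = bidirectional(xs, make_params(), make_params())
print(len(out), out[0].shape)  # T frames, each with 2*H features
```

In a real B-LSTM acoustic model the tanh cell is replaced by gated LSTM cells and the layers are stacked, but the forward/backward concatenation is the same idea.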


This week:

1. get the result of the fine-tuned bidirectional net on WSJ with dark knowledge, then conclude the experiments;

2. get more results of end-to-end ASR with B-LSTM on bigger data (1000+ hours);

3. some document/paper work.
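The dark-knowledge fine-tuning in item 1 follows the distillation idea: the student is trained against the teacher's temperature-softened outputs in addition to the hard labels. A hedged NumPy sketch; the temperature `T`, weight `alpha`, and all logit values are illustrative assumptions, not the setup used in the report:

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T spreads probability mass."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def dark_knowledge_loss(student_logits, teacher_logits, hard_label,
                        T=2.0, alpha=0.5):
    """Weighted sum of cross-entropy with the hard label and cross-entropy
    with the teacher's softened distribution (the 'dark knowledge')."""
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    ce_soft = -np.sum(soft_teacher * np.log(soft_student + 1e-12))
    ce_hard = -np.log(softmax(student_logits)[hard_label] + 1e-12)
    return alpha * ce_hard + (1 - alpha) * ce_soft

# Toy example: 3-class logits for one frame.
loss = dark_knowledge_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5], hard_label=0)
print(loss)
```

The soft targets carry the teacher's relative confidences over wrong classes, which is the extra signal the fine-tuned bidirectional net is meant to exploit.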