Difference between revisions of "Zhiyuan Tang 2015-08-31"
Revision as of 14:01, 31 August 2015
Last week:
1. Got some results from the WSJ experiments (bidirectional, more layers): adding more layers did not seem to help, nor did the basic bidirectional net, while the pre-trained bidirectional one looked better (a rough sketch of the stacked bidirectional setup follows this list);
2. Got a first glimpse of the capability of end-to-end ASR with B-LSTM on bigger data (1000+ hours); more results are still pending;
3. Revised the Chinese paper on Pronunciation Vector (following Language Vector).
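
The report only names the setup, so as a purely illustrative sketch (the toolkit, layer count, hidden size, and feature/target dimensions of the actual WSJ experiments are not stated here and are assumptions), a stacked bidirectional LSTM acoustic model of the kind referred to above could look roughly like this in PyTorch:

# Purely illustrative sketch of a stacked bidirectional LSTM acoustic model.
# Hidden size, layer count, and the 40-dim feature / 2000-target output
# are assumptions, not the actual WSJ configuration.
import torch
import torch.nn as nn

class BLSTMAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, hidden=320, layers=3, num_targets=2000):
        super().__init__()
        # "More layers" above corresponds to increasing num_layers;
        # bidirectional=True gives the B-LSTM variant.
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                             bidirectional=True, batch_first=True)
        self.output = nn.Linear(2 * hidden, num_targets)

    def forward(self, feats):               # feats: (batch, time, feat_dim)
        hidden_states, _ = self.blstm(feats)
        return self.output(hidden_states)   # frame-level logits

# Example: one utterance of 200 frames of 40-dim features.
model = BLSTMAcousticModel()
logits = model(torch.randn(1, 200, 40))
print(logits.shape)  # torch.Size([1, 200, 2000])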
This week:
1. Get the result of the bidirectional net fine-tuned on WSJ with dark knowledge, then conclude the experiments (a sketch of the dark-knowledge loss follows this list);
2. Get more results of end-to-end ASR with B-LSTM on bigger data (1000+ hours).
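
"Dark knowledge" above refers to Hinton-style knowledge distillation: the student net is trained on the teacher's temperature-softened posteriors in addition to the hard frame labels. Below is a minimal sketch of such a loss assuming frame-level training; the temperature, interpolation weight, and target count are illustrative assumptions, not the values used in these experiments.

# Minimal sketch of a dark-knowledge (distillation) loss for frame-level
# acoustic modeling. Temperature, interpolation weight, and tensor shapes
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: teacher posteriors softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Cross-entropy against the soft targets (the "dark knowledge" term).
    soft_loss = -(soft_targets * log_student).sum(dim=-1).mean()
    # Usual cross-entropy against the hard frame labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    # Interpolate; the T^2 factor keeps the soft-term gradients comparable.
    return alpha * (temperature ** 2) * soft_loss + (1 - alpha) * hard_loss

# Example with random frame-level logits over 2000 targets.
frames, targets = 200, 2000
loss = distillation_loss(torch.randn(frames, targets),
                         torch.randn(frames, targets),
                         torch.randint(0, targets, (frames,)))
print(loss.item())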