Dong Wang-1209
Latest revision as of 03:34, 28 September 2012 (Friday)

1. Re-designed the heterogeneous experiments using a shorter test set. The results look OK: [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Discriminative_power here].

2. Worked with Javi and Ravi to set up the CHiME experiments. Hopefully we can meet the Eurosipcal deadline on 10/15.

3. Kicked off the MIND project; the first phase looks OK. We have finalized the design and functional spec and have started coding with jsoap.

4. Worked with Qi Jun on the IASCA paper. The deadline has been extended to 10/05, so it should be OK.

5. Worked on the word-based LM. The results look OK on 863 (85% accuracy): [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Gigabye_LM here]. (See the first sketch after this list.)

6. Worked with Chao on the web demo, particularly the streaming mode of recognition. It now looks fine, at about 1 second per sentence. (See the second sketch after this list.)

7. Discussed possible collaborations with Conexant and Sumavision.

8. Worked with Guozhen on psychological event detection. The SVM results on sample data look fine: 70% frame accuracy and clear discrimination between the two binary states. (See the third sketch after this list.)
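
For item 5, a minimal sketch of what a word-based n-gram LM involves, assuming a toy whitespace-tokenized corpus and add-one smoothing. This is an illustration only; it is not the actual LM setup evaluated on 863 or the model linked above.

<pre>
# Minimal word-bigram LM sketch with add-one smoothing (Python).
# The training sentences below are toy placeholders, not the 863 data.
from collections import Counter
import math

def train_bigram(sentences):
    """Count unigrams and bigrams over whitespace-tokenized sentences."""
    uni, bi = Counter(), Counter()
    for sent in sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        uni.update(words)
        bi.update(zip(words, words[1:]))
    return uni, bi

def sentence_logprob(sent, uni, bi, vocab_size):
    """Add-one smoothed log-probability of one sentence under the bigram LM."""
    words = ["<s>"] + sent.split() + ["</s>"]
    lp = 0.0
    for w1, w2 in zip(words, words[1:]):
        lp += math.log((bi[(w1, w2)] + 1) / (uni[w1] + vocab_size))
    return lp

uni, bi = train_bigram(["we set up the experiments", "the results look ok"])
print(sentence_logprob("the experiments look ok", uni, bi, vocab_size=len(uni)))
</pre>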
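For item 6, a hypothetical sketch of a chunked (streaming) recognition client: audio is pushed to the recognizer in small chunks so decoding can overlap with capture, instead of uploading the whole utterance and waiting. The server URL, chunk size, and raw-PCM-over-HTTP wire format here are assumptions for illustration, not the actual interface of the web demo.

<pre>
# Hypothetical chunked (streaming) recognition client (Python, requests).
# SERVER and the protocol are illustrative assumptions only; they are not
# the real interface of the demo mentioned above.
import requests

CHUNK_BYTES = 3200                            # ~0.1 s of 16 kHz 16-bit mono audio
SERVER = "http://localhost:8000/recognize"    # placeholder endpoint

def stream_file(path):
    """Send raw PCM audio as a chunked POST and return the recognized text."""
    def chunks():
        with open(path, "rb") as f:
            while True:
                buf = f.read(CHUNK_BYTES)
                if not buf:
                    break
                yield buf                     # a generator body is sent chunked
    resp = requests.post(SERVER, data=chunks(),
                         headers={"Content-Type": "application/octet-stream"})
    return resp.text

if __name__ == "__main__":
    print(stream_file("utt01.pcm"))           # placeholder file name
</pre>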
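For item 8, a minimal sketch of frame-level binary classification with an SVM, using scikit-learn. The feature dimensionality and the random data are placeholders standing in for the real per-frame features, so it does not reproduce the 70% figure above.

<pre>
# Sketch of frame-level binary classification with an RBF-kernel SVM.
# Features and labels are random placeholders, not the real event data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 13))        # 2000 frames, 13-dim features (placeholder)
y = rng.integers(0, 2, size=2000)      # binary state label for each frame

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("frame accuracy: %.2f" % accuracy_score(y_te, pred))
</pre>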