Chao Xing 2015-09-28

From cslt Wiki

Last Week:

 Solution:
  1. Fixed some problems in the DNN program; the DNN now reaches the same performance as the linear model.
     The problem was in the training process: the number of training samples was tied to the number of speakers, so the model skipped about 50% of the training samples.
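The sampling issue above can be sketched as follows. This is a hypothetical illustration (the data, speaker names, and sampling scheme are assumptions, not the actual training code): drawing one sample per speaker per epoch ignores extra utterances from speakers with several recordings, while iterating over every utterance uses the full training set.

```python
import random

# Hypothetical speaker -> utterance mapping (illustration only).
data = {"spk1": ["a", "b"], "spk2": ["c"], "spk3": ["d", "e", "f"]}

# Buggy scheme: one sample per speaker per epoch.
# Only 3 of the 6 utterances are ever seen in an epoch.
rng = random.Random(0)
buggy_epoch = [rng.choice(utts) for utts in data.values()]

# Fixed scheme: iterate over every utterance regardless of speaker,
# so all 6 samples contribute to training.
fixed_epoch = [u for utts in data.values() for u in utts]

print(len(buggy_epoch), len(fixed_epoch))
```

With three speakers holding 2, 1, and 3 utterances, the buggy scheme visits 3 samples per epoch while the fixed scheme visits all 6, matching the roughly 50% loss described above.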
Plan to do:
  1. Help Lantian finish his papers.
  2. Run experiments with a small vocabulary size, such as 5,000 words.
Problem:
  1. The DNN still does not outperform the linear model.
  

This Week:

 Solution:
  1. Train the DNN with the dropout method, and add SAA to reduce the randomness.
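A minimal sketch of the dropout idea mentioned above, assuming standard inverted dropout (the function name and NumPy implementation are illustrative, not the actual training code): at training time each unit is zeroed with probability p and the survivors are scaled by 1/(1-p), so no rescaling is needed at test time.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout on activations x.

    Training: zero each unit with probability p, scale survivors
    by 1/(1-p) so the expected activation is unchanged.
    Inference: return x untouched.
    """
    if not train or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p  # True where the unit survives
    return x * mask / (1.0 - p)

h = np.ones((4, 8))
h_train = dropout(h, p=0.5)          # some units zeroed, rest scaled to 2.0
h_test = dropout(h, train=False)     # identical to h
```

At inference the layer is a no-op, which is why the same network can be evaluated without any dropout-specific rescaling.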
 Plan to do:
  1. Run experiments with a small vocabulary size, such as 5,000 words.
  2. Run and test some ideas.