2013-12-13


AM development

Sparse DNN

  • Optimal Brain Damage (OBD).
  1. Online OBD is on hold.
  2. Started investigating OBD + L1 norm (see the saliency sketch after this list).
  • Efficient computing
  1. Using MKL with CSR storage does not help much for sparse matrix computation: at 20% sparsity, computation takes about twice as long as the dense baseline.
  2. Matrix splitting can improve sparse matrix performance: with BSR (block sparse row) storage at 1/6 sparsity, computation time matched the dense baseline (see the storage-format sketch after this list).
  3. We can re-arrange the matrix structure so that the zeros compose into whole blocks by some smart approaches, leading to better computing speed.
  4. There is only a minor difference between MKL computation and direct computation, which suggests that computing accuracy does not impact ASR performance very much. This lends some rationality to constructing extremely sparse matrices.
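
For reference, the OBD criterion ranks each weight by the loss increase expected when it is zeroed, s = 0.5 * h * w^2, with h the diagonal Hessian term. A minimal NumPy sketch, with the layer shape, Hessian estimate, and pruning rate as illustrative assumptions rather than our training setup:

  import numpy as np

  def obd_prune(W, H_diag, prune_frac):
      # OBD saliency (LeCun et al., 1990): expected loss increase when a
      # weight is zeroed, from a second-order Taylor expansion.
      saliency = 0.5 * H_diag * W ** 2
      threshold = np.quantile(saliency, prune_frac)
      mask = saliency > threshold              # keep the most salient weights
      return W * mask, mask

  # Hypothetical 1024x1024 layer with a precomputed diagonal Hessian.
  rng = np.random.default_rng(0)
  W = rng.normal(size=(1024, 1024))
  H = rng.uniform(0.0, 1.0, size=W.shape)
  W_sparse, mask = obd_prune(W, H, prune_frac=0.8)
  print(mask.mean())                           # roughly 0.2 of the weights survive

The OBD + L1 variant under investigation would additionally apply an L1 penalty during training, shrinking weights toward zero before the saliency-based pruning.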
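
The storage formats being compared can be sketched with SciPy (the actual experiments used MKL; matrix size, density, and block size here are illustrative):

  import time
  import numpy as np
  import scipy.sparse as sp

  rng = np.random.default_rng(0)
  W = rng.normal(size=(2048, 2048)).astype(np.float32)
  W[rng.random(W.shape) > 0.2] = 0.0          # keep roughly 20% of the entries
  x = rng.normal(size=(2048, 128)).astype(np.float32)

  W_csr = sp.csr_matrix(W)                    # compressed sparse row
  W_bsr = sp.bsr_matrix(W, blocksize=(8, 8))  # block sparse row

  for name, M in [("dense", W), ("CSR", W_csr), ("BSR", W_bsr)]:
      t0 = time.perf_counter()
      for _ in range(50):
          M @ x
      print(name, time.perf_counter() - t0)

Note that BSR stores every 8x8 block containing any nonzero as a dense block, so it pays off only when the zeros are arranged into whole blocks, which is exactly what the re-arrangement in point 3 aims at.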

Efficient DNN training

  1. Momentum-based training: NN frame accuracy decreased with a larger momentum, but ASR performance improved (e.g., by 0.2); see the update-rule sketch after this list.
  2. Asymmetric window (left 20 frames, right 5): NN accuracy increased by 7% (see the splicing sketch after this list).
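
Classical momentum replaces the plain SGD step with a velocity term; a minimal sketch, with illustrative hyperparameter values:

  def momentum_step(w, grad, velocity, lr=0.008, momentum=0.9):
      # v <- momentum * v - lr * grad ; w <- w + v. A larger momentum
      # smooths the update direction, which can trade a little frame
      # accuracy for better ASR performance, as observed above.
      velocity = momentum * velocity - lr * grad
      return w + velocity, velocity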
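
The asymmetric window splices 20 left and 5 right context frames onto each input frame; a NumPy sketch (the 40-dimensional features are an illustrative assumption):

  import numpy as np

  def splice(frames, left=20, right=5):
      # frames: (T, D) acoustic features. Returns (T, (left+1+right)*D);
      # edges are padded by repeating the first/last frame.
      T, D = frames.shape
      padded = np.concatenate([np.repeat(frames[:1], left, axis=0),
                               frames,
                               np.repeat(frames[-1:], right, axis=0)])
      return np.stack([padded[t:t + left + 1 + right].ravel()
                       for t in range(T)])

  feats = np.random.randn(100, 40)   # 100 frames of 40-dim features
  print(splice(feats).shape)         # (100, 1040)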


Engine optimization

  • Investigating LOUDS-based FST storage (a toy sketch of the encoding follows).
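
LOUDS (level-order unary degree sequence) is a succinct representation that stores a tree or automaton topology as a bit string navigated with rank/select. A toy Python sketch of the encoding and navigation; a real implementation would use O(1) rank/select bit vectors, and FST arcs would additionally carry labels and weights:

  from collections import deque

  def louds_bits(children, root):
      # '10' for a virtual super-root, then for each node in BFS order:
      # one '1' per child, followed by a terminating '0'.
      bits, queue = [1, 0], deque([root])
      while queue:
          node = queue.popleft()
          for c in children.get(node, []):
              bits.append(1)
              queue.append(c)
          bits.append(0)
      return bits

  def rank(bits, value, pos):
      # occurrences of value in bits[1..pos] (positions are 1-based)
      return sum(b == value for b in bits[:pos])

  def select(bits, value, k):
      # 1-based position of the k-th occurrence of value (a linear scan
      # here; real LOUDS uses constant-time succinct structures)
      seen = 0
      for i, b in enumerate(bits, 1):
          seen += (b == value)
          if seen == k:
              return i

  # A node is named by the position of the '1' bit its parent wrote
  # for it; the root is the super-root's '1' at position 1.
  def first_child(bits, x):
      y = select(bits, 0, rank(bits, 1, x)) + 1
      return y if y <= len(bits) and bits[y - 1] == 1 else None

  def parent(bits, x):
      return select(bits, 1, rank(bits, 0, x))

  tree = {"r": ["a", "b"], "a": ["c"]}
  bits = louds_bits(tree, "r")   # [1,0,1,1,0,1,0,0,0]
  print(first_child(bits, 1))    # 3, i.e. node 'a'
  print(parent(bits, 3))         # 1, back at the root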


LM development

NN LM

  • Trained a bigger CSLM with a 10240-word output layer. Performance is better than training 10 networks separately and merging them (a toy comparison follows).
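
A toy NumPy illustration of the difference (the sizes come from the note above; the merging rule is a simplified assumption): a single 10240-word softmax is normalised once over the whole output vocabulary, whereas ten separately trained networks each normalise over their own slice only.

  import numpy as np

  def softmax(z):
      e = np.exp(z - z.max())
      return e / e.sum()

  rng = np.random.default_rng(0)
  h = rng.normal(size=256)                  # hidden state for one history
  W = rng.normal(size=(10240, 256)) * 0.01  # output weights

  p_big = softmax(W @ h)                    # one softmax over all 10240 words

  # Ten separate 1024-word networks: each slice is normalised on its
  # own, so a naive merge ranks words differently from p_big.
  slices = [softmax(W[i * 1024:(i + 1) * 1024] @ h) for i in range(10)]
  p_merged = np.concatenate(slices) / 10.0  # crude uniform merge

  print(p_big.sum(), p_merged.sum())        # both sum to 1, yet the
                                            # per-word scores disagree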


Embedded development

  • Embedded stream mode in progress.


Speech QA

  • The class-based QA LM built from the Q db data is done (see the sketch after this list).
  • Extracted some music-related documents from Baidu know-how.
  • Text-based QA: 121/199 queries answered correctly. 58 returned no answer (24 with no matching attribute in the db, 27 with no record). 20 returned incorrect answers (5 had no answer in the db and so pulled incorrect answers from the web; 8 had no record and likewise answered incorrectly from the web; 3 were db errors).
  • Speech-based QA: WER = 8.70%, SEE = 32.0%. Almost all English queries are wrong; with English queries removed, SEE = 27.1%.
  • SP-QA accuracy is 45.14% over all inputs (18*199).
  • Will try to recover some ASR errors using QA, e.g., pronunciation correction.
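
In the class-based LM, word probabilities factor through entity classes filled from the Q db: p(w | h) = p(class(w) | h) * p(w | class(w)). A minimal sketch; the entity lists, in-class probabilities, and the class_lm_prob interface are all hypothetical:

  CLASS_OF = {"七里香": "SONG", "晴天": "SONG", "周杰伦": "SINGER"}
  P_IN_CLASS = {"七里香": 0.5, "晴天": 0.5, "周杰伦": 1.0}

  def qa_lm_prob(word, history, class_lm_prob):
      # class_lm_prob(token, history): an n-gram LM trained on text in
      # which entities are replaced by class tokens (hypothetical).
      token = CLASS_OF.get(word, word)
      p = class_lm_prob(token, history)
      if token != word:                # expand the class token
          p *= P_IN_CLASS[word]
      return p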