2013-07-22

From cslt Wiki
Revision as of 12:27, 22 July 2013 (Mon) by Wangd (talk | contribs)


Data sharing

  • LM count files still undelivered!

DNN progress

Experiments

  • Sparse DNN.


Implementation    1200-1200-1200-3536    1200-1200-1200-3536-sparse0.3 (sparsity 1/5)
original ATLAS    RT 2.3                 RT 2.3
ATLAS sparse      RT 54                  RT 14
NIST smatmat      RT 27.3                RT 5.98


Implementation    800-800-800-2108       800-800-800-2108-sparse0.3 (sparsity 2/5)
original ATLAS    RT 1.1                 RT 1.1
NIST smatmat      RT 11.9                RT 5.5
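The RT gap between the dense ATLAS path and the sparse paths above comes down to how a sparse product is computed. A minimal CSR matrix-vector product sketch (pure Python, illustrative only; the function and variable names are ours, not from the experiments):

```python
# CSR matrix-vector product: y = A @ x, with A stored as
# (indptr, indices, data). The per-element indirect access
# (x[indices[k]]) is the overhead that makes sparse computation
# slower than dense BLAS unless the matrix is very sparse.
def csr_matvec(indptr, indices, data, x):
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y

# Toy 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form:
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```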

Conclusions:

  1. ATLAS works well for both non-sparse and sparse computation.
  2. Sparsity does not pay off when the sparsity rate is low: sparse computation outperforms non-sparse computation only when the sparsity rate is higher than 1/15.
  3. In other words, the first cost of employing sparsity is the error-rate increase that comes with 1/15 compression.
  4. The sparse approach seems more useful for storage: once sparsity exceeds 1/2, CSR/CSC storage starts to save space.
  5. Unit-based sparsity may be preferable to weight-based sparsity.
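The 1/2 break-even point in conclusion 4 can be checked with a back-of-the-envelope calculation. A sketch, assuming float32 values and int32 indices (4 bytes each; these sizes are our assumption, not stated above):

```python
# Dense storage vs. CSR storage for an m x n weight matrix.
# CSR stores nnz values, nnz column indices, and m+1 row pointers,
# so it beats dense storage only when fewer than ~half the entries
# are nonzero.
def dense_bytes(m, n, val=4):
    return m * n * val

def csr_bytes(m, n, density, val=4, idx=4):
    nnz = int(m * n * density)
    return nnz * (val + idx) + (m + 1) * idx

m, n = 1200, 1200
for density in (0.3, 0.5, 0.6):
    print(density, csr_bytes(m, n, density) < dense_bytes(m, n))
```

At 30% density CSR saves space; at 50% and above it is already larger than the dense matrix, matching conclusion 4.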

Tencent exps

GPU & CPU merge

  1. Hold


Embedded progress

  • Tested various PS models:

ID          model     feature  WER    RT    storage
semi_10000  semi HMM  s2-4x    6.30%  0.80  10.2M
semi_5000   semi HMM  s2-4x    6.70%  0.74  5.2M
semi_5000   semi HMM  1c-d-dd  9.11%  0.91  1.3M
ptm_5000    PTM HMM   s2-4x    6.47%  2.15  1.3M

So no single model wins on all the criteria. semi_5000 (s2-4x) looks like an acceptable trade-off.