ASR Status Report 2017-2-13

Latest revision as of 05:48, 13 February 2017


Date: 2017.2.13

Jingyi Lin

Yanqing Wang

Hang Luo
  This week:
  • Joint training of Chinese and Japanese
    • To find whether joint training works on this database

Ying Shi
  Last week:
  • Joint training (speech + speaker) baseline; read some papers
  This week:
  • Visualization of joint training
    • DNN with the same amount of labels

Yixiang Chen
  • ASVspoofing
  • Deep speaker embedding: two methods of improvement

Lantian Li
  Last week:
  • Deep speaker
  • ASVspoofing
  • Write book
  This week:
  • Deep speaker embedding: 1) memory allocation, parameter (W) sharing; 2) better than cosine distance, while still worse than LDA and PLDA
  • ASVspoofing: a text-dependent task
  • Write book: complete chapters 1-3, leaving chapter 4
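The back-end comparison above (cosine distance versus LDA/PLDA) refers to scoring pairs of fixed-dimensional speaker embeddings. A minimal sketch of the cosine back-end only, using random stand-in vectors rather than real d-vectors (the dimensionality and perturbation level are illustrative assumptions, not values from the report):

```python
import math
import random

def cosine_score(a, b):
    """Cosine similarity between two fixed-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy trial with random stand-in "embeddings" (not real d-vectors):
# a same-speaker pair is simulated as a slightly perturbed copy.
random.seed(0)
spk1 = [random.gauss(0.0, 1.0) for _ in range(128)]
same = [x + 0.1 * random.gauss(0.0, 1.0) for x in spk1]   # same "speaker"
other = [random.gauss(0.0, 1.0) for _ in range(128)]      # different "speaker"

same_score = cosine_score(spk1, same)
other_score = cosine_score(spk1, other)
```

LDA and PLDA, which the report finds still stronger, additionally learn within- and between-speaker variability from labeled training data; cosine scoring uses no such training, which is one common explanation for the gap.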
Zhiyuan Tang
  • Joint training of speech and language recognition; two languages for preliminary exploration
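Several entries above (Hang Luo, Ying Shi, Zhiyuan Tang) concern joint (multi-task) training: a single shared network trunk feeding one output head per task, e.g. phone posteriors for speech recognition and language posteriors for language recognition. A minimal forward-pass sketch; all layer sizes are hypothetical and none of the dimensions come from the report:

```python
import math
import random

random.seed(0)

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical sizes: 40-dim input frame, 64-dim shared layer,
# 100 phone classes (speech task), 2 languages (language task).
DIM_IN, DIM_HID, N_PHONES, N_LANGS = 40, 64, 100, 2
W_shared = rand_matrix(DIM_HID, DIM_IN)   # trunk shared by both tasks
W_phone = rand_matrix(N_PHONES, DIM_HID)  # speech-recognition head
W_lang = rand_matrix(N_LANGS, DIM_HID)    # language-recognition head

def forward(frame):
    h = relu(matvec(W_shared, frame))      # shared representation
    return softmax(matvec(W_phone, h)), softmax(matvec(W_lang, h))

phone_post, lang_post = forward([random.gauss(0.0, 1.0) for _ in range(DIM_IN)])
```

In training, gradients from both task losses would flow into `W_shared`, which is what lets one task's labels regularize the other; the report's open question is whether this transfer helps on the database at hand.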