"ASR Status Report 2017-2-13": difference between revisions


Latest revision as of 05:48, 13 February 2017


{| class="wikitable"
!Date !! People !! Last Week !! This Week
|-
| rowspan="7"|2017.2.13
|Jingyi Lin
||
*
||
*
|-
|Yanqing Wang
||
*
||
*
|-
|Hang Luo
||
*
||
* Joint training of Chinese and Japanese
** To find out whether joint training works on this database
|-
|Ying Shi
||
* Joint training (speech + speaker) baseline; read some papers
||
* Visualization of joint training
** DNN with the same amount of labels
|-
|Yixiang Chen
||
*
||
* ASVspoofing
* Deep speaker embedding: two methods of improvement
|-
|Lantian Li
||
* Deep speaker
* ASVspoofing
* Write book
||
* Deep speaker embedding: 1) memory allocation, paramW sharing; 2) better than cosine distance, but still worse than LDA and PLDA (a scoring sketch follows the table)
* ASVspoofing: a text-dependent task
* Write book: chapters 1-3 complete, chapter 4 remaining
|-
|Zhiyuan Tang
||
* Babel data preparation, [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=595 baselines]
||
* Joint training of speech and language recognition, two languages for preliminary exploration (a generic joint-training sketch follows the table)
|}
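Several items above (Hang Luo, Ying Shi, Zhiyuan Tang) refer to joint training, i.e. one acoustic network trained on two recognition tasks at the same time. The report does not describe the model, so the following is only a minimal sketch under common multi-task assumptions: a shared encoder with two softmax heads (frame-level ASR targets plus speaker or language labels) trained with a weighted sum of cross-entropy losses. The layer sizes, the loss weight alpha, and the toy batch are placeholders, not values from the report.

<pre>
import torch
import torch.nn as nn

class JointNet(nn.Module):
    """Shared encoder with two task heads (a generic multi-task sketch,
    not the exact model used in the report)."""
    def __init__(self, feat_dim=40, hidden=256, n_senones=1000, n_aux=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.asr_head = nn.Linear(hidden, n_senones)  # frame-level ASR targets
        self.aux_head = nn.Linear(hidden, n_aux)      # speaker / language targets

    def forward(self, x):
        h = self.encoder(x)
        return self.asr_head(h), self.aux_head(h)

model = JointNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
alpha = 0.5  # weight of the auxiliary task, a tunable assumption

# Toy batch: 32 frames of 40-dim features, labelled for both tasks.
feats = torch.randn(32, 40)
asr_labels = torch.randint(0, 1000, (32,))
aux_labels = torch.randint(0, 10, (32,))

asr_logits, aux_logits = model(feats)
loss = ce(asr_logits, asr_labels) + alpha * ce(aux_logits, aux_labels)
opt.zero_grad()
loss.backward()
opt.step()
</pre>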
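Lantian Li's note that the deep speaker embedding is "better than cosine distance, but still worse than LDA and PLDA" refers to the back-end used to compare two embeddings. The snippet below is only an illustrative comparison of a raw cosine score against a cosine score computed after an LDA projection trained on labelled embeddings; PLDA is omitted, and the embedding dimension, speaker counts, and data are random placeholders rather than anything from the report.

<pre>
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Placeholder training set: 20 speakers, 10 embeddings each, 128-dim.
X_train = rng.normal(size=(200, 128))
y_train = np.repeat(np.arange(20), 10)

# LDA back-end: project embeddings to a discriminative subspace, then score with cosine.
lda = LinearDiscriminantAnalysis(n_components=19)
lda.fit(X_train, y_train)

enroll, test = rng.normal(size=128), rng.normal(size=128)
raw_score = cosine(enroll, test)
lda_score = cosine(lda.transform(enroll[None, :])[0],
                   lda.transform(test[None, :])[0])
print(f"cosine: {raw_score:.3f}  cosine after LDA: {lda_score:.3f}")
</pre>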