Difference between revisions of "ASR Status Report 2016-11-21"

From cslt Wiki
 
(18 intermediate revisions by 4 users not shown)

Latest revision as of 01:13, 28 November 2016 (Mon)

Date: 2016.11.21

Hang Luo
  Last week:
  • Explored the language recognition models, including:
  • Evaluated the model at the sentence and frame level; the accuracy is very high.
  • Minimized the language model, trained it both alone and jointly with the speech model, and evaluated the results.
  This week:
  • Continue the basic exploration of joint training.
  • Read papers on multi-language recognition models and related topics.
Ying Shi
  Last week:
  • Fighting with the Kazakh speech recognition system: because of the huge size of HCLG.fst, the decoding job kept bringing the server down. Several methods were tried:
  • Changed the size of the word list and corpus; this method did not work very well.
  • Pruned the LM with a threshold of 2e-7; the LM size dropped from 290M to 60M, but the resulting WER is very poor.
  • Uploaded some results of these experiments to CVSS [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=576].
  This week:
  • Too many personal affairs delayed last week's visualization job; I will try my best to finish it this week.
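The LM pruning trade-off above (smaller graph, worse WER) can be illustrated with a toy threshold-based pruner. This is a hypothetical simplification: real toolkits such as SRILM use relative-entropy (Stolcke) pruning, where a value like 2e-7 is an entropy threshold and back-off weights are re-estimated, neither of which this sketch does.

```python
# Toy sketch of threshold-based n-gram pruning. `prune_ngrams` and the
# tiny LM below are illustrative, not the setup from the experiments.

def prune_ngrams(ngram_probs, threshold):
    """Drop higher-order n-grams whose probability falls below threshold.
    Unigrams are always kept so every word stays reachable via back-off."""
    return {ng: p for ng, p in ngram_probs.items()
            if len(ng) == 1 or p >= threshold}

lm = {
    ("the",): 0.05,
    ("rare",): 1e-8,        # unigram: kept even below the threshold
    ("the", "cat"): 3e-4,   # bigram above the threshold: kept
    ("cat", "sat"): 1e-7,   # bigram below the threshold: pruned
}
pruned = prune_ngrams(lm, 2e-7)
print(len(lm), "->", len(pruned))  # 4 -> 3
```

Aggressive pruning shrinks the LM (and hence HCLG.fst), but removing informative higher-order n-grams is exactly what degrades WER.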



Yixiang Chen
  Last week:
  • Learned the MFCC extraction mechanism.
  • Read the Kaldi compute-feature code to find out how to change MFCC.
  • Frequency-weighting based feature extraction.
  This week:
  • Continue replay detection (Freq-Weighting and Freq-Warping).
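One place a frequency weighting can enter MFCC extraction is just before the DCT, by scaling the log mel filter-bank energies per bin. The sketch below assumes that formulation; the weighting curve, bin count, and random features are illustrative, not the ones used in the replay-detection experiments.

```python
import numpy as np

# Sketch of frequency-weighted MFCC: scale each mel bin, then take the
# DCT. `log_fbank` stands for log mel filter-bank energies
# (frames x mel bins); all concrete values here are made up.

def dct_ii(x, num_ceps):
    """Plain DCT-II over the last axis, keeping the first num_ceps coefficients."""
    n = x.shape[-1]
    k = np.arange(num_ceps)[:, None]
    basis = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    return x @ basis.T

def weighted_mfcc(log_fbank, weights, num_ceps=13):
    """Multiply each mel bin by a frequency weight, then apply the DCT."""
    return dct_ii(log_fbank * weights, num_ceps)

rng = np.random.default_rng(0)
log_fbank = rng.normal(size=(4, 23))   # 4 frames, 23 mel bins
weights = np.linspace(1.0, 2.0, 23)    # e.g. emphasise higher frequencies
feats = weighted_mfcc(log_fbank, weights)
print(feats.shape)  # (4, 13)
```

With all-ones weights this reduces to the ordinary DCT step, so the weighting is a drop-in modification to a standard MFCC pipeline.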
Lantian Li
  Last week:
  • Joint training on SRE and LRE (LRE task) [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=574].
    • TDNN is better than LSTM.
    • LRE is a long-term task.
  • Briefly overviewed the Interspeech SRE-related papers.
  • CSLT replay detection:
    • Baseline done (Freq / Mel domain).
    • Performance-driven Freq-Weighting and Freq-Warping --> Yixiang.
  This week:
  • LRE task.
  • Replay detection.
Zhiyuan Tang
  Last week:
  • Report for Weekly Reading (a brief review of Interspeech16), just prepared.
  • Language scores as decoding mask (1. multiply probability: very bad; 2. add log-softmax: a little bad).
  • Training with mask failed.
  This week:
  • Training with shared layers.
  • Explore single tasks.
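The two mask variants above can be sketched with toy numbers. Note that after per-frame renormalisation, multiplying posteriors and adding log-softmax scores yield the same distribution, so any difference in decoding results must come from implementation details (scaling, or skipping the renormalisation). The logits below are illustrative assumptions.

```python
import numpy as np

# Toy sketch: combining acoustic and language scores as a decoding mask,
# (1) in the probability domain and (2) in the log domain.

def log_softmax(x):
    x = x - x.max(-1, keepdims=True)
    return x - np.log(np.exp(x).sum(-1, keepdims=True))

acoustic = np.array([2.0, 1.0, 0.1])   # per-state acoustic logits (made up)
language = np.array([0.5, 1.5, -1.0])  # per-state language logits (made up)

# (1) probability domain: multiply posteriors, then renormalise
p = np.exp(log_softmax(acoustic)) * np.exp(log_softmax(language))
mask_multiply = p / p.sum()

# (2) log domain: add log-softmax scores (renormalised here for comparison)
mask_log_add = np.exp(log_softmax(log_softmax(acoustic) + log_softmax(language)))

print(np.allclose(mask_multiply, mask_log_add))  # True
```

Since the two variants are mathematically equivalent up to normalisation, comparing them mainly probes how the mask is injected into the decoder rather than the combination rule itself.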




Date: 2016.11.14

Hang Luo
  Last week:
  • Read papers about highway connections and multi-task learning.
  This week:
  • Explore the language recognition model with speech+language joint training; find how to use the language information.
  • Finish the ML-book.
Ying Shi
  Last week:
  • Kazakh recognition baseline finished [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&step=view_request&cvssid=576 here].
  This week:
  • Figures for the ML-book; read papers on NN visualization.
Yixiang Chen
  Last week:
  • Modified the ML-book and read papers.
  • Prepared the replay detection baseline [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=571].
  This week:
  • Complete the replay baseline and attempt to modify the MFCC calculation.
Lantian Li
  Last week:
  • Completed the joint training for TASLP (speaker parts) [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=573].
  • Joint training on SRE and LRE (still over-fitting!) [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=574].
  • Read some papers and downloaded four databases [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Data_resources].
  • The CSLT replay detection database is ready [/work4/lilt/Replay].
  This week:
  • Joint training on SRE and LRE.
  • Baseline system for replay detection.
Zhiyuan Tang
  Last week:
  • Finished the additional experiments on joint learning (speech & speaker) for TASLP (multi-task, i-vector as part of the input) [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=570].
  • Prepared a brief review of Interspeech16.
  This week:
  • Report for Weekly Reading (a brief review of Interspeech16).
  • Joint training for bilingual: language scores as decoding mask; explore the best info receiver by studying single tasks with extra info.