ASR Status Report 2017-7-17

Revision as of 04:35, 17 July 2017 (Mon)

Date: 2017.7.17

Miao Zhang
  Last week:
  • Read the paralinguistics paper and the material from Teacher Li
  • Work out the recording plan (delayed)
  This week:
  • Work out the recording plan with instructions from Teacher Li
  • Check the Deep Learning book
Hui Tang
Yanqing Wang
  Last week:
  • Check the former conclusions on a narrow network (not finished yet)
  • Read the source code in preparation for the retraining task
  This week:
  • Finish checking the former conclusions and try to find the conditions under which they apply
  • Finish the retraining task
Ying Shi
Yixiang Chen
  Last week:
  • Synthesize speech from the voiceprint spectrum
  • Read the paralinguistics material and the paralinguistic challenges of 2009-2017
  This week:
  • Share the paralinguistics material
Lantian Li
Zhiyuan Tang
  Last week:
  • Replace the old version of Kaldi with the new one (delayed)
  • Gather Part 1, 'Speech, Speech Processing and Tools', of the Kaldi book for further release (delayed)
  This week:
  • Replace the old version of Kaldi with the new one
  • Gather Part 1, 'Speech, Speech Processing and Tools', of the Kaldi book for further release




Date: 2017.7.10

Miao Zhang
  Last week:
  • Gave a report on trivial events
  • Finished the test website with Hui Tang
  This week:
  • Read the paralinguistics paper
  • Make a recording plan and, hopefully, start recording
Hui Tang
  Last week:
  • Completed the test website
  • Finished checking a subset of our speech databases (nearly 800 sentences)
  This week:
  • Finish checking the remainder of the databases (nearly 2,500 sentences)
Yanqing Wang
  Last week:
  • Used different activation functions for pruning
  This week:
  • Make the network narrower and test the former conclusions
  • Change the source code to retrain the neural network
Ying Shi
  Last week:
  • Helped Zheling finish his first crawler program
  • Checked the hazak speech data:
    • train: 1,346 utterances with utterance-level WER larger than 20%
    • test: 759 utterances with utterance-level WER larger than 20%
  • Transfer learning based on th30 and wsj (performance is poor)
  This week:
  • Build tools for speech data checking
  • Transfer learning based on a large Chinese ASR model
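The data check above flags utterances whose utterance-level WER exceeds 20%. A minimal sketch of that kind of filter is below; the function names and the exact threshold wiring are illustrative assumptions, not the actual checking tool:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token lists (sub/ins/del all cost 1)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def utterance_wer(ref, hyp):
    """Utterance-level WER: edit distance divided by reference length."""
    ref_toks, hyp_toks = ref.split(), hyp.split()
    return edit_distance(ref_toks, hyp_toks) / max(len(ref_toks), 1)

def flag_bad_utterances(pairs, threshold=0.20):
    """Return ids of (id, reference, hypothesis) triples with WER above threshold."""
    return [uid for uid, ref, hyp in pairs if utterance_wer(ref, hyp) > threshold]

# Toy example: one clean utterance, one with two substitutions (WER = 0.5).
pairs = [("utt001", "the cat sat on the mat", "the cat sat on the mat"),
         ("utt002", "the cat sat on the mat", "a cat sat on a mat")]
print(flag_bad_utterances(pairs))  # → ['utt002']
```

In practice the hypotheses would come from decoding output rather than a hand-written list, but the thresholding step is the same.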
Yixiang Chen
  Last week:
  • Plotted figures for "learning deep speaker features"
  This week:
  • Comprehend the paralinguistics material
  • Record voice
Lantian Li
  Last week:
  • Deep speaker features:
    • segmentation is still not suitable
    • visualization with t-SNE looks promising
  • Helped Zhangzy decode d-vectors and retrain a new deep speaker model
  This week:
  • Work out more details of the segmentation experiments
  • Prepare the weekly meeting
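The t-SNE visualization mentioned above projects high-dimensional deep speaker features (d-vectors) to 2-D for inspection. A sketch using scikit-learn follows; the vector dimensionality, perplexity, and the synthetic two-speaker data are illustrative assumptions, not the actual experiment:

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_dvectors_2d(dvectors, perplexity=5, seed=0):
    """Project d-vectors (n_utterances x dim) to 2-D with t-SNE.

    perplexity must be smaller than the number of samples.
    """
    tsne = TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed)
    return tsne.fit_transform(np.asarray(dvectors, dtype=np.float32))

# Toy example: 20 random 64-dim "d-vectors" drawn from two fake speakers
# with different means, so they should separate in the 2-D embedding.
rng = np.random.default_rng(0)
dvecs = np.vstack([rng.normal(0.0, 1.0, (10, 64)),
                   rng.normal(3.0, 1.0, (10, 64))])
points = embed_dvectors_2d(dvecs)
print(points.shape)  # one 2-D point per utterance, ready for a scatter plot
```

The resulting points would typically be scatter-plotted with one color per speaker to judge how well the features cluster.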
Zhiyuan Tang
  Last week:
  • Scanned the source code of the auto-scoring system
  • Gave a report on the speech group's research (Thursday)
  This week:
  • Replace the old version of Kaldi with the new one
  • Gather Part 1, 'Speech, Speech Processing and Tools', of the Kaldi book for further release