Difference between revisions of "2024-02-26"

From cslt Wiki
Latest revision as of 12:25, 26 February 2024

People / This Week / Next Week / Task Tracking (Deadline)
Dong Wang
  • Uyghur database paper, draft done.
  • ICME review, almost done.
  • MicroMagnetic paper, before final check.
Lantian Li
  • GPU status [https://z1et6d3xtb.feishu.cn/wiki/XGcGwRK5viJmpRkjH9AczIhynCh]
  • ASIP-BUPT (CohortTSE, SE-Adapter, SpeakerAug, NeuralScoring)
  • Huawei project (Phase 1)
Ying Shi
  • Prepare for the INTERSPEECH paper [https://z1et6d3xtb.feishu.cn/wiki/TVspwsNXIiCMfUkep3VcFeecnkc?from=from_copylink]
  • Utilizing a Shallow CTC Loss to Permute the Outputs in Multi-Talker ASR
    • CR-SOT / CT-SOT-pretrain-fix / CR-SOT-pretrain-joint / FIFO-SOT / PIT-SOT
    • Testing (in progress)
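For readers unfamiliar with the PIT-SOT variant listed above: permutation-invariant training (PIT) scores every assignment of model output streams to reference streams and trains against the cheapest one. A minimal, generic sketch with a toy MSE loss (illustrative only, not the group's implementation):

```python
from itertools import permutations

def pairwise_loss(output, reference):
    """Toy per-stream loss: mean squared error over frames."""
    return sum((o - r) ** 2 for o, r in zip(output, reference)) / len(reference)

def pit_loss(outputs, references):
    """Return (min_loss, best_perm) over all output-to-reference assignments."""
    best_loss, best_perm = None, None
    for perm in permutations(range(len(references))):
        total = sum(pairwise_loss(outputs[i], references[p])
                    for i, p in enumerate(perm))
        if best_loss is None or total < best_loss:
            best_loss, best_perm = total, perm
    return best_loss, best_perm

# Two-talker example where the outputs arrive in swapped order:
# the permutation (1, 0) realigns them and yields zero loss.
loss, perm = pit_loss([[1.0, 1.0], [0.0, 0.0]],
                      [[0.0, 0.0], [1.0, 1.0]])
```

SOT- and CTC-based variants avoid this exhaustive O(S!) search by letting the model or the data decide the stream ordering; the sketch only shows the baseline PIT criterion they are compared against.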
Zhenghai You
  • Some experiments to validate the cohort
Junming Yuan
  This week:
  • MT-pretraining double-check and extended experiments [https://z1et6d3xtb.feishu.cn/docx/GugFdH32cofRWJxzrzrcNbOZnSc]
    • Identified the influence of the BN layer in the 10-shot/5-shot experiments.
    • Extended to a new pretrained model (trained on clean data with BCE loss).
    • Report performance differences when fixing different layers in fine-tuning (after the group meeting).
  Next week:
  • IS24 paper writing
Chen Chen
  This week:
  • Reproduce robustness experiments [https://z1et6d3xtb.feishu.cn/docx/MHCJdOUqEo9HEixZLm1cF9Z1nzd?from=from_copylink]
  Next week:
  • IS24 paper
Xiaolou Li
  This week:
  • Robustness experiments of the AVSR system
    • White-noise and pink-noise experiments
    • Reproduce RealForensics
  Next week:
  • IS24 paper writing
Zehua Liu
  This week:
  • IS24 paper writing
  Next week:
  • Reproduce LipF
Pengqi Li
  • Attention supervision learning with Liuhuan [https://z1et6d3xtb.feishu.cn/docx/T3U2dTs5poiIgtxtM2Sc0QennWe]
    • Confirmed the code for the training step
    • But performance is not better than without supervision
    • Assumptions and analysis
  • Jinfu and Xueying summarized previous work
Wan Lin
  • Neural Scoring [https://z1et6d3xtb.feishu.cn/docx/TQvWdk8LVo9ONaxQ5Qac9A2Dn3d?from=from_copylink]
Tianhao Wang
  • SE-Adapter assumption verification experiments [https://z1et6d3xtb.feishu.cn/wiki/HVsDwiEhRiOfBwkpDeocHkxGnic]
    • Assumption: entire fine-tuning = CNN refinement + SE adaptation
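The assumption above decomposes full fine-tuning into CNN refinement plus squeeze-and-excitation (SE) adaptation. For reference, a toy SE gate is sketched below: squeeze by global average pooling per channel, excite through a small bottleneck, then rescale each channel. All weights, shapes, and names are illustrative, not the SE-Adapter code.

```python
import math

def squeeze_excite(channels, w1, w2):
    """Minimal SE gate over per-channel features.

    channels: list of C channels, each a flat list of activations.
    w1: hidden-by-C weights (squeeze -> hidden, ReLU).
    w2: C-by-hidden weights (hidden -> one sigmoid gate per channel).
    """
    # Squeeze: global average pooling per channel.
    squeezed = [sum(ch) / len(ch) for ch in channels]
    # Excite: bottleneck MLP producing a gate in (0, 1) per channel.
    hidden = [max(0.0, sum(s * w for s, w in zip(squeezed, row))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(h * w for h, w in zip(hidden, row))))
             for row in w2]
    # Rescale each channel by its gate.
    return [[v * g for v in ch] for ch, g in zip(channels, gates)]
```

Under the stated assumption, adapting only these gate weights (w1, w2) on new-domain data would stand in for the "SE adaptation" half of full fine-tuning, while the convolutional front end is refined or frozen separately.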
Zhenyu Zhou
  • Extensive speaker perturbation [https://z1et6d3xtb.feishu.cn/docx/DViBdvm8KoQMMXxMXC0cWp2vnPf]:
    • VTLP results on cn1 & vox1
    • VTLP+Speed results on cn1 & vox1
    • Future experiment design
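VTLP (vocal tract length perturbation) augments speaker data by warping the frequency axis with a per-utterance factor. A sketch of the standard piecewise-linear warp, with illustrative constants and names (not the experiment's code):

```python
def vtlp_warp(freq, alpha, f_hi=4800.0, f_max=8000.0):
    """Map a frequency (Hz) through a VTLP-style piecewise-linear warp.

    Frequencies up to a boundary are scaled by alpha; the remainder is
    mapped linearly so that f_max (Nyquist) stays fixed. f_hi and f_max
    are example values for 16 kHz audio.
    """
    boundary = f_hi * min(alpha, 1.0) / alpha
    if freq <= boundary:
        return alpha * freq
    # Linear upper segment, continuous at the boundary, pinning f_max.
    slope = (f_max - alpha * boundary) / (f_max - boundary)
    return f_max - slope * (f_max - freq)

# alpha > 1 stretches low frequencies (simulating a shorter vocal tract),
# alpha < 1 compresses them; the Nyquist end stays fixed either way.
low = vtlp_warp(1000.0, 1.1)    # scaled up by alpha
top = vtlp_warp(8000.0, 1.1)    # unchanged
```

In practice the warp is applied to the filterbank center frequencies during feature extraction, with alpha drawn randomly (e.g. from [0.9, 1.1]) for each training utterance; combining it with speed perturbation, as in the VTLP+Speed runs above, stacks two independent augmentations.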
Junhui Chen
  • Weekly report
  • Neural scoring code debugging
Jiaying Wang
  • Experiments on cohort PIT [https://z1et6d3xtb.feishu.cn/docx/A8TVdMe2foXKtOxgdyScnufZn5e]
    • Result comparison with other cohort choices on the train-100 training set
Yu Zhang
  This week:
  • Financial pipeline
    • Portfolio analysis code
    • Write documentation
  Next week:
  • Fix some bugs found during self-checking
  • Check the entire process with Jun Wang
Wenqiang Du
  • Project coordination and related file archiving
  • Closing of the DiTing project
Yang Wei
  • Review some FreeNeb release directories for reference
  • Concurrency performance problem for Huilan ASR
Lily
  This week:
  • Interspeech 2024 paper [https://www.overleaf.com/project/65c1dcd7d7836d49e2359d3f]
  • Journal paper outline [https://z1et6d3xtb.feishu.cn/docx/HHjVdsUfeoPPtYx9rt6c5MRmnGd?from=from_copylink]
  Next week:
  • Learn the principles of data sharing on CHILDED
  • Check Coco's annotation
  • Turn the journal paper from rubbish into something a little better