People | This Week | Next Week | Task Tracking (Deadline)
|
Dong Wang
|
- Uyghur database paper, draft done.
- ICME review, almost done.
- MicroMagnetic paper, before final check.
|
|
|
Lantian Li
|
- GPU status [1]
- ASIP-BUPT (CohortTSE, SE-Adapter, SpeakerAug, NeuralScoring)
- Huawei project (Phase 1)
|
|
|
Ying Shi
|
- Prepare for INTERSPEECH paper: https://z1et6d3xtb.feishu.cn/wiki/TVspwsNXIiCMfUkep3VcFeecnkc?from=from_copylink
- Utilizing a Shallow CTC Loss to Permute the Outputs in Multi-Talker ASR
- CR-SOT / CT-SOT-pretrain-fix / CR-SOT pretrain joint / FIFO-SOT / PIT-SOT
- Testing [in progress]
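The output-permutation idea named above can be sketched as a PIT-style assignment: score every pairing of output channels against reference transcripts and keep the cheapest one. The snippet below is an illustrative toy, not the paper's method — it uses edit distance as a stand-in for a real CTC loss, and `best_permutation` is a hypothetical helper name.

```python
from itertools import permutations

def edit_distance(a, b):
    """Levenshtein distance, used here as a cheap stand-in for a CTC loss."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[n]

def best_permutation(outputs, references, loss_fn=edit_distance):
    """Try every assignment of output channels to references and keep
    the permutation with the lowest total loss (PIT-style)."""
    best_loss, best_perm = None, None
    for perm in permutations(range(len(references))):
        total = sum(loss_fn(out, references[p]) for out, p in zip(outputs, perm))
        if best_loss is None or total < best_loss:
            best_loss, best_perm = total, perm
    return best_loss, best_perm

# channel 0 best matches the second reference, channel 1 the first
loss, perm = best_permutation(["hello word", "good night"],
                              ["good night", "hello world"])
```

A real system would replace `edit_distance` with per-channel CTC losses computed against each candidate reference, but the argmin-over-permutations structure is the same.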
|
|
|
Zhenghai You
|
- Experiments to validate the cohort approach
|
|
|
Junming Yuan
|
- MT-pretraining double-check experiments + extended experiments [2]
- Identified the influence of the BN layer in the 10-shot/5-shot experiments.
- Extended with a new pretrained model (trained on clean data with BCE loss).
- Report performance differences when fixing different layers during fine-tuning (after the group meeting).
|
|
|
Chen Chen
|
- Reproduce robustness experiments [3]
|
|
|
Xiaolou Li
|
- Robustness experiments on the AVSR system
- White-noise and pink-noise experiments
- Reproduce RealForensics
|
|
|
Zehua Liu
|
|
|
|
Pengqi Li
|
- [4] Attention-supervised learning with Liuhuan
- Confirmed the code for the training step
- Performance is not yet better than without supervision
- Hypothesis formation and analysis
- Jinfu and Xueying summarized previous work
|
|
|
Wan Lin
|
|
|
|
Tianhao Wang
|
- SE-Adapter assumption verification experiments [6]
- Assumption: entire fine-tuning = CNN refinement + SE adaptation
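For reference, the SE (squeeze-and-excitation) adaptation named in the assumption has this general shape: squeeze features by pooling over time, compute per-channel gates with a small bottleneck, and rescale the input. This is a minimal pure-Python illustration with hypothetical names, not the project's SE-Adapter code.

```python
import math

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a (channels x time) feature map:
    squeeze by global average pooling, excite with two small linear
    layers and a sigmoid gate, then rescale each channel."""
    # squeeze: per-channel mean over time
    z = [sum(row) / len(row) for row in x]
    # excitation: bottleneck linear layer + ReLU
    h = [max(0.0, sum(w * zc for w, zc in zip(wrow, z))) for wrow in w1]
    # expansion linear layer + sigmoid, one gate per channel
    s = [1.0 / (1.0 + math.exp(-sum(w * hc for w, hc in zip(wrow, h))))
         for wrow in w2]
    # rescale: multiply each channel's features by its gate
    return [[v * si for v in row] for row, si in zip(x, s)]
```

Because the gates lie in (0, 1), the block can only attenuate channels, which is what makes it a lightweight adaptation target compared with full fine-tuning.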
|
|
|
Zhenyu Zhou
|
- Extensive Speaker Perturbation [7]:
- VTLP results on cn1 & vox1
- VTLP+Speed results on cn1 & vox1
- Future experiment design
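VTLP (vocal tract length perturbation) applies a piecewise-linear warp to the frequency axis. A minimal sketch of one common formulation, assuming a Jaitly-and-Hinton-style warp; `vtlp_warp` and the `f_hi` default are illustrative, not this project's settings:

```python
def vtlp_warp(freq, sr, alpha, f_hi=4800.0):
    """Piecewise-linear VTLP frequency warping.

    Frequencies below a boundary are scaled by the warp factor alpha;
    the remaining band is mapped linearly so the Nyquist frequency is
    preserved and the warp stays monotonic.
    """
    nyquist = sr / 2.0
    boundary = f_hi * min(alpha, 1.0) / alpha
    if freq <= boundary:
        return freq * alpha
    # map (boundary, nyquist] linearly onto (boundary * alpha, nyquist]
    return nyquist - (nyquist - boundary * alpha) * (nyquist - freq) / (nyquist - boundary)
```

In practice the warp is applied to the mel filterbank center frequencies during feature extraction, with alpha drawn per utterance (or per speaker) from a small range such as [0.9, 1.1].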
|
|
|
Junhui Chen
|
- Weekly report
- Neural scoring code debugging
|
|
|
Jiaying Wang
|
- Experiments on cohort PIT [8]
- Result comparison with other cohort choices on the train-100 training set
|
|
|
Yu Zhang
|
- financial-pipeline
- Portfolio analysis code
- Write documentation
|
- Fix some bugs found during self-checking
- Walk through the entire process with Jun Wang
|
|
Wenqiang Du
|
- Project coordination and related file archiving
- Closing of the DiTing project
|
|
|
Yang Wei
|
- Review some FreeNeb release directories for reference
- Concurrency performance problem for Huilan ASR
|
|
|
Lily
|
- Interspeech2024[9]
- Journal paper outline[10]
|
|
|