| People | This Week | Next Week | Task Tracking (Deadline)
|
| Dong Wang
|
- Uyghur database paper, draft done.
- ICME review, almost done.
- MicroMagnetic paper, awaiting final check.
|
|
|
| Lantian Li
|
- GPU status [1]
- ASIP-BUPT (CohortTSE, SE-Adapter, SpeakerAug, NeuralScoring)
- Huawei project (Phase 1)
|
|
|
| Ying Shi
|
- Prepare for the INTERSPEECH paper [https://z1et6d3xtb.feishu.cn/wiki/TVspwsNXIiCMfUkep3VcFeecnkc?from=from_copylink]
- Utilizing a Shallow CTC Loss to Permute the Outputs in Multi-Talker ASR (a permutation sketch follows this row)
- CR-SOT / CT-SOT-pretrain-fix / CR-SOT-pretrain-joint / FIFO-SOT / PIT-SOT
- Testing [in progress]
|
|
|
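A minimal sketch of how a shallow CTC loss can select the output-to-reference permutation in multi-talker ASR, in the spirit of the PIT-SOT variant above. The two-branch setup and tensor names are illustrative assumptions, not the actual CR-SOT code; torch.nn.functional.ctc_loss is the only real API used.

    import itertools
    import torch.nn.functional as F

    def pit_ctc_loss(branch_log_probs, spk_targets, input_lengths, target_lengths):
        # branch_log_probs: one (T, N, C) log-softmax tensor per output branch
        # spk_targets: one (N, S) label tensor per reference speaker
        # Score every speaker-to-branch assignment and keep the cheapest one.
        best = None
        for perm in itertools.permutations(range(len(branch_log_probs))):
            loss = sum(
                F.ctc_loss(branch_log_probs[b], spk_targets[s],
                           input_lengths, target_lengths[s])
                for b, s in enumerate(perm)
            )
            if best is None or loss < best:
                best = loss
        return best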
| Zhenghai You
|
- Some experiments to validate the cohort approach
|
|
|
| Junming Yuan
|
- MT-pretraining double-check experiments + extended experiments [2]
- Identified the influence of the BN layer in the 10-shot/5-shot experiments.
- Extended with a new pretrained model (trained on clean data with BCE loss).
- Report performance differences when fixing different layers in finetuning (after the group meeting); a freezing sketch follows this row.
|
|
|
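A hedged sketch of the layer-fixing setup mentioned above: freeze the parameters of selected sub-modules and pin BatchNorm running statistics during finetuning. The layer-name prefixes are hypothetical placeholders for whatever layers the comparison fixes.

    import torch.nn as nn

    def freeze_by_prefix(model: nn.Module, prefixes=("frontend.", "encoder.0.")):
        # Stop gradient updates for the matching parameters.
        for name, param in model.named_parameters():
            if name.startswith(prefixes):
                param.requires_grad = False
        # Pin BN running statistics too; re-apply after every model.train() call.
        for name, module in model.named_modules():
            if name.startswith(prefixes) and isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
                module.eval()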
| Chen Chen
|
- Reproduce robustness experiments [3]
|
|
|
| Xiaolou Li
|
- Robustness experiments on the AVSR system
- White-noise and pink-noise experiments (a mixing sketch follows this row)
- Reproduce RealForensics
|
|
|
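A small sketch of the noise-robustness setup implied above: mix white or pink noise into speech at a target SNR. The pink-noise construction (1/f shaping of white noise) and the function names are illustrative assumptions, not the actual experiment code.

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        # Scale the noise so that 10*log10(P_speech / P_noise) == snr_db.
        noise = noise[:len(speech)]
        p_s = np.mean(speech ** 2)
        p_n = np.mean(noise ** 2) + 1e-12
        scale = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10.0)))
        return speech + scale * noise

    def pink_noise(n, sr=16000):
        # Shape white noise by 1/sqrt(f) in magnitude (i.e., 1/f in power).
        spec = np.fft.rfft(np.random.randn(n))
        f = np.fft.rfftfreq(n, d=1.0 / sr)
        spec[1:] /= np.sqrt(f[1:])
        return np.fft.irfft(spec, n)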
| Zehua Liu
|
- IS24 paper writing
|
- Reproduce LipF
|
|
| Pengqi Li
|
- Attention-supervised learning with Liuhuan [4] (a supervision-loss sketch follows this row)
- Confirmed the code for the training step
- But performance is not better than without supervision
- Assumptions and analysis
- Jinfu and Xueying summarized previous work
|
|
|
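To make the analysis concrete, here is one common form of attention supervision: a KL penalty between the model's attention map and a reference map. The shapes and the KL form are assumptions, not necessarily the loss used in the experiments above.

    import torch.nn.functional as F

    def attention_supervision_loss(attn, ref_attn, eps=1e-8):
        # attn, ref_attn: (batch, heads, T_query, T_key); each row is a
        # distribution over keys. KL(ref || attn) pulls the learned map
        # toward the reference; typically added to the main loss with a weight.
        return F.kl_div((attn + eps).log(), ref_attn, reduction="batchmean")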
| Wan Lin
|
|
|
|
| Tianhao Wang
|
- SE-Adapter assumption verification experiments [6]
- Assumption: entire fine-tuning = CNN refinement + SE adaptation (a minimal SE block follows this row)
|
|
|
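To pin down the "SE adaptation" half of the assumption, a minimal squeeze-and-excitation block is sketched below; the channel layout and reduction ratio are illustrative, not the actual SE-Adapter configuration.

    import torch.nn as nn

    class SEBlock(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # x: (batch, channels, time); squeeze over time, then
            # rescale each channel by its learned excitation weight.
            w = self.fc(x.mean(dim=-1))
            return x * w.unsqueeze(-1)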
| Zhenyu Zhou
|
- Extensive speaker perturbation [7] (a VTLP warping sketch follows this row):
- VTLP results on cn1 & vox1
- VTLP+Speed results on cn1 & vox1
- Future experiment design
|
|
|
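A sketch of the piecewise-linear frequency warp behind VTLP (the common Jaitly & Hinton formulation); the sample rate, boundary frequency, and warp-factor range are illustrative defaults, not the exact recipe used in these experiments.

    import numpy as np

    def vtlp_warp(freqs, alpha, f_hi=4800.0, sr=16000):
        # freqs: frequencies in Hz; alpha: warp factor, e.g. drawn from [0.9, 1.1].
        # Below the boundary f0 the warp is linear (f * alpha); above it, a
        # second linear segment maps the boundary point up to Nyquist.
        nyq = sr / 2.0
        f0 = f_hi * min(alpha, 1.0) / alpha
        return np.where(
            freqs <= f0,
            freqs * alpha,
            nyq - (nyq - f_hi * min(alpha, 1.0)) / (nyq - f0) * (nyq - freqs),
        )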
| Junhui Chen
|
- Weekly report
- Neural scoring code debugging
|
|
|
| Jiaying Wang
|
- Experiments on cohort PIT [8]
- Result comparison with other cohort choices on the train-100 training set
|
|
|
| Yu Zhang
|
- financial-pipeline
- Portfolio analysis code
- Write documentation
|
- Fix some bugs found during self-checking
- Review the entire process with Jun Wang
|
|
| Wenqiang Du
|
- Project coordination and related file archiving
- Closing of the DiTing project
|
|
|
| Yang Wei
|
- Review some FreeNeb release directories for reference
- Concurrency performance problem for Huilan ASR
|
|
|
| Lily
|
- Interspeech 2024 [9]
- Journal paper outline [10]
|
- Learn the principles of data-sharing on CHILDED
- Check Coco's annotation
- Transfer the journal paper from rubbish to a little something
|
|