People |
This Week |
Next Week |
Task Tracking (Deadline)
|
Dong Wang
|
- Uyghur database paper, draft done.
- ICME review, almost done.
- MicroMagnetic paper, awaiting final check.
|
|
|
Lantian Li
|
- GPU status [1]
- ASIP-BUPT (CohortTSE, SE-Adapter, SpeakerAug, NeuralScoring)
- Huawei project (Phase 1)
|
|
|
Ying Shi
|
|
|
|
Zhenghai You
|
|
|
|
Junming Yuan
|
- MT-pretraining double-check experiments + extended experiments [2]
- Identified the influence of the BN layer in the 10-shot/5-shot experiments.
- Extended a new pretrained model (trained on clean data with BCE loss).
- Report performance differences when freezing different layers in fine-tuning (after the group meeting).
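The BN-layer effect noted above comes down to which statistics BatchNorm uses when the fine-tuning batch is tiny. A minimal numpy sketch (all values illustrative, not from the actual experiments) contrasting train-mode normalization (batch statistics) with eval-mode normalization (frozen running statistics) on a 10-shot batch from a shifted domain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical running statistics accumulated during pretraining.
running_mean, running_var = 0.0, 1.0
EPS = 1e-5

def bn_train_mode(x):
    # Train mode: normalize with the current mini-batch's own statistics.
    return (x - x.mean()) / np.sqrt(x.var() + EPS)

def bn_eval_mode(x):
    # Eval mode: normalize with the frozen running statistics instead.
    return (x - running_mean) / np.sqrt(running_var + EPS)

# A "10-shot" batch drawn from a target domain shifted by +0.5.
batch = rng.normal(loc=0.5, scale=1.0, size=10)

out_train = bn_train_mode(batch)
out_eval = bn_eval_mode(batch)

# Train mode recenters even a 10-sample batch to mean zero, erasing the
# domain shift (and doing so from a very noisy 10-sample estimate);
# eval mode preserves the shift in the output mean.
```

With only 5-10 samples, the batch statistics are both noisy and domain-specific, which is one plausible mechanism for the few-shot differences reported.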
|
|
|
Chen Chen
|
- reproduce robustness experiments [3]
|
|
|
Xiaolou Li
|
- robustness experiments of AVSR system
|
|
|
Zehua Liu
|
|
|
|
Pengqi Li
|
- [4] Attention-supervised learning with Liuhuan
- Confirmed the code for the training step
- But performance is not better than without supervision
- Assumptions and analysis
- Jinfu and Xueying summarized previous work
|
|
|
Wan Lin
|
|
|
|
Tianhao Wang
|
- SE-Adapter assumption verification experiments [6]
- Assumption: entire fine-tuning = CNN refinement + SE adaptation
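For reference, the channel-gating mechanism an SE adapter is built on can be sketched in a few lines of numpy. This is a generic Squeeze-and-Excitation block over a (channels, time) feature map; the shapes, weights, and reduction ratio here are hypothetical, not the project's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a (channels, time) feature map."""
    s = x.mean(axis=1)                          # squeeze: per-channel global average pool
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))   # excitation: ReLU bottleneck + sigmoid gates
    return x * e[:, None]                       # rescale each channel by its gate

C, T, r = 8, 20, 2                              # illustrative sizes, reduction ratio r
x = rng.normal(size=(C, T))
w1 = rng.normal(size=(C // r, C)) * 0.1         # hypothetical bottleneck weights
w2 = rng.normal(size=(C, C // r)) * 0.1

y = se_block(x, w1, w2)
# Gates lie in (0, 1), so the block reweights channels without changing shape.
```

Under the stated assumption, fine-tuning only such gates (plus CNN refinement) would stand in for updating the full backbone.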
|
|
|
Zhenyu Zhou
|
|
|
|
Junhui Chen
|
- Neural scoring code debug
|
|
|
Jiaying Wang
|
- Experiments on cohort PIT [7]
- Result comparison with other cohort choices on the train-100 training set
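Assuming "PIT" here is the usual permutation-invariant training criterion, the core idea is to score estimated sources against references under the best permutation, so an output order swap is not penalized. A minimal sketch with a brute-force search over permutations (fine for small source counts; all data below is toy):

```python
import numpy as np
from itertools import permutations

def pit_loss(est, ref):
    """Permutation-invariant MSE over source assignments.

    est, ref: (num_sources, time) arrays. Returns the minimum mean
    squared error over all pairings of estimated and reference sources.
    """
    n = est.shape[0]
    best = np.inf
    for perm in permutations(range(n)):
        loss = np.mean((est[list(perm)] - ref) ** 2)
        best = min(best, loss)
    return best

ref = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])
# Estimates come out in swapped order; PIT still matches them perfectly,
# while a fixed-order MSE would penalize the swap.
est = ref[::-1].copy()
```

The cohort choice then determines which competing sources/speakers appear in `ref` during training, which is what the comparison above varies.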
|
|
|
Yu Zhang
|
- financial-pipeline
- portfolio analysis code
- write doc
|
- Fix some bugs found during self-checking
- Walk through the entire process with Jun Wang
|
|
Wenqiang Du
|
- Project coordination and related file archiving
- Closing of the DiTing project
|
|
|
Yang Wei
|
- Review some FreeNeb release directories for reference
- Concurrency performance problem for Huilan ASR
|
|
|
Lily
|
- Interspeech 2024 [8]
- Journal paper draft preparation [9]
|
|
|