Difference between revisions of "2026-04-13"
From cslt Wiki
| (16 intermediate revisions by 13 users not shown) | |||
| Line 17: | Line 17: | ||
|Lantian Li | |Lantian Li | ||
|| | || | ||
| − | * | + | * NDRC daily work |
| + | * MLA book (3/4) | ||
|| | || | ||
* | * | ||
| Line 28: | Line 29: | ||
|Wenqiang Du | |Wenqiang Du | ||
|| | || | ||
| − | * | + | * Baseline testing of multimodal models (ongoing) |
|| | || | ||
* | * | ||
| Line 39: | Line 40: | ||
|Yang Wei | |Yang Wei | ||
|| | || | ||
| − | * | + | * Train an audio separation model for 3 classes (speech, song, bird); dealing with a low-volume output problem. |
| + | * Test streaming AVSR demo (CER: mix: 72%, offline_separation: 20%, streaming_separation: 42%). | ||
|| | || | ||
* | * | ||
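The CER figures above are presumably character error rates, i.e. character-level edit distance divided by reference length. A minimal sketch of that computation (the function name and example strings are illustrative, not from the report):

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    # prev[j] holds the edit distance between the current ref prefix and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, rc in enumerate(ref, 1):
        curr = [i]
        for j, hc in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (rc != hc)))   # substitution (0 on match)
        prev = curr
    return prev[-1] / len(ref)

print(cer("abcde", "abXde"))  # one substitution over 5 chars -> 0.2
```

Reported percentages would then just be this ratio times 100, averaged over the test set.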
| Line 50: | Line 52: | ||
|Ying Shi | |Ying Shi | ||
|| | || | ||
| − | * | + | * Revise my thesis |
|| | || | ||
* | * | ||
| Line 72: | Line 74: | ||
|Lily | |Lily | ||
|| | || | ||
| − | * | + | * AI handbook check (HK version) |
| + | * AIGE-related tasks | ||
|| | || | ||
* | * | ||
| Line 94: | Line 97: | ||
|Junming Yuan | |Junming Yuan | ||
|| | || | ||
| − | * | + | * Preparing the materials for attending ICASSP |
| + | * ZH paper draft (needs refinement) | ||
|| | || | ||
* | * | ||
| Line 105: | Line 109: | ||
|Yu Zhang | |Yu Zhang | ||
|| | || | ||
| − | * | + | * GPU Util: [https://z1et6d3xtb.feishu.cn/wiki/XX4NwX3tJiBDcgkMi0hcFUtInHh] |
| + | * Chain level experiments: | ||
| + | ** After introducing the Metric Reward, the weights of correct edges converge faster compared to training with pure reinforcement learning alone. | ||
| + | ** The worse the starting situation when the Metric Reward is introduced (i.e., the lower the weights of critical edges), the more significant the improvement compared to not using the Metric Reward. | ||
|| | || | ||
* | * | ||
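The report gives no implementation details for the chain-level experiments. As a toy illustration of the observation above, here is a sketch under the assumption that edge inclusion is a per-edge Bernoulli policy trained with REINFORCE, and the "Metric Reward" is a dense shaping term added to a sparse RL reward; the whole setup, names, and reward definitions are hypothetical:

```python
import math
import random

def sigmoid(x: float) -> float:
    # numerically stable logistic
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def train(correct_edges, all_edges, use_metric_reward, steps=2000, lr=0.5, seed=0):
    """Toy REINFORCE over per-edge Bernoulli inclusion probabilities.

    Sparse RL reward: 1 only when the sampled edge set is exactly correct.
    Assumed 'Metric Reward': a dense shaping term, fraction of correct
    edges included minus fraction of wrong edges included.
    """
    rng = random.Random(seed)
    w = {e: 0.0 for e in all_edges}  # edge-weight logits
    for _ in range(steps):
        pick = {e: rng.random() < sigmoid(w[e]) for e in all_edges}
        chosen = {e for e, a in pick.items() if a}
        r = 1.0 if chosen == correct_edges else 0.0
        if use_metric_reward:
            n_wrong = len(all_edges) - len(correct_edges)
            r += (len(chosen & correct_edges) / len(correct_edges)
                  - len(chosen - correct_edges) / max(1, n_wrong))
        for e in all_edges:
            # REINFORCE gradient of log p(action | w) is (action - sigmoid(w))
            w[e] += lr * r * ((1.0 if pick[e] else 0.0) - sigmoid(w[e]))
    return w
```

With the dense shaping term every sampled set yields a learning signal, so correct-edge weights separate from the rest much sooner than under the sparse exact-match reward alone, e.g. `train({"a", "b"}, {"a", "b", "c", "d", "e"}, use_metric_reward=True)` drives the weights of "a" and "b" well above the others.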
| Line 116: | Line 123: | ||
|Junhui Chen | |Junhui Chen | ||
|| | || | ||
| − | * | + | * To strengthen the robustness of the conclusions, conduct additional experiments: |
| + | ** Introduce a new baseline (AgentPrune). | ||
| + | ** Add experiments on a new dataset (GSM8K). | ||
| + | ** Reproduce the results on other LLM base models. | ||
| + | * Paper writing | ||
|| | || | ||
* | * | ||
| Line 125: | Line 136: | ||
|- | |- | ||
| − | | | + | |Xiaoxue Luo |
|| | || | ||
| − | * | + | * attractor visualization analysis [https://z1et6d3xtb.feishu.cn/docx/BAoRdM2jQo19krxwH0pcowCsnih] |
| + | * The accuracy of attractor counting is lower than expected, possibly because the mixed scenes are complex (2-5 mix); retraining the 2-3 mix model | ||
|| | || | ||
* | * | ||
| Line 138: | Line 150: | ||
|Bochao Hu | |Bochao Hu | ||
|| | || | ||
| − | * | + | * Meet all the requirements and hand over the VTS pipeline to Sun Chang; waiting for his test |
|| | || | ||
* | * | ||
| Line 149: | Line 161: | ||
|Hongcheng Zhang | |Hongcheng Zhang | ||
|| | || | ||
| − | * | + | * Test MLLM for aibabel's project |
|| | || | ||
* | * | ||
| Line 160: | Line 172: | ||
|Weiman Sun | |Weiman Sun | ||
|| | || | ||
| − | * | + | * Supplement our AudioSet dataset for specific classes |
| + | * Test large multimodal models | ||
|| | || | ||
* | * | ||
| Line 169: | Line 182: | ||
|| | || | ||
*Reproduce SpatialNet for speech separation | *Reproduce SpatialNet for speech separation | ||
| + | *Write my graduation thesis | ||
| + | || | ||
| + | * | ||
| + | || | ||
| + | * | ||
| + | |- | ||
| + | |- | ||
| + | |Shuailong Li | ||
| + | || | ||
| + | *Read some papers | ||
| + | **USE (Sepformer, BSRNN, and TDN) | ||
|| | || | ||
* | * | ||
Latest revision as of 11:11, 13 April 2026 (Monday)
| People | This Week | Next Week | Task Tracking (Deadline) |
|---|---|---|---|
| Dong Wang | | | |
| Lantian Li | | | |
| Wenqiang Du | | | |
| Yang Wei | | | |
| Ying Shi | | | |
| Yue Gu | | | |
| Lily | | | |
| Pengqi Li | | | |
| Junming Yuan | | | |
| Yu Zhang | | | |
| Junhui Chen | | | |
| Xiaoxue Luo | | | |
| Bochao Hu | | | |
| Hongcheng Zhang | | | |
| Weiman Sun | | | |
| Ge Gao | | | |
| Shuailong Li | | | |