Difference between revisions of "2026-01-19"
From cslt Wiki
Yuanjunming (talk | contribs)
Line 102:
** Continued pre-training for 600K steps. There is still no improvement observed on clean-speech tasks.
*** Aug-MT-HuBERT has more substitution errors in clean ASR adaptation.
* Incorporate a "learn-not-to-listen" mechanism into MT-HuBERT and retrain the backbone. (in progress)
** At 425K steps: PR (PER) 8.18%, ASR (WER) 9.32%, SD (DER) 5.05%
* Middle school AI textbook checking (pictures, tables)
Revision as of 10:30, 19 January 2026 (Mon)
| People | This Week | Next Week | Task Tracking (Deadline) |
|---|---|---|---|
| Dong Wang | | | |
| Lantian Li | | | |
| Wenqiang Du | | | |
| Yang Wei | | | |
| Ying Shi | | | |
| Yue Gu | | | |
| Lily | | | |
| Pengqi Li | | | |
| Junming Yuan | | | |
| Yu Zhang | | | |
| Junhui Chen | | | |
| Jiaying Wang | | | |
| Bochao Hu | | | |
| Hongcheng Zhang | | | |
| Weiman Sun | | | |