2024-03-18

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • Interspeech 2024 paper refinement
  • Design/discussion of AI popular-science material
  • Conjecture for minimum-loss training
Lantian Li
Ying Shi
  • Finish INTERSPEECH paper
  • Investigate random-order SOT for the multi-talker ASR task (a sketch of the three ordering strategies follows this list)
  • 3-mix, 0s-offset test condition:
    • DOM-SOT 20.51
    • PIT-SOT 23.26
    • random-order SOT 26.20
  • group work: https://z1et6d3xtb.feishu.cn/docx/Dx2pdEMOroS9w4xX6C2cUPn0nOd?from=from_copylink
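A minimal Python sketch of the three reference-ordering strategies compared above. This is illustrative only: the <sc> speaker-change token, the toy loss, and the exact DOM ordering rule (here, first-come order by start time) are assumptions for the sketch, not the implementation behind the numbers.

<pre>
import itertools
import random

SC = "<sc>"  # speaker-change token used by SOT to join per-speaker transcripts

def serialize(transcripts, order):
    """Join per-speaker transcripts in the given order with <sc> tokens."""
    return f" {SC} ".join(transcripts[i] for i in order)

def dom_sot_reference(transcripts, start_times):
    # DOM-SOT (assumed here as a fixed, dominance/first-come order):
    # one reference whose speaker order is determined by the mixture.
    order = sorted(range(len(transcripts)), key=lambda i: start_times[i])
    return serialize(transcripts, order)

def pit_sot_loss(transcripts, loss_fn):
    # PIT-SOT: score every permutation of the speaker order and
    # train on the minimum (permutation-invariant training).
    perms = itertools.permutations(range(len(transcripts)))
    return min(loss_fn(serialize(transcripts, p)) for p in perms)

def random_order_reference(transcripts, rng=random):
    # random-order SOT: pick an arbitrary speaker order per example.
    order = list(range(len(transcripts)))
    rng.shuffle(order)
    return serialize(transcripts, order)

if __name__ == "__main__":
    refs = ["hello there", "good morning", "see you later"]  # 3-mix example
    starts = [0.0, 0.0, 0.0]  # 0s offset: all speakers start together
    toy_loss = lambda ref: len(ref)  # stand-in for the real ASR loss
    print(dom_sot_reference(refs, starts))
    print(pit_sot_loss(refs, toy_loss))
    print(random_order_reference(refs))
</pre>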
Zhenghai You
Junming Yuan
  • Make the plan for the large-vocabulary pretraining task.
    • Focus on the experimental details of the few-shot paper from Google.
    • Try to address three questions (a comparability sketch follows this list):
      • How to change the MT pretraining model structure?
      • How to train three strictly comparable pretraining models based on MT, HuBERT, and wav2vec?
      • Why does HuBERT+MT perform significantly better?
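One hedged reading of "strictly comparable": instantiate all three models from an identical Transformer backbone so that only the pretraining head/objective differs, then verify the backbones match in parameter count. Everything below (dimensions, head sizes, and treating the MT/HuBERT/wav2vec objectives as simple linear heads) is hypothetical.

<pre>
import torch.nn as nn

def build_encoder(num_layers=12, dim=768, heads=12, ffn=3072):
    """Shared Transformer backbone, so the three pretraining models
    differ only in their objective head, not in capacity."""
    layer = nn.TransformerEncoderLayer(
        d_model=dim, nhead=heads, dim_feedforward=ffn, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)

def n_params(model):
    return sum(p.numel() for p in model.parameters())

# Hypothetical heads standing in for the three objectives.
models = {
    "MT": nn.Sequential(build_encoder(), nn.Linear(768, 30)),       # e.g. phone-like targets
    "HuBERT": nn.Sequential(build_encoder(), nn.Linear(768, 500)),  # k-means cluster IDs
    "wav2vec": nn.Sequential(build_encoder(), nn.Linear(768, 256)), # contrastive projection
}

print({name: n_params(m) for name, m in models.items()})
# Comparability check: all backbones have identical parameter counts;
# any remaining gap is attributable to the objective heads alone.
assert len({n_params(m[0]) for m in models.values()}) == 1
</pre>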
Chen Chen
Xiaolou Li
Zehua Liu
Pengqi Li
Wan Lin
Tianhao Wang
Zhenyu Zhou
  • Interspeech 2024 submission
  • Code reorganization
  • Neural Scoring review
Junhui Chen
Jiaying Wang
Yu Zhang
Wenqiang Du
  • Aibabel
    • Control the false-alarm (FA) rate of the Uyghur KWS model; no good performance yet (a threshold-calibration sketch follows this list).
    • Continue testing and updating the CN KWS model.
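A minimal sketch of one standard way to control KWS false alarms: calibrate the detection threshold on keyword-free audio so that FAs per hour stay under a budget, then read off recall at that threshold. The scores below are synthetic and the function names are hypothetical; this is not the Aibabel pipeline.

<pre>
import numpy as np

def threshold_for_fa_rate(neg_scores, neg_hours, max_fa_per_hour=1.0):
    """Pick the lowest detection threshold whose false-alarm count on
    keyword-free audio stays within the budget (FAs per hour)."""
    scores = np.sort(np.asarray(neg_scores))[::-1]  # high to low
    budget = int(max_fa_per_hour * neg_hours)       # allowed false alarms
    if budget >= len(scores):
        return scores[-1]  # even the lowest threshold stays in budget
    # Smallest float strictly above the (budget+1)-th highest negative
    # score, so at most `budget` negatives fire.
    return np.nextafter(scores[budget], np.inf)

def recall_at_threshold(pos_scores, thr):
    return float((np.asarray(pos_scores) >= thr).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neg = rng.normal(0.2, 0.1, 36000)  # scores on 10 h of negative audio
    pos = rng.normal(0.7, 0.15, 500)   # scores on true keyword hits
    thr = threshold_for_fa_rate(neg, neg_hours=10, max_fa_per_hour=0.5)
    print(f"threshold={thr:.3f} recall={recall_at_threshold(pos, thr):.3f}")
</pre>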
Yang Wei
Lily
Turi
  • Data collection app [1]
  • Coursework