People |
This Week |
Next Week |
Task Tracking (Deadline)
|
Dong Wang
|
- INTERSPEECH 2024 paper refinement
- Design/discussion of AI popular-science content
- Conjecture for minimum-loss training
|
|
|
Lantian Li
|
|
|
|
Ying Shi
|
- Finish INTERSPEECH paper
- Investigate random-order SOT for the multi-talker ASR task
  - 3-mix, 0s-offset test condition:
    - DOM-SOT 20.51
    - PIT-SOT 23.26
    - random-order SOT 26.20
- Group work
|
|
|
Zhenghai You
|
|
|
|
Junming Yuan
|
- Make the plan for the large-vocabulary pretraining task.
- Focus on the experimental details of the few-shot paper from Google.
- Try to address three questions:
  - How to change the MT pretraining model structure?
  - How to train three strictly comparable pretraining models based on MT, HuBERT, and wav2vec?
  - Why does HuBERT+MT perform significantly better?
|
|
|
Chen Chen
|
|
|
|
Xiaolou Li
|
|
|
|
Zehua Liu
|
|
|
|
Pengqi Li
|
|
|
|
Wan Lin
|
|
|
|
Tianhao Wang
|
|
|
|
Zhenyu Zhou
|
- INTERSPEECH 2024 submission
- Code reorganization
- Neural scoring review
|
|
|
Junhui Chen
|
|
|
|
Jiaying Wang
|
|
|
|
Yu Zhang
|
|
|
|
Wenqiang Du
|
- Aibabel
  - Control the false-alarm (FA) rate of the Uyghur KWS model, but no good performance yet.
  - Continue testing and updating the CN KWS model
|
|
|
Yang Wei
|
|
|
|
Lily
|
|
|
|
Turi
|
- Data collection app[1]
- Coursework
|
|
|