2024-11-11

From cslt Wiki
Revision as of 10:59, 11 November 2024 (Mon) by Quqi (talk | contribs)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • Tianjian AI book (done)
Lantian Li
  • Completed all scripts for the 2025 AI calendar
  • AI-Graph EN (32/50)
Ying Shi
Zhenghai You
  • Huawei project with IRA-TSE[1]
Junming Yuan
  • Re-checked some details of the Cocktail HuBERT paper and prepared the code.
    • Pseudo-label preparation finished.
  • Paper reading
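The pseudo-label preparation above follows the HuBERT recipe of clustering frame-level features and using the cluster IDs as training targets. A minimal sketch with a toy k-means on synthetic features (real pipelines typically use fairseq's feature-dumping and clustering scripts; the deterministic initialization here is a simplification):

```python
import numpy as np

def kmeans_labels(X, k, iters=20):
    """Toy k-means: cluster (T, D) frame features and return one
    cluster id per frame, used as the pseudo-label target."""
    # deterministic init from evenly spaced frames (a simplification)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each frame to its nearest centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # recompute centroids from the current assignments
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy "frame features": 50 low-energy frames, then 50 high-energy frames
feats = np.vstack([np.zeros((50, 13)), np.ones((50, 13))])
pseudo_labels = kmeans_labels(feats, k=2)  # one pseudo-label per frame
```

In the actual recipe the labels come from MFCC or intermediate HuBERT features and a much larger k; the masked-prediction model is then trained to predict these IDs.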
Xiaolou Li
  • Finished the VTS documents with Zehua
  • Processed the CVS3 data
  • Took over the AV-HuBERT training code and debugged it
Zehua Liu
  • Finished 2 VTS documents with Xiaolou
    • Financial Document
    • Technical Document
  • Paper reading last Friday
Pengqi Li
Wan Lin
Tianhao Wang
  • Ablation study of a new approach to sound separation [2]
Xiaoxue Luo
  • Paper reading to investigate new approaches to sound separation
  • Retrained AudioSep with a DPRNN block (AudioSep-DP)
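The DPRNN block mentioned above rests on a dual-path reshaping: the long feature sequence is cut into 50%-overlapping chunks so that short intra-chunk and inter-chunk RNNs can alternate over the two axes. A minimal sketch of just that segmentation and its overlap-add inverse (the RNNs and the actual AudioSep integration are omitted; function names are illustrative):

```python
import numpy as np

def segment(x, K):
    """Cut a (T, N) sequence into 50%-overlapping chunks of length K,
    giving the (S, K, N) grid that DPRNN's intra-/inter-chunk RNNs
    alternate over. Assumes (T - K) is a multiple of the hop K // 2."""
    P = K // 2
    T, N = x.shape
    S = (T - K) // P + 1
    return np.stack([x[s * P: s * P + K] for s in range(S)])

def merge(chunks, T):
    """Overlap-add the chunks back to a (T, N) sequence, normalizing
    by how many chunks cover each frame, so the round trip is exact."""
    S, K, N = chunks.shape
    P = K // 2
    out = np.zeros((T, N))
    cnt = np.zeros((T, 1))
    for s in range(S):
        out[s * P: s * P + K] += chunks[s]
        cnt[s * P: s * P + K] += 1
    return out / cnt

x = np.arange(32, dtype=float).reshape(16, 2)  # T=16 frames, N=2 features
grid = segment(x, K=4)                          # shape (7, 4, 2)
recon = merge(grid, T=16)                       # equals x
```

With chunk length K ≈ sqrt(2T), both RNN paths see sequences of roughly sqrt(T) length, which is what makes DPRNN tractable on long inputs.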
Zhenyu Zhou
  • Attempted to add a silence loss during training (seems ineffective)
  • Conditional Chain 2-5 mix results (still some bugs; speaker-count accuracy is poor) [3]
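A silence loss of the kind tried above is usually just the mean energy of the estimated source on frames where no target speaker is active, added to the main separation objective with a small weight. A minimal numpy sketch (names and the weighting are illustrative, not the actual training code):

```python
import numpy as np

def silence_loss(est, silence_mask):
    """Mean energy of the estimate on frames marked silent; pushes the
    model to output nothing where no target speaker is active."""
    sil = est[silence_mask.astype(bool)]          # (n_silent, N)
    return float((sil ** 2).mean()) if sil.size else 0.0

# toy example: 2 frames, first marked silent
est = np.array([[1.0, 1.0], [0.5, 0.5]])
mask = np.array([1, 0])
loss = silence_loss(est, mask)  # energy of the first (silent) frame
```

In training this term would be added to the separation loss with a small weight; as noted above, it did not seem to help in practice.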
Junhui Chen
Jiaying Wang
Yu Zhang
  • SocioDojo
    • Single stock (TSLA) investment (still running)
  • Investigated text-guided, LLM-centric time-series forecasters and reproduced some of them (Time-LLM, LLM-Process, AutoTimes); ran toy experiments on how the prompt prefix influences forecast results
Wenqiang Du
  • Training of new language models (Cantonese)
  • Prepared the PPT for the competition
Yang Wei
  • Trained a text-enrollment KWS model with 7000 h of data
Lily
Turi
  • KWS data preparation and checking some implementations
  • Paper reading about KWS
Yue Gu
  • Used the CosyVoice model to synthesize target-speaker utterances as a supplement for target-speaker adaptation; the adaptation experiment is running.
  • ICASSP 2025 paper review
  • Paper writing
Qi Qu
  • KWS:
    • Yi (Liangshan, Sichuan) test dataset annotated and finalized. Optimal thresholds selected for predefined scenes. Cloud model service deployed.
    • Quantization for NPU with more calibration data (6k): mean_loss=1.3e-4, max_loss=6.2e-2.
    • NPU demo: feature extraction + model inference.
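The mean/max calibration losses reported above can be read as per-sample reconstruction error between a float tensor and its quantized round trip, summarized over the calibration set. A toy int8 sketch with symmetric per-tensor scaling (the NPU toolchain's actual metric and quantization scheme may differ):

```python
import numpy as np

def int8_roundtrip(x, scale):
    """Quantize to int8 and dequantize back (symmetric, per-tensor)."""
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

def calibration_losses(samples):
    """Per-sample MSE between the float input and its int8 round trip,
    summarized as (mean_loss, max_loss) over the calibration set."""
    losses = []
    for x in samples:
        scale = max(np.abs(x).max() / 127.0, 1e-12)  # avoid zero scale
        losses.append(float(((x - int8_roundtrip(x, scale)) ** 2).mean()))
    return float(np.mean(losses)), float(np.max(losses))

rng = np.random.default_rng(0)
calib = [rng.standard_normal(64) for _ in range(10)]  # stand-in features
mean_loss, max_loss = calibration_losses(calib)
```

Enlarging the calibration set (as done above with 6k samples) tightens the scale estimates, which mostly shows up in a lower max_loss on outlier samples.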