2025-12-08
Revision as of 10:25, 8 December 2025 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
Lantian Li
Ying Shi
Zhenghai You
  • Some support work for the Huawei SS project
  • Iterative Adaptation TSE:
    • Completed the training framework code
    • Training the 16 kHz base model and the IRA model as baselines, to be adapted to HuBERT or WavLM (feature-extraction sketch below)
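A minimal sketch of the SSL feature-extraction step such an adaptation targets, assuming the HuggingFace transformers API with a public WavLM checkpoint as a stand-in (the 16 kHz base and IRA checkpoints above are internal, so all names here are illustrative):

    import torch
    from transformers import AutoFeatureExtractor, WavLMModel

    # Stand-in checkpoint: the report's own 16 kHz base/IRA models are internal.
    CKPT = "microsoft/wavlm-base-plus"
    extractor = AutoFeatureExtractor.from_pretrained(CKPT)
    wavlm = WavLMModel.from_pretrained(CKPT).eval()

    # One second of dummy 16 kHz audio standing in for a TSE training utterance.
    wav = torch.randn(16000)
    inputs = extractor(wav.numpy(), sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        feats = wavlm(**inputs).last_hidden_state  # (batch=1, frames, hidden=768)

Swapping WavLMModel for transformers' HubertModel (e.g. facebook/hubert-base-ls960) covers the HuBERT variant symmetrically.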
Junming Yuan
Xiaolou Li
Zehua Liu
  • Interview
  • Writing code for VLLM Iterative Decode
Pengqi Li
Wan Lin
Tianhao Wang
  • ChainSep paper (Chinese version is almost done)
Xiaoxue Luo
  • Attractor-based speech separation: 2-mix training (SI-SDR metric sketch below)
    • 40% overlap rate: SI-SDR = 8.024 dB
    • random overlap rate: SI-SDR = 9.519 dB
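For context on the numbers above, a minimal NumPy sketch of the SI-SDR metric (Le Roux et al., 2019), which is reported in dB:

    import numpy as np

    def si_sdr(est: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
        """Scale-invariant SDR (dB) between an estimated and a reference signal."""
        est = est - est.mean()
        ref = ref - ref.mean()
        # Project the estimate onto the reference to get the scaled target.
        s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
        e_noise = est - s_target
        return float(10 * np.log10((s_target @ s_target + eps) / (e_noise @ e_noise + eps)))

Averaging si_sdr(estimate, source) over the test mixtures yields scores on the same dB scale as above; higher is better.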
Junhui Chen
Jiaying Wang
Yu Zhang
  • GPU Util [https://z1et6d3xtb.feishu.cn/wiki/XX4NwX3tJiBDcgkMi0hcFUtInHh?from=from_copylink]
  • LLM
    • Reflection methods that provide more detailed feedback (for example, offering a full solution rather than just keywords describing the mistake) are more likely to receive higher ECS scores.
    • When LLMs use Reflection feedback, ECS and PKS scores exhibit a strong negative correlation (correlation sketch below).
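A minimal sketch of how the reported ECS/PKS relationship could be quantified, assuming per-example score arrays (the array names are hypothetical; the ECS and PKS scoring procedures themselves are project-specific):

    import numpy as np

    def pearson_r(ecs_scores: np.ndarray, pks_scores: np.ndarray) -> float:
        """Pearson correlation between per-example ECS and PKS scores."""
        return float(np.corrcoef(ecs_scores, pks_scores)[0, 1])

    # A value close to -1 over the evaluation set would support the strong
    # negative correlation noted above.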
Wenqiang Du
Yang Wei
Yue Gu
  • Go back to Harbin
  • Write the PhD thesis
Qi Qu
Lily
  • AIGE Annual Forum
  • Journal paper submission