2024-06-17

From cslt Wiki
Revision as of 10:46, 17 June 2024 (Monday)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • Review of a few papers from NCMMSC, MDPI etc.
  • Review papers regarding AI for Medicine
  • Refine "the principle of AI education in primary schools"
  • A few public talks
Lantian Li
Ying Shi
Zhenghai You
  • Start training on complete data for Huawei TSE project
  • Change the adapt layer from concat to FiLM [1]
  • Test inference time and SI-SDR in the online model
  • Consider a TSE network that combines mixture and enrollment with attractor to extract speaker information
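The concat-to-FiLM change above can be sketched as below: instead of concatenating the speaker embedding onto the mixture features, FiLM projects it to a per-channel scale (gamma) and shift (beta) and modulates the features element-wise. This is a minimal numpy sketch under assumed toy shapes; the `film_adapt` name and projection weights are hypothetical, not the project's actual adapt layer.

```python
import numpy as np

def film_adapt(mix_feats, spk_emb, w_gamma, b_gamma, w_beta, b_beta):
    """FiLM adapt layer: condition mixture features on a speaker
    embedding via per-channel scale (gamma) and shift (beta)."""
    gamma = spk_emb @ w_gamma + b_gamma   # (C,) per-channel scale
    beta = spk_emb @ w_beta + b_beta      # (C,) per-channel shift
    return gamma * mix_feats + beta       # broadcast over time frames

# toy shapes: T time frames, C feature channels, D-dim speaker embedding
T, C, D = 5, 4, 3
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, C))
emb = rng.standard_normal(D)
out = film_adapt(feats, emb,
                 rng.standard_normal((D, C)), np.ones(C),
                 rng.standard_normal((D, C)), np.zeros(C))
```

With zero projection weights, gamma=1 and beta=0, the layer reduces to identity, which makes it easy to initialize close to a no-op.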
Junming Yuan
  • SSL model fine-tuning analysis v1 [2]; needs checking.
Chen Chen
Xiaolou Li
Zehua Liu
  • NCMMSC paper
  • Tune parameters
Pengqi Li
Wan Lin
Tianhao Wang
  • Neural Scoring exps[3]
    • share encoder
    • channel attention (similar to EA-ASP; not helpful)
    • early frequency attention (fbank level, training)
Zhenyu Zhou
Junhui Chen
  • Neural Scoring supplementary experiments
    • Shared-encoder NS: NS > shared-encoder NS > EA-ASP (shows the importance of decoupling)
    • Attention variants (fbank-level enroll-aware; seems useful)
Jiaying Wang
  • Debug cohort transformer structure (unclear why the transformer does not work)
    • deeper network: 2 attention heads, 8 layers/block, 4 blocks in total (failed)
    • use only the MF training set (failed)
    • use positional encoding and the transformer block from SpeechBrain (failed with both PIT and SI-SDR losses)
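The PIT and SI-SDR losses mentioned above can be sketched as follows: minimal numpy versions for mono waveforms (the actual experiments used SpeechBrain's implementations; this is only an illustrative sketch of the two objectives).

```python
import numpy as np
from itertools import permutations

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR in dB (zero-mean, scale-projection convention)."""
    est, ref = est - est.mean(), ref - ref.mean()
    s_target = (est @ ref) / (ref @ ref + eps) * ref  # project onto reference
    e_noise = est - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps) + eps)

def pit_si_sdr(ests, refs):
    """Utterance-level PIT: score every speaker permutation, keep the best."""
    n = len(refs)
    return max(
        np.mean([si_sdr(ests[p[i]], refs[i]) for i in range(n)])
        for p in permutations(range(n))
    )

# scale invariance: rescaling the estimate barely changes the score
t = np.linspace(0, 1, 8000)
ref = np.sin(2 * np.pi * 440 * t)
score = si_sdr(3.0 * ref, ref)
```

Because PIT takes a max over permutations, the loss is insensitive to the order in which the network emits the separated speakers.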
Yu Zhang
Wenqiang Du
  • Preparing for the final exam
Yang Wei
Lily
  • Thesis
  • Prepare slides for Xinjiang teacher's course
Turi
  • End of semester course project presentations
Yue Gu
Qi Qu