Difference between revisions of "2024-09-23"

From cslt Wiki

Revision as of 10:48, 23 September 2024 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • AIGraph higher-education version
  • Prepare AIGraph Large Model version
  • NMI paper publication work
Lantian Li
  • AI-Graph EN (1/4)
  • Huawei Project Proposal v1.0
  • First Lesson on 24-fall AI Undergraduates
Ying Shi
  • Huawei project proposal
  • Optimize the Text-enroll KWS code
    • improve readability.
    • remove redundant code.
Zhenghai You
  • Exploring the generality of speaker augmentation (spk aug) across different datasets and model structures [1]
Junming Yuan
  • Double-check the mixed HuBERT code:
    • fix some bugs (time-mask, etc.)
    • time mask vs. feature mask (Top-1 acc, EER): (27.98%, 23.17%) vs. (23.19%, 25.99%)
    • softmax+CE --> sigmoid+BCE still has problems.
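For context on the time-mask vs. feature-mask comparison above: the two differ only in which axis of a T x F feature matrix gets a contiguous span zeroed out. A minimal sketch (the single-span policy, span widths, and zero fill value are illustrative assumptions, not the actual training config):

```python
import random

def apply_mask(feats, max_width, axis, value=0.0):
    """Zero one contiguous span along `axis` of a T x F feature matrix.

    axis=0 -> time mask (whole frames); axis=1 -> feature mask (whole
    feature bins). `feats` is a list of T rows of F floats, modified in place.
    """
    n = len(feats) if axis == 0 else len(feats[0])
    width = random.randint(1, max_width)          # random span width
    start = random.randint(0, max(0, n - width))  # random span position
    for t, row in enumerate(feats):
        for f in range(len(row)):
            idx = t if axis == 0 else f
            if start <= idx < start + width:
                row[f] = value
    return feats
```

The reported numbers suggest the choice of masking axis noticeably shifts the Top-1 accuracy / EER trade-off.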
Xiaolou Li
  • Writing VTS documents
  • Paper Reading & Preparing for Report
  • Exp on LRS3
    • LLM: LLaMA2 -> LLaMA3.1 (30h ↓0.4%)
    • Grouping LLaMA2: (443h ↑0.5%, 30h ↓2.5%)
  • Rethinking the method to inject information (ablation study first)
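The "grouping" experiment above presumably stacks consecutive feature vectors so the LLM sees a shorter sequence; a minimal sketch of that idea (the group size and dropping of the remainder frames are illustrative assumptions, not the actual implementation):

```python
def group_frames(feats, k):
    """Concatenate every k consecutive frame vectors into one vector,
    shortening the sequence fed to the LLM by a factor of k.
    Trailing frames that do not fill a full group are dropped.
    """
    grouped = []
    for i in range(0, len(feats) - len(feats) % k, k):
        merged = []
        for frame in feats[i:i + k]:
            merged.extend(frame)
        grouped.append(merged)
    return grouped
```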
Zehua Liu
Pengqi Li
Wan Lin
  • VC2 pre-train; VB1+VC2 mix-tuning
    • Data filter in VB1: 1.25% EER in vox1-o
  • VB1 pre-train; VC2 fine-tuning
    • VB1 pre-train: 2.61% EER in vox1-o
    • VC2 fine-tuning: may not reach better performance
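The EER figures above come from trial scores with target/non-target labels; a minimal brute-force sketch of the metric (real evaluations typically use a sorted sweep or ROC interpolation over large trial lists):

```python
def eer(scores, labels):
    """Equal error rate: the operating point where the false-accept rate
    equals the false-reject rate. `labels` are 1 (target) / 0 (non-target);
    a trial is accepted when its score >= threshold.
    """
    n_tar = sum(labels)
    n_non = len(labels) - n_tar
    best_gap, best_eer = float("inf"), 1.0
    for thr in sorted(set(scores)):
        fa = sum(s >= thr and l == 0 for s, l in zip(scores, labels)) / n_non
        fr = sum(s < thr and l == 1 for s, l in zip(scores, labels)) / n_tar
        if abs(fa - fr) < best_gap:
            best_gap, best_eer = abs(fa - fr), (fa + fr) / 2
    return best_eer
```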
Tianhao Wang
Zhenyu Zhou
Junhui Chen
Jiaying Wang
Yu Zhang
  • Dataset collection from THS
  • Retrained the R^2 SAC paper's model; with the same environment it still fails to reproduce (TCN ACC: 0.708, recall: 0.183); will check with Han this week
  • Paper reading and some plan (report this Fri)
Wenqiang Du
Yang Wei
  • Train the text-enroll KWS model with AISHELL and KeSpeech data.
  • Prepare live broadcast
Lily
  • Prepare for the holiday courses (October 2nd and 3rd) and the online course
  • AI radiance's daily work
Turi
  • Trained a Conformer on Sagalee data, excluding utterances containing digits
    • Achieved 21.28% WER, a 2.65-point WER reduction
  • Preparing KWS data from Sagalee dataset using MFA
Yue Gu
  • Paper writing
  • Open-source the code
  • Prepare for the presentation
Qi Qu
  • KWS:
    • Finding ideal thresholds for b0-models in predefined scenes: Mandarin Chinese, Cantonese, Uyghur, and Kazakh.
    • Finding ideal thresholds for b6-models with fixed b0-model thresholds.
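Threshold selection as described above amounts to sweeping candidate thresholds on a labeled dev set and scoring each. A minimal sketch assuming "ideal" means maximum F1; the project's actual criterion (e.g. a fixed false-alarm budget) may differ:

```python
def best_threshold(scores, labels):
    """Return (threshold, F1) maximizing F1 on a labeled dev set.
    `labels` are 1 (keyword present) / 0 (absent); a detection fires
    when its score >= threshold. Candidates are the observed scores.
    """
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(s >= t and l for s, l in zip(scores, labels))
        fp = sum(s >= t and not l for s, l in zip(scores, labels))
        fn = sum(s < t and l for s, l in zip(scores, labels))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

The same sweep extends to the b6-over-b0 cascade by holding the b0 thresholds fixed while sweeping the b6 ones.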
  • AED:
    • Fixing parameters of Fbank feature extraction for CED and retraining classifiers.