2024-06-17

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • Review of a few papers from NCMMSC, MDPI, etc.
  • Review papers regarding AI for Medicine
  • Refine "the principle of AI education in primary schools"
  • A few public talks
Lantian Li
  • GPU status [1]
  • Completed all teaching for this semester.
  • Projects
    • AED -> System integration with Huawei, Bullying patent for FYT
    • TSE -> Preparing for the first phase delivery.
    • VSR -> 1200h+
    • Finance -> Reproducing R^2 SAC; overview of time-series modelling
  • Papers
    • NeuralScoring -> In progress
    • IS24 Camera-ready paper
    • CNVSRC 2024 baseline paper
Ying Shi
Zhenghai You
  • Started training on the complete data for the Huawei TSE project
  • Changed the adapt layer from concat to FiLM [2] (see the sketch after this list)
  • Tested inference time and SI-SDR of the online model
  • Considering a TSE network that combines the mixture and the enrollment with an attractor to extract speaker information
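  A minimal sketch of the concat-to-FiLM adapt-layer change, assuming the speaker embedding conditions each frame of an intermediate (B, T, F) feature map; the module names and dimensions are illustrative assumptions, not the project code.
    import torch
    import torch.nn as nn

    class ConcatAdapt(nn.Module):
        """Baseline: concatenate the speaker embedding to every time frame."""
        def __init__(self, feat_dim=256, spk_dim=192):
            super().__init__()
            self.proj = nn.Linear(feat_dim + spk_dim, feat_dim)

        def forward(self, x, spk):  # x: (B, T, F), spk: (B, S)
            spk = spk.unsqueeze(1).expand(-1, x.size(1), -1)
            return self.proj(torch.cat([x, spk], dim=-1))

    class FiLMAdapt(nn.Module):
        """FiLM: the speaker embedding predicts a per-channel scale (gamma) and shift (beta)."""
        def __init__(self, feat_dim=256, spk_dim=192):
            super().__init__()
            self.to_gamma = nn.Linear(spk_dim, feat_dim)
            self.to_beta = nn.Linear(spk_dim, feat_dim)

        def forward(self, x, spk):  # x: (B, T, F), spk: (B, S)
            gamma = self.to_gamma(spk).unsqueeze(1)  # (B, 1, F)
            beta = self.to_beta(spk).unsqueeze(1)    # (B, 1, F)
            return gamma * x + beta

    x, spk = torch.randn(4, 100, 256), torch.randn(4, 192)
    print(FiLMAdapt()(x, spk).shape)  # torch.Size([4, 100, 256])
  Unlike concatenation, FiLM leaves the feature dimension unchanged and injects the speaker information as a per-channel modulation rather than extra input channels.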
Junming Yuan
  • SSL model fine-tuning analysis v1 [3]; still needs to be checked.
Chen Chen
  • Release UY/CH-CHILD dataset
  • Help with the NCMMSC paper about CNVSRC 2024
  • CN-CVS2
    • 1200+ hours of data; phase 1 finishes on June 26th
    • Hand over the CN-CVS2 website tasks
  • ISCSLP paper (deadline: July 7, 20:00)
Xiaolou Li
  • LRS2 full test[4]
  • Paper reading
Zehua Liu
  • NCMMSC paper
  • Changed parameters; results seem good (still training) [5]
Pengqi Li
  • PhD mid-term assessment
  • Two NC papers
Wan Lin
Tianhao Wang
  • Neural Scoring exps[6]
    • shared encoder
    • channel attention (similar to EA-ASP; did not help)
    • early frequency attention (fbank level, still training)
Zhenyu Zhou
  • Huawei project
    • Summary of recent experimental results[7]
Junhui Chen
  • Neural Scoring supplementary experiments
    • Shared-encoder NS: NS > shared-encoder NS > EA-ASP (shows the importance of decoupling)
    • Attention variants (fbank-level enroll-aware attention seems useful; see the sketch after this list)
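  A rough sketch of what fbank-level enroll-aware attention could look like, assuming the enrollment embedding forms the query and the test utterance's fbank frames form the keys; the names and dimensions are assumptions, not the actual NS code.
    import torch
    import torch.nn as nn

    class EnrollAwareFbankAttention(nn.Module):
        """Re-weights test-utterance fbank frames by their relevance to the enrollment embedding."""
        def __init__(self, n_mels=80, enroll_dim=192, attn_dim=128):
            super().__init__()
            self.q = nn.Linear(enroll_dim, attn_dim)  # query from the enrollment embedding
            self.k = nn.Linear(n_mels, attn_dim)      # keys from the fbank frames

        def forward(self, fbank, enroll):  # fbank: (B, T, n_mels), enroll: (B, enroll_dim)
            q = self.q(enroll).unsqueeze(1)               # (B, 1, attn_dim)
            k = self.k(fbank)                             # (B, T, attn_dim)
            scores = (q * k).sum(-1) / k.size(-1) ** 0.5  # (B, T) scaled dot-product
            weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
            # Scale by T so the re-weighted features keep roughly the original magnitude.
            return fbank * weights * fbank.size(1)

    fbank, enroll = torch.randn(8, 300, 80), torch.randn(8, 192)
    print(EnrollAwareFbankAttention()(fbank, enroll).shape)  # torch.Size([8, 300, 80])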
Jiaying Wang
  • Debugging the cohort transformer structure (still unclear why the transformer does not work)
    • Deeper network: 2 attention heads, 8 layers per block, 4 blocks in total (failed)
    • Using only the MF training set (failed)
    • Using positional encoding and the transformer block from SpeechBrain (failed with both PIT and SI-SDR losses; see the loss sketch after this list)
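  For reference on the losses mentioned above, a minimal two-speaker permutation-invariant (PIT) SI-SDR loss; this is a generic sketch, not the project's SpeechBrain configuration.
    import torch

    def si_sdr(est, ref, eps=1e-8):
        """Scale-invariant SDR in dB for est, ref of shape (B, T)."""
        est = est - est.mean(dim=-1, keepdim=True)
        ref = ref - ref.mean(dim=-1, keepdim=True)
        proj = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps) * ref
        noise = est - proj
        return 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps) + eps)

    def pit_si_sdr_loss(est, ref):
        """est, ref: (B, 2, T). Evaluates both speaker orderings and keeps the better one."""
        perm0 = si_sdr(est[:, 0], ref[:, 0]) + si_sdr(est[:, 1], ref[:, 1])
        perm1 = si_sdr(est[:, 0], ref[:, 1]) + si_sdr(est[:, 1], ref[:, 0])
        best = torch.maximum(perm0, perm1) / 2   # mean SI-SDR under the best permutation
        return -best.mean()                      # negate: higher SI-SDR -> lower loss

    est, ref = torch.randn(4, 2, 16000), torch.randn(4, 2, 16000)
    print(pit_si_sdr_loss(est, ref))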
Yu Zhang
  • Implement R2SAC
  • Retrain Huawei Quantization Model
  • Paper reading
Wenqiang Du
  • Preparing for the final exam
Yang Wei
  • Huilan TTS
    • Export the ONNX model from the original format; still dealing with an inference error (see the sketch after this list).
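  A minimal sketch of an export-and-verify loop that can help localize the inference error: export a module to ONNX, run it with ONNX Runtime, and compare against the PyTorch output. The toy model, file name, and shapes are placeholders, not the Huilan TTS model.
    import numpy as np
    import torch
    import onnxruntime as ort

    # Placeholder module standing in for the TTS component being exported.
    model = torch.nn.Sequential(
        torch.nn.Linear(80, 256), torch.nn.ReLU(), torch.nn.Linear(256, 80)
    ).eval()
    dummy = torch.randn(1, 100, 80)

    torch.onnx.export(
        model, dummy, "tts_module.onnx",
        input_names=["feats"], output_names=["out"],
        dynamic_axes={"feats": {0: "batch", 1: "time"}, "out": {0: "batch", 1: "time"}},
        opset_version=17,
    )

    # Run the exported model and compare with PyTorch on the same input.
    sess = ort.InferenceSession("tts_module.onnx", providers=["CPUExecutionProvider"])
    onnx_out = sess.run(None, {"feats": dummy.numpy()})[0]
    torch_out = model(dummy).detach().numpy()
    print("max abs diff:", np.abs(onnx_out - torch_out).max())
  A large discrepancy here points at the export (unsupported ops, dynamic shapes), while a crash in the ONNX Runtime call points at the runtime inputs.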
Lily
  • Thesis
  • Prepare slides for Xinjiang teacher's course
Turi
  • End of semester course project presentations
Yue Gu
Qi Qu
  • AED
    • Model tested on different data.
    • Tried some other models, e.g. Zipformer from sherpa-onnx.
  • KWS
    • Data collected and processed to address the poor performance.