Difference between revisions of "2024-07-08"

From cslt Wiki

(14 intermediate revisions by 11 users not shown)

Latest revision as of 10:57, 8 July 2024 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • ISCSLP 2 papers refinement
  • AI Graph slides checking (to chapter 23)
  • Content design for medical vocational education
  • Paper review for NC


Lantian Li
  • GPU status [1]
    • Rabbit05 will be ready this week.
  • Projects
    • AED -> miniaturization
    • TSE -> finish 1st phase delivery
    • VSR -> start a new data collection phase
    • Finance -> R^2 testing
  • Papers
    • NeuralScoring
    • check ISCSLP paper
  • AI graph
    • Slides checking (18/50)
    • High school handbook (2/40)
Ying Shi
  • Text-enroll keyword spotting: https://z1et6d3xtb.feishu.cn/docx/LoFAdgVpgo3YlAx1oBAcc9uwn5b
Zhenghai You
  • Finish Huawei project first-phase delivery
Junming Yuan
  • Check the multi-lingual experiment again; results in https://z1et6d3xtb.feishu.cn/docx/B2Etd5sKwo9jiyx30Vwc9tbXnue
    • the opposite trend appears on different English datasets
    • our MT-pretrained model shows better performance in the 2-mixed test with the multi-lingual setup
Chen Chen
Xiaolou Li
  • Different length inference test
  • MLLM paper reading and LLaMA-Factory testing
  • Report and Interview preparation
  • Prepare dataset for LLaMA finetuning
  • Try different PEFT methods.
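The PEFT methods mentioned above include approaches such as LoRA; the sketch below is a toy illustration of the LoRA idea (train a low-rank pair instead of the full weight matrix), with made-up dimensions and no connection to the project's actual finetuning code:

```python
import numpy as np

# LoRA sketch: effective weight W' = W + (alpha / r) * B @ A, where only the
# small matrices A and B are trained. All sizes here are illustrative.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 4, 6, 2, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

W_eff = W + (alpha / r) * B @ A         # effective weight before any training
assert np.allclose(W_eff, W)            # zero-init B means no change at step 0
```

With these toy sizes the trainable pair has 20 parameters versus 24 for the full matrix; at realistic transformer dimensions the saving is far larger.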
Zehua Liu
  • Friday Report
  • HUAWEI Interview
  • Training for VSP-LLM (443 h)
Pengqi Li
  • The supervised ASP model has been successfully trained: https://z1et6d3xtb.feishu.cn/docx/PgYpdmtH2oE1YexbDB8c5jW0nTh
Wan Lin
  • Neural Scoring
    • Experiments (CN, layer_num, chunk_len)
    • Paper revision
Tianhao Wang
  • Neural Scoring: https://z1et6d3xtb.feishu.cn/docx/BywjdkGvNou12sxQ4dAcxYa9noh
    • parameter tuning for three-genre CN fine-tuning (minDCF is weak)
    • noisy training and testing with MUSAN (minDCF is weak)
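The minDCF figures cited above come from the project's own evaluation; as a reference for the metric itself, the following is a minimal sketch of how minDCF is typically computed from verification scores, assuming the common cost setup (p_target = 0.01, c_miss = c_fa = 1) rather than the parameters actually used in these experiments:

```python
def min_dcf(target_scores, nontarget_scores, p_target=0.01, c_miss=1.0, c_fa=1.0):
    """Sweep candidate thresholds and return the minimum normalized detection cost."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores)) + [float("inf")]
    norm = min(c_miss * p_target, c_fa * (1 - p_target))  # cost of a trivial system
    best = float("inf")
    for t in thresholds:
        p_miss = sum(s < t for s in target_scores) / len(target_scores)
        p_fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)
        best = min(best, dcf / norm)
    return best

print(min_dcf([2.0, 1.5, 0.9], [0.2, -0.1, 0.4]))  # 0.0: toy scores are perfectly separable
```

Lower is better; 0 means perfect separation of target and non-target trials at some threshold.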
Zhenyu Zhou
  • Huawei Project Submission
Junhui Chen
  • Neural Scoring
    • Experiments with a single Transformer encoder layer (good performance)
    • paper refinement
Jiaying Wang
  • DPTNet WSJ 2-mix (training)
  • DPTNet Libri3Mix (done)
  • DPTNet Libri3Mix cohort (training; seems to overfit, with poor performance)
  • Preparing condition-chain code
Yu Zhang
  • R2SAC results (https://z1et6d3xtb.feishu.cn/docx/Bs3gd4rk7oSsfaxBUYhc35Ssn2c); the results were not what we expected
Wenqiang Du
  • Training of some local dialect models [6]
Yang Wei
  • AIBabel
    • Train Uyghur and Kazakh KWS model.
Lily
  • Thesis writing
  • ISCSLP paper submission
  • AIRadiance daily works
  • Live broadcast
Turi
  • Updated the Data collection app to enable uploading in the background while users record.
  • Prepared 60hrs of data to start experiment
  • Tried running with the WeNet toolkit for a few epochs (loss fluctuates)
Yue Gu
  • paper writing: finished 5.5 pages (5.5/9)
  • found a bug that influences the real-time factor (RTF); now testing again
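For reference, RTF is simply processing time divided by audio duration; a value below 1 means faster than real time. A trivial sketch with toy numbers (not measurements from this project):

```python
# Real-time factor: processing_seconds / audio_seconds.
def rtf(processing_seconds, audio_seconds):
    return processing_seconds / audio_seconds

print(rtf(12.0, 60.0))  # 0.2: one minute of audio processed in 12 seconds
```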
Qi Qu
  • AED:
    • Fixed some bugs while developing the C/JNI/Python/Go libs.
    • Unit test.
    • More positive/negative samples collected for classifier training.
  • KWS:
    • Data collected and cleaned for the new Mandarin Chinese wordlist: 48 keywords, ~200 speakers, ~60k audio segments.
    • Contextual keyword data (keywords embedded in contextual utterances) collected and annotated (yet to be delivered).
  • AED:
    • Classifier to be trained.
    • On-device integration test.
  • KWS:
    • Test datasets to be delivered.