2024-07-08

From cslt Wiki
 
Latest revision as of 10:57, 8 July 2024 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • Refinement of two ISCSLP papers
  • AI Graph slides checking (through Chapter 23)
  • Content design for medicine vocational education
  • Paper review for NC


Lantian Li
  • GPU status [https://z1et6d3xtb.feishu.cn/wiki/XGcGwRK5viJmpRkjH9AczIhynCh]
    • Rabbit05 will be ready this week.
  • Projects
    • AED -> miniaturization
    • TSE -> finish 1st phase delivery
    • VSR -> start a new data collection phase
    • Finance -> R^2 testing
  • Papers
    • NeuralScoring
    • check ISCSLP paper
  • AI graph
    • Slides checking (18/50)
    • High school handbook (2/40)
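If the "R^2 testing" for the Finance project above refers to the usual coefficient of determination (an assumption; R^2 is not defined in this report), it can be computed as a minimal sketch:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    Assumes 'R^2 testing' means the standard regression metric;
    the function name and interface are illustrative only.
    """
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)   # total variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residual error
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives 1.0; predicting the mean everywhere gives 0.0.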
Ying Shi
  • Text-enrollment keyword spotting [https://z1et6d3xtb.feishu.cn/docx/LoFAdgVpgo3YlAx1oBAcc9uwn5b?from=from_copylink]
Zhenghai You
  • Finish Huawei project first-phase delivery
Junming Yuan
  • Re-checked the multilingual experiment; results in [https://z1et6d3xtb.feishu.cn/docx/B2Etd5sKwo9jiyx30Vwc9tbXnue]
    • The opposite trend appears on different English datasets.
    • Our MT-pretrained model shows better performance on the multilingual 2-mixed test.
Chen Chen
Xiaolou Li
  • Different-length inference test
  • MLLM paper reading and LLaMA-Factory testing
  • Report and interview preparation
  • Prepare dataset for LLaMA fine-tuning
  • Try different PEFT methods.
Zehua Liu
  • Friday report
  • Huawei interview
  • Training for VSP-LLM (443 h)
Pengqi Li
  • Supervised training of the ASP model finished successfully [https://z1et6d3xtb.feishu.cn/docx/PgYpdmtH2oE1YexbDB8c5jW0nTh].
Wan Lin
  • Neural Scoring
    • Experiments (CN & layer_num & chunk_len)
    • Paper revision
Tianhao Wang
  • Neural Scoring [https://z1et6d3xtb.feishu.cn/docx/BywjdkGvNou12sxQ4dAcxYa9noh]
    • Parameter tuning for three-genre CN fine-tuning (minDCF is weak)
    • Noisy training and testing with MUSAN (minDCF is weak)
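The minDCF cited above is the standard normalized minimum detection cost used in speaker verification. As a reference, a minimal sketch of how it can be computed from trial scores (function name and default costs are illustrative, not from this report):

```python
import numpy as np

def min_dcf(target_scores, nontarget_scores, p_target=0.01, c_miss=1.0, c_fa=1.0):
    """Minimum normalized detection cost over all score thresholds."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    labels = labels[np.argsort(scores)]          # sort trials by score, ascending
    n_tgt, n_non = labels.sum(), (1 - labels).sum()
    # Sweep thresholds: after rejecting the i lowest-scoring trials,
    # p_miss = fraction of targets rejected, p_fa = fraction of nontargets accepted.
    p_miss = np.concatenate([[0.0], np.cumsum(labels) / n_tgt])
    p_fa = np.concatenate([[1.0], 1.0 - np.cumsum(1 - labels) / n_non])
    dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)
    return dcf.min() / min(c_miss * p_target, c_fa * (1 - p_target))
```

Perfectly separated scores give 0; a value near 1 means the system does no better than always accepting or always rejecting.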
Zhenyu Zhou
  • Huawei Project Submission
Junhui Chen
  • Neural Scoring
    • Experiments with one Transformer encoder layer (good performance)
    • Paper refinement
Jiaying Wang
  • DPTNet WSJ 2-mix (training)
  • DPTNet Libri3Mix (done)
  • DPTNet Libri3Mix cohort (training; seems to overfit, poor performance)
  • Condition-chain code in preparation
Yu Zhang
  • R2SAC results [https://z1et6d3xtb.feishu.cn/docx/Bs3gd4rk7oSsfaxBUYhc35Ssn2c]; the results were not what we expected.
Wenqiang Du
  • Training of some local dialect models [6]
Yang Wei
  • AIBabel
    • Train Uyghur and Kazakh KWS models.
Lily
  • Thesis writing
  • ISCSLP paper submission
  • AIRadiance daily work
  • Live broadcast
Turi
  • Updated the data collection app to upload in the background while users record.
  • Prepared 60 hours of data to start experiments.
  • Tried training with the WeNet toolkit for a few epochs (loss fluctuates).
Yue Gu
  • Paper writing: finished 5.5 of 9 pages.
  • Found a bug that affects the real-time factor (RTF); re-testing now.
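The RTF mentioned above is the usual ratio of wall-clock processing time to audio duration (RTF < 1 means faster than real time). A minimal measurement sketch; the function name and interface are illustrative, not from the project code:

```python
import time

def real_time_factor(decode_fn, audio, audio_duration_s):
    """RTF = wall-clock decoding time / audio duration.

    decode_fn: any callable that processes the audio (illustrative).
    """
    start = time.perf_counter()
    decode_fn(audio)                     # run the system under test
    elapsed = time.perf_counter() - start
    return elapsed / audio_duration_s    # < 1.0 means faster than real time
```

Averaging over many utterances, and warming the model up first, gives a more stable number than a single run.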
Qi Qu
  • AED:
    • Fixed some bugs while developing the C/JNI/Python/Go libraries.
    • Unit tests.
    • Collected more positive/negative samples for classifier training.
  • KWS:
    • Collected and cleaned data for the new Mandarin Chinese wordlist: 48 keywords, ~200 speakers, ~60k audio segments.
    • Contextual keyword data (keywords embedded in contextual utterances) collected and annotated (yet to be delivered).
  • AED:
    • Classifier to be trained.
    • On-device integration test.
  • KWS:
    • Test datasets to be delivered.