2024-07-15
Latest revision as of 10:51, 15 July 2024 (Mon)

People / This Week / Next Week / Task Tracking (Deadline)
Dong Wang
  • Presentation for Chongqing event
  • NMI revision submitted
Lantian Li
  • GPU status [1]
    • Rabbit05 is ready.
  • Projects
    • AED -> Miniaturization (TDNN, 166K params, AUC 0.93)
    • TSE -> Streaming test and quantization
    • VSR -> CNCVS3: 34h
    • ASV -> A new project
    • Finance -> Fix data preparation bug
  • Research
    • NeuralScoring: Exps done.
    • TSE: Reproducing DPRNN-IRA.
    • VSR-LLM: ICL exps.
    • SoundFilter
  • AI graph
    • Slides checking (23/50)
    • High school handbook (3/40)
  • High school handbook (15/40)
Ying Shi
  • Text-enrolled keyword spotting with U-Net
    • Current AUC (U-Transformer net): 0.98
    • Previous AUC (normal Transformer + keyword decoder): 0.94
  • More tests for the next HUAWEI project (TC-ASR)
    • aishell-clean-test KCS-CER: 9.34%
    • aishell-2mix-target-0dB-SNR KCS-CER: 16.36%
    • aishell-2mix-target-3dB-SNR KCS-CER: 16.69%
    • FA test: 24h Yi Zhongtian audio: 1065 times / 172800 times
  • Testing whether NLU can control FA (see sketch below)
    • FA PPL on n-gram LM: 405.31
    • Positive-content PPL on n-gram LM: 95.95
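A minimal sketch of the perplexity-based FA filtering idea above, assuming a KenLM n-gram model; the model path and the decision threshold are illustrative assumptions (the report only gives the two PPL values), not the actual setup.
<syntaxhighlight lang="python">
# pip install kenlm  (Python bindings for KenLM n-gram language models)
import kenlm

LM_PATH = "char_3gram.arpa"   # assumed LM file, not the model used in the report
PPL_THRESHOLD = 200.0         # assumed cut-off between FA (~405) and positive (~96) PPL

lm = kenlm.Model(LM_PATH)

def looks_like_false_alarm(hypothesis: str) -> bool:
    """Flag a recognized segment whose n-gram perplexity is implausibly high."""
    return lm.perplexity(hypothesis) > PPL_THRESHOLD
</syntaxhighlight>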
Zhenghai You
  • Speed perturbation & utterance augmentation exps; DPRNN-IRA [https://z1et6d3xtb.feishu.cn/docx/WSQvdytICo3ZHwxvVvdcRTchnIg] (see sketch below)
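Below, a minimal sketch of speed-perturbation augmentation with torchaudio sox effects; the input file name and the 0.9/1.0/1.1 factors are assumptions, not the exact settings of the experiment above.
<syntaxhighlight lang="python">
import torchaudio

# Assumed input file; 0.9/1.0/1.1 are the usual Kaldi-style perturbation factors.
wav, sr = torchaudio.load("mixture.wav")
for factor in (0.9, 1.0, 1.1):
    perturbed, _ = torchaudio.sox_effects.apply_effects_tensor(
        wav, sr, [["speed", str(factor)], ["rate", str(sr)]]
    )
    print(f"speed {factor}: {perturbed.shape[1]} samples")
</syntaxhighlight>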
Junming Yuan
  • Started HuBERT pretraining experiment using fairseq
    • Fixed some bugs and started pretraining on our Libri-keyword corpus.
    • Beginner's guide for training HuBERT with fairseq: [3]
Xiaolou Li
  • VSP-LLM code (with help from Zehua)
  • Llama Factory: failed
  • Paper reading and report
Zehua Liu
  • ICL code
  • VSP-LLM reproduced successfully
  • Read paper and shared
Pengqi Li
  • Learned EA-ASP with the help of Tian Hao.
  • Researched various variants of attention pooling.
  • Designed a new attention pooling with condition (see sketch below). [https://z1et6d3xtb.feishu.cn/docx/PgYpdmtH2oE1YexbDB8c5jW0nTh]
  • Test & Investigate
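A rough sketch of what "attention pooling with condition" can look like: frame-level features are pooled with attention weights that also see a condition vector. The layer sizes and the concatenation-based conditioning are illustrative assumptions, not the design in the linked note.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ConditionedAttentivePooling(nn.Module):
    """Attentive pooling whose frame weights depend on an extra condition vector."""

    def __init__(self, feat_dim: int, cond_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(feat_dim + cond_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, frames: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim); cond: (batch, cond_dim)
        cond_exp = cond.unsqueeze(1).expand(-1, frames.size(1), -1)
        h = torch.tanh(self.proj(torch.cat([frames, cond_exp], dim=-1)))
        w = torch.softmax(self.score(h), dim=1)   # (batch, time, 1) attention weights
        return (w * frames).sum(dim=1)            # (batch, feat_dim) pooled embedding

# Toy usage with assumed dimensions.
pool = ConditionedAttentivePooling(feat_dim=256, cond_dim=192)
print(pool(torch.randn(4, 100, 256), torch.randn(4, 192)).shape)  # torch.Size([4, 256])
</syntaxhighlight>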
Wan Lin
  • Neural Scoring: MUSAN exps, paper writing, and reference survey (noise-augmentation sketch below)
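For reference, a small sketch of MUSAN-style noise augmentation at a fixed SNR; the file paths and the 5 dB setting are assumptions, not the configuration of the experiments above.
<syntaxhighlight lang="python">
import torch
import torchaudio

speech, sr = torchaudio.load("speech.wav")                            # assumed path
noise, _ = torchaudio.load("musan/noise/free-sound/noise-0001.wav")   # assumed MUSAN clip
noise = noise[:, : speech.size(1)]                # assumes the noise clip is long enough

snr_db = 5.0                                      # assumed SNR
speech_power = speech.pow(2).mean()
noise_power = noise.pow(2).mean()
scale = torch.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
noisy = speech + scale * noise                    # mixture at roughly 5 dB SNR
</syntaxhighlight>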
Tianhao Wang
  • Neural Scoring: MUSAN exps
  • New task investigation: Sound Filter
  • project
Zhenyu Zhou
Junhui Chen
  • Neural Scoring
    • Exps: MUSAN test, transformer-layer test
    • Paper writing
Jiaying Wang
  • WSJ dataset:
    • 2mix baseline (done)
    • 2mix + cohort: poor performance; need to test the EER on the WSJ dataset
  • LibriMix dataset:
    • 3mix baseline (done)
    • 3mix + cohort: overfits and loss > 0 --> plan to inspect the transformer attention map
  • Use the condition-chain framework on the cohort
Yu Zhang
  • TDNN 200k training [https://z1et6d3xtb.feishu.cn/wiki/DvoYwcFfwiGMZskJbhkchYTEnBe?from=from_copylink] (see sketch below)
  • R2_SAC: still debugging
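For orientation, a minimal sketch of a small dilated TDNN in roughly the 150k-200k-parameter range; the layer widths and the classification head are assumptions, not the model actually being trained.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class SmallTDNN(nn.Module):
    """Miniature TDNN built from dilated 1-D convolutions (assumed widths)."""

    def __init__(self, in_dim: int = 40, hid: int = 128, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hid, kernel_size=5, dilation=1), nn.ReLU(), nn.BatchNorm1d(hid),
            nn.Conv1d(hid, hid, kernel_size=3, dilation=2), nn.ReLU(), nn.BatchNorm1d(hid),
            nn.Conv1d(hid, hid, kernel_size=3, dilation=3), nn.ReLU(), nn.BatchNorm1d(hid),
            nn.Conv1d(hid, hid, kernel_size=1), nn.ReLU(), nn.BatchNorm1d(hid),
        )
        self.head = nn.Linear(hid, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim, time) acoustic features
        h = self.net(x).mean(dim=-1)   # temporal average pooling
        return self.head(h)

model = SmallTDNN()
# ~140k parameters with these widths; adjust `hid` to hit a 166k/200k budget.
print(sum(p.numel() for p in model.parameters()))
</syntaxhighlight>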
Wenqiang Du
  • Continuously upgrading the model according to the plan
  • Primary school handbook (7/46)
Yang Wei
  • AIBabel KWS
    • Train Uyghur and Kazakh models with the updated FA data
Lily
  • Thesis
  • Daily Work at AIRadiance
    • Live Broadcast
    • Prepare Lenovo's Lecture
Turi
  • Data collection
    • Almost finished: 54K utterances (~100+ hrs)
  • Experiment
    • Trained a standard Conformer on 65 hrs of data using WeNet; the loss decreases normally [https://uestc.feishu.cn/docx/Qpi1d1UoBolMv6xfOmIcpjbqnXf?from=from_copylink]
  • Add recently collected data and train
Yue Gu
  • Fixed bugs
  • Read several recently published papers, then completed a draft of the abstract and introduction (6.5/9)
Qi Qu
  • AED:
    • CED-based classifier: libs and Android demo (running and collecting FAs).