People |
This Week |
Next Week |
Task Tracking (Deadline)
|
Dong Wang
|
- Presentation for Chongqing event
- NMI revision submitted
|
|
|
Lantian Li
|
- GPU status [1]
- Projects
- AED -> miniaturization (TDNN, 166K params, AUC 0.93)
- TSE -> streaming test and quantization
- VSR -> CNCVS3: 34h
- ASV -> a new project
- Finance -> fixed a data-preparation bug
- Research
- Neural Scoring: experiments done.
- TSE: reproducing DPRNN-IRA.
- VSR-LLM: in-context learning (ICL) experiments.
- SoundFilter
- AI graph
- Slides checking (23/50)
- High school handbook (3/40)
|
- High school handbook (15/40)
|
|
Ying Shi
|
- Text-enrollment keyword spotting with U-Net
- Current AUC (U-Transformer-Net): 0.98
- Previous AUC (plain Transformer + keyword decoder): 0.94
- More tests for the next HUAWEI project (TC-ASR)
- aishell-clean-test KCS-CER: 9.34%
- aishell-2mix-target-0dB-SNR KCS-CER: 16.36%
- aishell-2mix-target-3dB-SNR KCS-CER: 16.69%
- FA test on 24h of Yi Zhongtian audio: 1065 / 172800 times
- Testing whether NLU can control FAs
- FA PPL on n-gram LM: 405.31
- Positive-content PPL on n-gram LM: 95.95
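The PPL numbers above (405.31 for false alarms vs. 95.95 for positive content) suggest a perplexity gate: score each hypothesis under an in-domain LM and reject high-PPL ones. A minimal sketch with a toy add-one-smoothed bigram LM; all data, names, and the gating logic here are illustrative assumptions, not the project's actual setup:

```python
import math

def train_bigram(corpus):
    """Count bigrams/unigrams over a list of token lists."""
    counts, unigrams, vocab = {}, {}, set()
    for sent in corpus:
        for prev, cur in zip(sent, sent[1:]):
            counts[(prev, cur)] = counts.get((prev, cur), 0) + 1
            unigrams[prev] = unigrams.get(prev, 0) + 1
        vocab.update(sent)
    return counts, unigrams, len(vocab)

def bigram_ppl(tokens, counts, unigrams, vocab_size):
    """Perplexity under an add-one-smoothed bigram LM."""
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        num = counts.get((prev, cur), 0) + 1
        den = unigrams.get(prev, 0) + vocab_size
        log_prob += math.log(num / den)
    n = max(len(tokens) - 1, 1)
    return math.exp(-log_prob / n)

# LM trained on toy "positive" command transcripts (hypothetical data)
corpus = [["turn", "on", "the", "light"],
          ["turn", "off", "the", "light"],
          ["play", "the", "music"]]
counts, unigrams, V = train_bigram(corpus)

positive = bigram_ppl(["turn", "on", "the", "music"], counts, unigrams, V)
false_alarm = bigram_ppl(["chapter", "three", "of", "history"], counts, unigrams, V)

# In practice, a threshold tuned on dev data would reject high-PPL hypotheses.
assert positive < false_alarm  # in-domain text scores lower PPL
```

The same idea scales to the reported setup by swapping the toy counts for the project's n-gram LM.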
|
|
|
Zhenghai You
|
|
|
|
Junming Yuan
|
- Started HuBERT pretraining experiments using fairseq
- Fixed some bugs and started pretraining on our Libri-keyword corpus.
- Beginner's guide for training HuBERT with fairseq: [2]
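For reference, fairseq launches HuBERT pretraining through its hydra entry point; a minimal sketch following fairseq's examples/hubert README (all paths are placeholders, and the k-means label setup is assumed to have been prepared per that README):

```shell
# Pretrain HuBERT base with fairseq's hydra entry point.
# /path/to/... are placeholders; "km" labels are the k-means units
# produced by the clustering step in examples/hubert.
fairseq-hydra-train \
  --config-dir examples/hubert/config/pretrain \
  --config-name hubert_base_librispeech \
  task.data=/path/to/tsv_dir \
  task.label_dir=/path/to/km_labels \
  task.labels='["km"]' \
  model.label_rate=100
```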
|
|
|
Xiaolou Li
|
- VSP-LLM code (with help from Zehua)
- LLaMA-Factory attempt failed
- Paper reading and report
|
|
|
Zehua Liu
|
- ICL code
- VSP-LLM reproduction succeeded
- Paper reading and sharing
|
|
|
Pengqi Li
|
- Learned EA-ASP with the help of Tianhao.
- Researched various variants of attention pooling.
- Designed a new attention pooling with a condition. [3]
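The design itself is in [3]; as a generic illustration only (not their method), conditioned attention pooling typically scores each frame from the frame feature concatenated with a global condition vector (e.g., an enrollment embedding), then pools with the resulting weights. A self-contained toy sketch with hypothetical parameters:

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def conditioned_attention_pooling(frames, condition, w, v):
    """Attentive pooling whose frame scores also depend on a global
    condition vector (concatenated to each frame before scoring).

    frames: T x D frame-level features; condition: D-dim vector;
    w: (2D x H) projection and v: H-dim scoring vector -- toy parameters.
    """
    scores = []
    for f in frames:
        x = f + condition  # concatenation [frame; condition]
        h = [math.tanh(sum(x[i] * w[i][j] for i in range(len(x))))
             for j in range(len(v))]
        scores.append(sum(h[j] * v[j] for j in range(len(v))))
    alpha = softmax(scores)
    D = len(frames[0])
    mean = [sum(alpha[t] * frames[t][d] for t in range(len(frames)))
            for d in range(D)]
    return mean, alpha

random.seed(0)
T, D, H = 5, 4, 3
frames = [[random.gauss(0, 1) for _ in range(D)] for _ in range(T)]
condition = [random.gauss(0, 1) for _ in range(D)]
w = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(2 * D)]
v = [random.gauss(0, 0.1) for _ in range(H)]

mean, alpha = conditioned_attention_pooling(frames, condition, w, v)
assert abs(sum(alpha) - 1.0) < 1e-9  # attention weights form a distribution
assert len(mean) == D
```

Changing the condition vector changes the attention weights, which is the point: the same frames can be pooled differently per enrollment.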
|
|
|
Wan Lin
|
- Neural Scoring: MUSAN experiments, paper writing, and reference survey
|
|
|
Tianhao Wang
|
- Neural Scoring: MUSAN experiments
- New-task investigation: SoundFilter
- Project work
|
|
|
Zhenyu Zhou
|
|
|
|
Junhui Chen
|
- Neural Scoring
- Exp: MUSAN test, transformer-layer test
- Paper writing
|
|
|
Jiaying Wang
|
- wsj dataset:
- 2mix baseline (done)
- 2mix + cohort: poor performance; need to test the EER on the wsj dataset
- librimix dataset:
- 3mix baseline (done)
- 3mix + cohort: overfits and loss > 0 -> plan to inspect the transformer attention map
|
- Use the condition-chain framework on cohort
|
|
Yu Zhang
|
- TDNN 200K training [4]
- R2_SAC: still debugging
|
|
|
Wenqiang Du
|
- Continuously upgrading the model according to the plan
- Primary school handbook (7/46)
|
|
|
Yang Wei
|
|
|
|
Lily
|
- Thesis
- Daily Work at AIRadiance
- Live Broadcast
- Preparing a lecture for Lenovo
|
|
|
Turi
|
- Data collection
- Almost finished: 54K utterances (~100+ hrs)
- Experiment
- Trained a standard Conformer on 65 hrs of data using WeNet; loss decreases normally [5]
|
- Add the recently collected data and retrain
|
|
Yue Gu
|
- Fixed bugs
- Read several recently published papers, then completed a draft of the abstract and introduction (6.5/9)
|
|
|
Qi Qu
|
- AED:
- CED-based classifier: libraries and Android demo (running and collecting FAs).
|
|
|