People |
This Week |
Next Week |
Task Tracking (Deadline)
|
Dong Wang
|
- AI handbook check & polish (primary version)
|
|
|
Lantian Li
|
- Complete 2025 AI calendar
|
|
|
Ying Shi
|
- Cosine-Guided Order vs. Dominance-Guided Order
- Token error rate and CTC-loss confusion matrix [1] (sketch below)
|
- Design a learnable order
- Think about how to use Condition ...
|
|
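A minimal sketch of how a token-level confusion matrix can be accumulated from CTC greedy decoding, for reference alongside the token error rate above; the blank index, data format, and the simple edit-distance alignment are assumptions, not the code used in the experiments [1].

```python
# Hypothetical sketch: token confusion matrix from CTC greedy decoding.
# Assumes per-utterance `logits` of shape (T, V) and integer reference token lists.
import numpy as np

BLANK = 0  # assumed CTC blank index

def ctc_greedy_decode(logits: np.ndarray) -> list[int]:
    """Argmax decode: collapse repeats, then drop blanks."""
    best = logits.argmax(axis=-1)
    collapsed = [t for i, t in enumerate(best) if i == 0 or t != best[i - 1]]
    return [int(t) for t in collapsed if t != BLANK]

def align(ref: list[int], hyp: list[int]):
    """Levenshtein alignment; returns (ref_token, hyp_token) pairs for matches/substitutions."""
    n, m = len(ref), len(hyp)
    d = np.zeros((n + 1, m + 1), dtype=int)
    d[:, 0] = np.arange(n + 1)
    d[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if d[i, j] == d[i - 1, j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
        elif d[i, j] == d[i - 1, j] + 1:
            i -= 1  # deletion
        else:
            j -= 1  # insertion
    return pairs

def confusion_matrix(batches, vocab_size: int) -> np.ndarray:
    """Accumulate ref -> hyp counts over (logits, reference) pairs."""
    cm = np.zeros((vocab_size, vocab_size), dtype=int)
    for logits, ref in batches:
        for r, h in align(ref, ctc_greedy_decode(logits)):
            cm[r, h] += 1
    return cm
```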
Zhenghai You
|
- Paper reading for weekly report
- Supplement the SPK-AUG experiment and plan to start writing the paper [2]
- Attempt to add an SSL loss to the existing E2E-TSE (the current results are not good; sketch below)
|
|
|
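A minimal sketch of one way an SSL loss could be attached to an E2E-TSE objective (SI-SDR on the waveform plus an L1 distance between frozen SSL features of the estimate and the clean target); `tse_model`, `ssl_model`, and the loss weight are hypothetical placeholders, not the actual system.

```python
# Hypothetical sketch: adding an SSL feature-matching term to an E2E-TSE loss.
import torch
import torch.nn.functional as F

def si_sdr_loss(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative SI-SDR, averaged over the batch (est/ref: [B, T])."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (est * ref).sum(-1, keepdim=True) * ref / (ref.pow(2).sum(-1, keepdim=True) + eps)
    noise = est - proj
    si_sdr = 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps) + eps)
    return -si_sdr.mean()

def tse_loss_with_ssl(tse_model, ssl_model, mixture, enroll, target, ssl_weight=0.1):
    """Combine waveform SI-SDR with an L1 distance between frozen SSL features."""
    est = tse_model(mixture, enroll)        # estimated target speech [B, T]
    loss = si_sdr_loss(est, target)
    with torch.no_grad():
        ref_feat = ssl_model(target)        # frozen SSL features of the clean target
    est_feat = ssl_model(est)               # gradients flow through the estimate only
    return loss + ssl_weight * F.l1_loss(est_feat, ref_feat)
```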
Junming Yuan
|
- MT-Hubert paper writing (1/2)
- Check the high school AI handbook (237/362)
|
|
|
Xiaolou Li
|
- Data processing server code upgrade; data transfer
- Experiment on 1500h (training in progress)
- Odds and ends: paper reading, coursework, calendar delivery...
|
- Upgrade the AV-HuBERT training code and test it
|
|
Zehua Liu
|
- Train the LRS3 experiment for AlignVSR
- Iterative inference (7 iterations) gives 43.88%, better than no iterative inference (45.74%) (sketch below)
- Using a weaker encoder to generate the corrupted text seems slightly better than before (43.88% < 44.54%) [3]
|
|
|
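A minimal sketch of the iterative-inference loop described above: the hypothesis from one decoding pass is fed back as the text condition for the next pass. `model.decode` and its arguments are hypothetical placeholders, not the AlignVSR code itself.

```python
# Hypothetical sketch of N-pass iterative inference: the hypothesis from pass k
# is fed back as the (possibly corrupted) text condition for pass k+1.
def iterative_inference(model, video_feats, num_iters: int = 7, init_text=None):
    hyp = init_text  # first pass may be unconditioned, or conditioned on a weaker encoder's output
    for _ in range(num_iters):
        hyp = model.decode(video_feats, text_condition=hyp)
    return hyp
```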
Pengqi Li
|
- Conduct a preliminary experiment for the proposal (IS25-XAI); a clear doc must be produced by the end of this week
- Check the high school AI handbook (1/3)
|
|
|
Wan Lin
|
- NS: margin BCE loss & multi-enroll training (sketch below)
|
|
|
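A minimal sketch of a margin BCE loss for neural scoring (NS), assuming the margin is subtracted from the logits of target (same-speaker) trials before binary cross-entropy so that positives must clear the decision boundary by at least the margin; the margin and scale values are illustrative, not the settings used here.

```python
# Hypothetical sketch of a margin BCE loss for neural-scoring verification.
import torch
import torch.nn.functional as F

def margin_bce_loss(logits: torch.Tensor, labels: torch.Tensor,
                    margin: float = 0.2, scale: float = 10.0) -> torch.Tensor:
    """logits/labels: [B]; labels are 1 for target (same-speaker) trials, 0 otherwise."""
    adjusted = logits - margin * labels.float()   # make positives harder; negatives unchanged
    return F.binary_cross_entropy_with_logits(scale * adjusted, labels.float())
```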
Tianhao Wang
|
- Try using attention instead of FiLM (sketch below):
- Cross-attention: validation loss -8.065 (cross-attn) vs. -9.986 (FiLM)
- Self-attention: loss doesn't decrease
|
|
|
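A minimal sketch contrasting the two conditioning mechanisms compared above: FiLM applies a per-channel scale and shift predicted from the condition vector, while cross-attention lets the features attend to a sequence of condition vectors. Dimensions and module names are illustrative, not the actual model.

```python
# Hypothetical sketch: FiLM conditioning vs. cross-attention conditioning.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Condition features by a per-channel scale (gamma) and shift (beta)."""
    def __init__(self, cond_dim: int, feat_dim: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: [B, T, D], cond: [B, C]
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return gamma.unsqueeze(1) * x + beta.unsqueeze(1)

class CrossAttnCond(nn.Module):
    """Condition features by attending from x (queries) to the condition (keys/values)."""
    def __init__(self, feat_dim: int, cond_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads,
                                          kdim=cond_dim, vdim=cond_dim,
                                          batch_first=True)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: [B, T, D], cond: [B, S, C] (a sequence of conditioning vectors)
        out, _ = self.attn(query=x, key=cond, value=cond)
        return x + out  # residual connection
```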
Xiaoxue Luo
|
- Prepare for the final exam
|
|
|
Zhenyu Zhou
|
|
|
|
Junhui Chen
|
|
|
|
Jiaying Wang
|
|
|
|
Yu Zhang
|
- Multi-policy pipeline building (tech/sentiment policy generation done)
- Some copyright-related work in Royal Flush
|
|
|
Wenqiang Du
|
|
|
|
Yang Wei
|
- Prepare an ASR Java JAR file and API documentation for Zhongchuan
|
|
|
Turi
|
- Prepared fine-tuning code for MMS (sketch below)
- MMS from Meta outperforms Whisper and provides adapters for every supported language, including Oromo
- The server is busy now; plan to run the MMS experiments
- Preparing the PPT for the midterm defense
|
|
|
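A minimal sketch of loading MMS with a per-language adapter through Hugging Face transformers, as a starting point for the fine-tuning code mentioned above; the checkpoint name and the "orm" (Oromo) language code are assumptions to verify against the MMS release.

```python
# Hypothetical sketch: loading the MMS CTC model with a language adapter via
# Hugging Face transformers. The checkpoint name and the "orm" (Oromo) language
# code are assumptions; check them against the actual MMS release before use.
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

processor.tokenizer.set_target_lang("orm")   # switch the tokenizer to the Oromo vocabulary
model.load_adapter("orm")                    # load the Oromo language adapter weights
```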
Yue Gu
|
- Read several papers about synthetic data for personalized ASR and ran some experiments; plan to report them this Friday or next Monday
|
|
|
Qi Qu
|
- Quantization for NPU: metrics updated [4].
- CED + classifier used as VAD: speech detection during inactive hours in dormitories.
- QAT (Quantization-Aware Training) exploration (sketch below).
|
|
|
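A minimal sketch of eager-mode quantization-aware training in PyTorch, as one possible direction for the QAT exploration above; the tiny network is a placeholder for the actual CED/VAD models and the NPU backend settings are not shown.

```python
# Hypothetical sketch of eager-mode quantization-aware training (QAT) in PyTorch.
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub,
                                   get_default_qat_qconfig, prepare_qat, convert)

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks the float -> quantized boundary
        self.conv = nn.Conv2d(1, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # marks the quantized -> float boundary

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet()
model.train()
model.qconfig = get_default_qat_qconfig("fbgemm")
qat_model = prepare_qat(model)        # inserts fake-quant observers

# ... run the usual training loop on qat_model so the weights adapt to quantization ...

qat_model.eval()
int8_model = convert(qat_model)       # produces the actual int8 model for deployment
```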