| People |
This Week |
Next Week |
Task Tracking (Deadline)
|
| Dong Wang
|
|
|
|
| Lantian Li
|
|
|
|
| Ying Shi
|
|
|
|
| Zhenghai You
|
|
|
|
| Junming Yuan
|
- paper reading
- preparing to reproduce Cocktail HuBERT (in progress)
|
|
|
| Chen Chen
|
|
|
|
| Xiaolou Li
|
- Debugging the Chinese VTS model (already in training)
- Writing the VTS project report (main task)
|
|
|
| Zehua Liu
|
|
|
|
| Pengqi Li
|
|
|
|
| Wan Lin
|
|
|
|
| Tianhao Wang
|
|
|
|
| Xiaoxue Luo
|
|
|
|
| Zhenyu Zhou
|
|
|
|
| Junhui Chen
|
- NS with frame-level detection loss
- using silero-vad
- The model is training; the EER seems to decrease faster.
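A frame-level detection loss needs frame-level targets. As a hedged sketch (an illustration, not the actual training code): silero-vad returns speech segments as start/end sample indices, which can be expanded into per-frame 0/1 labels. The hop size and segment format below are assumptions.

```python
# Hypothetical sketch: convert VAD speech segments (e.g. silero-vad style
# {'start': ..., 'end': ...} sample indices) into frame-level 0/1 targets
# suitable for a frame-level detection loss.

def segments_to_frame_labels(segments, num_frames, hop):
    """Label frame i as speech (1) if its start sample lies in a segment."""
    labels = [0] * num_frames
    for i in range(num_frames):
        t = i * hop  # start sample of frame i
        for seg in segments:
            if seg["start"] <= t < seg["end"]:
                labels[i] = 1
                break
    return labels

# Toy example: two speech segments at 16 kHz, 100 ms hop (1600 samples).
segs = [{"start": 0, "end": 3200}, {"start": 8000, "end": 9600}]
labels = segments_to_frame_labels(segs, num_frames=8, hop=1600)
# labels == [1, 1, 0, 0, 0, 1, 0, 0]
```

These targets can then be compared frame by frame against the model's posteriors with a binary cross-entropy loss.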
|
|
|
| Jiaying Wang
|
|
|
|
| Yu Zhang
|
- SocioDojo
- Added cash-ratio risk awareness and changed the information sources; the strategy seems to show decent risk control relative to the Nasdaq 100 index [1]
- Some paper reading and a report at RoyalFlush; got some ideas (mainly about LLMs for time-series tasks)
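One common form of cash-ratio risk control is to hold more cash when recent returns are volatile. The rule below is a hypothetical sketch of that idea, not the SocioDojo implementation; the target-volatility parameter is an assumption.

```python
# Hypothetical cash-ratio rule (illustration only, not SocioDojo's method):
# scale equity exposure down when realized volatility exceeds a target,
# and park the remainder in cash.

def realized_vol(returns):
    """Population standard deviation of a window of recent returns."""
    m = sum(returns) / len(returns)
    var = sum((r - m) ** 2 for r in returns) / len(returns)
    return var ** 0.5

def cash_ratio(returns, target_vol=0.01):
    """Cash fraction = 1 - exposure, exposure capped at 100%."""
    vol = realized_vol(returns)
    exposure = min(1.0, target_vol / vol) if vol > 0 else 1.0
    return 1.0 - exposure

calm = [0.001, -0.002, 0.001, 0.000]
choppy = [0.03, -0.04, 0.02, -0.03]
# cash_ratio(calm) == 0.0 (fully invested); cash_ratio(choppy) > 0.5
```

The design choice is that risk control acts on position size rather than on the signal itself, so the information sources can change without touching the risk layer.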
|
|
|
| Wenqiang Du
|
|
|
|
| Yang Wei
|
|
|
|
| Lily
|
|
|
|
| Turi
|
- LoRA finetuning (results are not good)
- Data cleaning
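For context on what LoRA finetuning trains, here is a minimal sketch of the idea (an illustration, not Turi's actual setup): the frozen weight `W` is augmented with a low-rank update `(alpha/r) * B @ A`, and only `A` and `B` are updated. Shapes and values below are toy assumptions.

```python
# Minimal LoRA forward pass in pure Python (hypothetical toy example).
# In LoRA, B starts at zero so training begins exactly at the base model.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """y = W x + (alpha / r) * B (A x); only A and B would be trained."""
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]

W = [[1.0, 0.0], [0.0, 1.0]]     # frozen pretrained weight (toy 2x2)
A = [[0.1, 0.2], [0.3, 0.4]]     # r x d_in, randomly initialized
B = [[0.0, 0.0], [0.0, 0.0]]     # d_out x r, zero-initialized
y = lora_forward(W, A, B, [1.0, 2.0])
# With B = 0 the output equals the base model's: [1.0, 2.0]
```

Because only the small `A`/`B` matrices carry gradients, poor results often trace back to rank, scaling, or data quality rather than the base model, which is one reason data cleaning is a natural next step.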
|
|
|
| Yue Gu
|
- Read several papers on speech tokenizers. I want to design an encoder that processes feature frames of different sizes and builds several different codebooks, to extract personality from varying speech speed. Still in progress.
- paper writing
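The multi-frame-size idea can be pictured as slicing the same signal at several window sizes so that each branch feeds its own codebook. The sketch below is a hypothetical illustration of that framing step only; the window sizes and non-overlapping hop are assumptions, not the proposed encoder.

```python
# Hypothetical multi-resolution framing (illustration of the idea only).

def frame(signal, size, hop):
    """Strided framing; drops any incomplete tail frame."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def multi_scale_frames(signal, sizes=(2, 4)):
    """One frame stream per window size (hop = size, i.e. no overlap)."""
    return {s: frame(signal, s, s) for s in sizes}

sig = [1, 2, 3, 4, 5, 6, 7, 8]
streams = multi_scale_frames(sig)
# streams[2] -> [[1, 2], [3, 4], [5, 6], [7, 8]]
# streams[4] -> [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Each stream would then be quantized against its own codebook, so fast and slow speaking rates are captured at different temporal resolutions.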
|
|
|
| Qi Qu
|
- KWS:
- Yi (Liangshan, Sichuan) dataset prepared for training; a test set still needs annotation.
- Experiments on model quantization for NPU devices: i16 quantization strikes a balance between accuracy and efficiency (~2 ms per inference vs. ~250 ms non-quantized); more calibration data is needed for further confirmation.
- A full-featured demo (recording + feature extraction + model inference) for NPU devices is in development.
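To illustrate why calibration data matters for i16 quantization, here is a hedged sketch of a symmetric scheme (an assumption for illustration, not the NPU toolchain's actual algorithm): a single scale is derived from calibration samples, and values outside the calibrated range get clamped.

```python
# Hypothetical symmetric int16 quantization (illustration only).

def calibrate_scale(samples):
    """Scale chosen so the largest calibration value maps to int16 max."""
    return max(abs(s) for s in samples) / 32767

def quantize_i16(x, scale):
    q = round(x / scale)
    return max(-32767, min(32767, q))  # clamp to the int16 range

def dequantize(q, scale):
    return q * scale

calib = [-0.5, 0.25, 1.0]          # calibration samples (toy values)
scale = calibrate_scale(calib)
x = dequantize(quantize_i16(0.5, scale), scale)
# x is very close to 0.5; inputs beyond the calibration range are clamped,
# which is why more calibration data tightens the accuracy estimate.
```

With 16 bits the rounding error per value is tiny, so in this scheme accuracy loss mostly comes from clamping, i.e. from a calibration set that fails to cover the real input range.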
|
|
|