Difference between revisions of "2024-11-04"

From cslt Wiki

(18 intermediate revisions by 16 users not shown)
Latest revision as of 10:58, 4 November 2024 (Monday)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • AI Medical sector: two chapters done
Lantian Li
  • Submit three papers supporting ICCIP 2024.
  • Continue designing 2025 AI daily posts
  • Attend CSTR 40th anniversary
Ying Shi
  • Stop strategy for Cohort Overlap ASR here
Zhenghai You
  • Huawei project (Unsuccessful IRA) [1]
  • Summarize SPK-AUG experiments[2]
Junming Yuan
  • Paper reading
  • Prepare to reproduce cocktail HuBERT (in progress)
Chen Chen
Xiaolou Li
  • Debug the Chinese VTS (already in training)
  • Process the data of CVS3
  • Write the report of VTS project (main work)
Zehua Liu
  • In-context learning (when the sentence is very long, the context seems to fail); still investigating the cause
    • 45.30% (context < 30s) | 44.69% (context = 30s) | 46.02% (context = 120s)
  • Writing the VTS project document
Pengqi Li
  • New progress on the consistency of TAO and LayerCAM. [3]
Wan Lin
  • NS: detection (editing code with Chen)
    • The model's EER decreases faster in the early epochs
Tianhao Wang
  • Investigating new approaches for target sound separation
  • Preparing the code for LoRA-tuned CLAP
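For context on the LoRA-tuned CLAP item: LoRA keeps the pretrained weight frozen and trains only a low-rank update, so the adapted layer computes W + αAB. A minimal numpy sketch of this idea (illustrative only; the actual CLAP code and its dimensions are not shown here):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass with a LoRA adapter: y = x @ (W + alpha * A @ B).

    W is the frozen pretrained weight (d_in x d_out); only the low-rank
    factors A (d_in x r) and B (r x d_out) are trained.
    """
    return x @ W + alpha * (x @ A) @ B

d_in, d_out, r = 8, 4, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))      # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01   # trainable, small random init
B = np.zeros((r, d_out))                # trainable, zero init

x = rng.normal(size=(3, d_in))
# With B initialized to zero, the adapted output equals the frozen output,
# so fine-tuning starts exactly from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W)
```

The zero initialization of B is the standard trick: training begins with no perturbation of the pretrained model, and only 2 x r x d parameters per layer are updated.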
Xiaoxue Luo
  • Prepare the report
Zhenyu Zhou
  • Reproduction: conditional TasNet [4]
Junhui Chen
  • NS with frame-level detection loss
    • Use silero-vad
    • Model is training; the EER seems to decrease faster.
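A frame-level detection loss of the kind mentioned above is typically a per-frame binary cross-entropy against speech/non-speech labels (here produced by silero-vad). A toy sketch of that loss, with illustrative names and hand-made labels rather than the actual model's code:

```python
import math

def frame_detection_loss(probs, vad_labels):
    """Binary cross-entropy averaged over frames.

    probs: per-frame detection probabilities from the model (0..1).
    vad_labels: per-frame 0/1 speech labels (e.g. from silero-vad).
    """
    eps = 1e-7
    total = 0.0
    for p, y in zip(probs, vad_labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

# Confident, correct per-frame predictions give a small loss...
low = frame_detection_loss([0.9, 0.1, 0.95], [1, 0, 1])
# ...while confident mistakes are penalized heavily.
high = frame_detection_loss([0.1, 0.9, 0.05], [1, 0, 1])
assert low < high
```

Supervising every frame, rather than one utterance-level label, gives the model a denser training signal, which is one plausible reason the EER drops faster in early epochs.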
Jiaying Wang
Yu Zhang
  • SocioDojo
    • With cash-ratio risk awareness and changed information sources, it seems to achieve decent risk control relative to the Nasdaq 100 index [5]
  • Some paper reading and a report at RoyalFlush; got some ideas (mainly about LLMs for time-series tasks)
Wenqiang Du
  • Training of new dialect models (Yi language)
Yang Wei
  • Write the text-enrollment KWS model document.
  • Prepare data and code for Aibabel data finetuning.
Lily
Turi
  • LoRA fine-tuning (results are not good)
  • Data cleaning
Yue Gu
  • Read several papers about speech tokenizers. I want to design an encoder that processes feature frames of different sizes and constructs several different codebooks, to extract personality from varying speech speed. Still in progress.
  • Paper writing
Qi Qu
  • KWS:
    • Yi (Liangshan, Sichuan) dataset prepared for training; dataset to be annotated for testing.
    • Experiments on model quantization for NPU devices: i16 quantization strikes a balance between accuracy and efficiency (~2 ms per inference vs. ~250 ms non-quantized); more calibration data is needed for further confirmation.
    • Full-featured demo (recording + feature extraction + model inference) for NPU devices in development.
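For reference on the i16 quantization experiments above: symmetric post-training quantization maps float weights to 16-bit integers with a single scale, and the reconstruction error is bounded by half the quantization step. A toy sketch (the real NPU toolchain's per-tensor calibration is more involved; names here are illustrative):

```python
def quantize_i16(weights):
    """Symmetric int16 quantization: map floats to [-32767, 32767] with one scale."""
    scale = max(abs(w) for w in weights) / 32767 or 1.0  # fall back for all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_i16(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.25, 0.003, 2.0]
q, s = quantize_i16(w)
w_hat = dequantize_i16(q, s)
# Rounding error is at most half a quantization step after dequantization.
assert all(abs(a - b) <= s / 2 for a, b in zip(w, w_hat))
assert all(-32767 <= v <= 32767 for v in q)
```

This is why more calibration data helps: the scale is set by the observed dynamic range, and an unrepresentative calibration set picks a scale that either clips activations or wastes resolution.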