Difference between revisions of "2025-10-27"
From cslt Wiki
(10 intermediate revisions by 10 users not shown)
Line 6:
|Dong Wang
||
− *
+ * AI handbook for primary schools, 4th grade.
+ * Talk at Renmin Middle School.
||
Line 33 → Line 35:
|Ying Shi
||
− *
+ * prepare for interview
||
*
Line 68 → Line 70:
|Xiaolou Li
||
− *
+ * Mid-term framework and report
||
*
Line 79 → Line 81:
|Zehua Liu
||
− *
+ * Mid-term framework and report
||
*
Line 104 → Line 106:
|Wan Lin
||
− *
+ * PhD application
+ * MD task
+ ** Reproduce previous framework in shiyin's code: works normally
+ ** Prepare for similar-phoneme pronunciation replacement
+ * NC's report
||
*
Line 115 → Line 121:
|Tianhao Wang
||
− *
+ * mid-term framework & report
||
*
Line 126 → Line 132:
|Xiaoxue Luo
||
− *
+ * training of attractor-based speech separation
+ ** the training loss is normal, but training is too slow; I plan to port the current code from Chainer to PyTorch (see the sketch after this entry)
+ * Proposal framework and report
||
*
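For the Chainer-to-PyTorch port mentioned in Xiaoxue Luo's entry, most of the work is rewriting chainer.Chain modules as torch.nn.Module subclasses. Below is a minimal, hypothetical sketch of what an encoder-decoder attractor estimator could look like on the PyTorch side; the class name, layer sizes, and masking step are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of an attractor estimator after a Chainer -> PyTorch port.
# Class name, layer sizes, and the masking step are assumptions, not the
# project's actual code.
import torch
import torch.nn as nn


class AttractorEstimator(nn.Module):
    """Encoder-decoder attractor: summarize frame embeddings with one LSTM,
    then decode one attractor vector per query step (roughly the EDA idea)."""

    def __init__(self, emb_dim: int = 256):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, emb_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, emb_dim, batch_first=True)

    def forward(self, frame_emb: torch.Tensor, n_attractors: int) -> torch.Tensor:
        # frame_emb: (batch, time, emb_dim)
        _, state = self.encoder(frame_emb)            # summarize the whole sequence
        queries = frame_emb.new_zeros(
            frame_emb.size(0), n_attractors, frame_emb.size(2))
        attractors, _ = self.decoder(queries, state)  # (batch, n_attractors, emb_dim)
        return attractors


if __name__ == "__main__":
    est = AttractorEstimator(emb_dim=256)
    emb = torch.randn(2, 100, 256)                    # dummy frame embeddings
    att = est(emb, n_attractors=2)
    # one separation mask per source: sigmoid of frame-attractor dot products
    masks = torch.sigmoid(emb @ att.transpose(1, 2))  # (batch, time, n_attractors)
    print(att.shape, masks.shape)
```

Keeping the module boundaries identical to the Chainer version makes it possible to load and compare intermediate activations while debugging the slow-training issue.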
Line 140 → Line 148:
** finish Baseline & Reflexion metric collection (without analysis yet)
** continue reading papers from COLM & NIPS 2025 (9/26)
− ** proposal discussing
+ ** proposal discussion with @ZhangYu
||
*
Line 151 → Line 159:
|Jiaying Wang
||
− *
+ * revising the paper "A Loudness-Perceptual-Biased Speech Separation Method"
+ * mid-term report
||
*
Line 191 → Line 200:
|Yang Wei
||
− *
+ * Reproduce shiying's experiment on LibriSpeech (train/test: random data augmentation; ROC AUC 0.99)
+ * Training with another data augmentation method: replace a word with one of similar pronunciation from the lexicon (still training; see the sketch after this entry)
||
*
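The similar-pronunciation substitution in Yang Wei's second item can be sketched roughly as below: look up each word's phone sequence in a CMUdict-style lexicon and swap it for a word whose pronunciation is within a small phone edit distance. The lexicon format, the distance threshold, and the function names are assumptions for illustration, not the actual augmentation script.

```python
# Hypothetical sketch of lexicon-based "similar pronunciation" word substitution
# for data augmentation. Lexicon format (CMUdict-style), threshold, and function
# names are assumptions, not the project's actual code.
import random


def load_lexicon(path):
    """Parse lines like 'WORD  P1 P2 P3' into {word: [phones]}."""
    lex = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and not parts[0].startswith(";;;"):
                lex[parts[0]] = parts[1:]
    return lex


def phone_edit_distance(a, b):
    """Plain Levenshtein distance over phone sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (pa != pb))
    return dp[-1]


def similar_words(word, lexicon, max_dist=1):
    """Words whose pronunciation differs from `word` by at most `max_dist` phones."""
    phones = lexicon.get(word)
    if phones is None:
        return []
    return [w for w, p in lexicon.items()
            if w != word and phone_edit_distance(phones, p) <= max_dist]


def augment(transcript, lexicon, prob=0.1):
    """Randomly replace words with similar-sounding ones to create negatives."""
    out = []
    for w in transcript.upper().split():
        cands = similar_words(w, lexicon) if random.random() < prob else []
        out.append(random.choice(cands) if cands else w)
    return " ".join(out)
```

Scanning the whole lexicon per word is O(|lexicon|); a real script would pre-index candidates by phone-sequence length or a coarse phonetic key before training-time use.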
Line 202 → Line 212:
|Yue Gu
||
− *
+ * job hunting and PhD thesis
+ * Mispronunciation task: (1) use ChatGPT to generate negative samples with similar pronunciation; (2) compute the similarity matrix between speech embeddings and phone embeddings, then run a dynamic-programming algorithm on it to find the maximum cumulative-similarity path; mispronunciations correspond to lower similarity scores (see the sketch after this entry).
||
*
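The dynamic-programming step in Yue Gu's second item is essentially a DTW-style alignment that maximizes cumulative similarity rather than minimizing distance. A minimal sketch under that reading is below; the cosine similarity, the allowed moves, and the per-phone scoring are illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of the DP alignment over a speech-phone similarity matrix.
# Cosine similarity, allowed moves, and the per-phone scoring are assumptions,
# not the actual implementation.
import numpy as np


def similarity_matrix(speech_emb: np.ndarray, phone_emb: np.ndarray) -> np.ndarray:
    """Cosine similarity between speech frames (T, D) and phone embeddings (N, D)."""
    s = speech_emb / (np.linalg.norm(speech_emb, axis=1, keepdims=True) + 1e-8)
    p = phone_emb / (np.linalg.norm(phone_emb, axis=1, keepdims=True) + 1e-8)
    return s @ p.T  # (T, N)


def max_similarity_path(sim: np.ndarray):
    """DTW-style DP that maximizes cumulative similarity along a monotonic
    frame-to-phone alignment. Returns the path and the per-phone average
    similarity, which should drop on mispronounced phones."""
    T, N = sim.shape
    acc = np.full((T, N), -np.inf)
    acc[0, 0] = sim[0, 0]
    for t in range(1, T):
        acc[t, 0] = acc[t - 1, 0] + sim[t, 0]             # stay on the first phone
    for t in range(1, T):
        for n in range(1, N):
            acc[t, n] = sim[t, n] + max(acc[t - 1, n],        # stay on phone n
                                        acc[t - 1, n - 1])    # advance to phone n
    # backtrack the best path from the final frame/phone
    t, n = T - 1, N - 1
    path = [(t, n)]
    while t > 0:
        if n > 0 and acc[t - 1, n - 1] >= acc[t - 1, n]:
            n -= 1
        t -= 1
        path.append((t, n))
    path.reverse()
    # average similarity of the frames aligned to each phone
    scores = np.zeros(N)
    counts = np.zeros(N)
    for t, n in path:
        scores[n] += sim[t, n]
        counts[n] += 1
    return path, scores / np.maximum(counts, 1)
```

An utterance-level decision could then threshold the minimum (or mean) per-phone score, since mispronounced phones show up as low entries in the returned scores.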
Latest revision as of 10:57, 27 October 2025
| People | This Week | Next Week | Task Tracking (DeadLine) |
|---|---|---|---|
| Dong Wang | | | |
| Lantian Li | | | |
| Ying Shi | | | |
| Zhenghai You | | | |
| Junming Yuan | | | |
| Xiaolou Li | | | |
| Zehua Liu | | | |
| Pengqi Li | | | |
| Wan Lin | | | |
| Tianhao Wang | | | |
| Xiaoxue Luo | | | |
| Junhui Chen | | | |
| Jiaying Wang | | | |
| Yu Zhang | | | |
| Wenqiang Du | | | |
| Yang Wei | | | |
| Yue Gu | | | |
| Qi Qu | | | |
| Lily | | | |