Difference between revisions of "2021-12-13"
From cslt Wiki
Jianghaoyu (talk | contribs)
(15 intermediate revisions by 10 users not shown)

Latest revision as of 11:24, 13 December 2021 (Mon)

| People | This Week | Next Week | Task Tracking (DeadLine) |
|---|---|---|---|
| Dong Wang | Spoof paper refined<br>Start the hard trials paper | Hard trials paper | |
| Yunqi Cai | Image fusion network construction<br>Infra experiments plan for interns<br>Bayesian optimization paper review | | |
| Lantian Li | Refine AI course v2<br>Check spoof paper<br>Finish my defences | Finish ETM response<br>Experiments on hard trials | |
| Ying Shi | Test fncmd and speech engrave on huawei_cross_channel data ([here](http://cslt.riit.tsinghua.edu.cn/mediawiki/images/8/82/Speech_engrave_fncmd_huawei_cross.png)) | Retrain the speech engrave model so that speech engrave and fncmd are comparable on the far-field test set: Huawei cross-channel data; score margin; discriminative training<br>Retrain the fncmd model with Huawei data | |
| Haoran Sun | Some analysis of c-vector<br>Training process of c-vector | Remove the F0 decoder of c-vector<br>A simpler model with only content and speaker encoders, based on the long/short-term assumption | |
| Chen Chen | Perform k-means and PCA on wav2vec results<br>Check GAN | Fix a bug in uasr_model | |
| Pengqi Li | More experiments and analysis on this method | | |
| Weida Liang | | | |
| Zixi Yan | Fine-tune the wav2vec model on dev-other<br>Test the effect of the Tibetan-adjusted model | | |
| Sirui Li | Compare the effects of TIMIT and Tibetan fine-tuning | More comparative experiments | |
| Haoyu Jiang | Resample the data | Set thresholds to divide the data<br>Check the sampled images | |
| Renmiao Chen | Choose thresholds for dividing high-, mid-, and low-confidence data<br>Check the thresholds<br>Use SpeechBrain for the IDR task | Do more tasks with the data<br>Finish the report | |
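The "k-means and PCA on wav2vec results" item in Chen Chen's row can be illustrated with a minimal sketch. This is not the group's actual pipeline: the features below are synthetic stand-ins for wav2vec frame features, and `pca`/`kmeans` are small NumPy implementations written only so the example is self-contained.

```python
import numpy as np

# Hypothetical stand-in for wav2vec frame features: in the real pipeline
# these would come from the pretrained model; here we draw two synthetic
# clusters so the example runs on its own.
rng = np.random.default_rng(0)
feats = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 64)),
    rng.normal(5.0, 1.0, size=(200, 64)),
])

def pca(x, n_components):
    """Project x onto its top principal components via SVD."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

def kmeans(x, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means; returns one cluster label per row of x."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels

reduced = pca(feats, 2)          # 64-dim frames -> 2-dim for inspection
labels = kmeans(reduced, k=2)    # discrete units from the reduced features
```

In practice one would swap the synthetic `feats` for features extracted from the wav2vec checkpoint and inspect the resulting clusters.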
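The threshold-based data division mentioned in Haoyu Jiang's and Renmiao Chen's rows (splitting samples into high-, mid-, and low-confidence sets) can be sketched as below. The score values and the 0.9/0.5 thresholds are illustrative assumptions, not values taken from the report.

```python
# Illustrative confidence scores; in the real pipeline these would come
# from a model's posteriors. The 0.9/0.5 thresholds are assumptions.
SCORES = [0.98, 0.95, 0.72, 0.51, 0.40, 0.10]

def split_by_confidence(scores, high=0.9, low=0.5):
    """Partition sample indices into high/mid/low-confidence buckets."""
    buckets = {"high": [], "mid": [], "low": []}
    for i, s in enumerate(scores):
        if s >= high:
            key = "high"
        elif s >= low:
            key = "mid"
        else:
            key = "low"
        buckets[key].append(i)
    return buckets

buckets = split_by_confidence(SCORES)
```

The chosen thresholds would then be validated by spot-checking samples from each bucket, as the "check the thresholds" item suggests.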