2019-01-09
Latest revision as of 04:32, 9 January 2019
| People | Last Week | This Week | Task Tracking (Deadline) |
|---|---|---|---|
| Yibo Liu | Improved iambics generation.<br>Learned about GAN and VAE. | Try to implement a VAE in the existing model.<br>Try to implement BERT.<br>Improve the quality of the iambics. | |
| Xiuqi Jiang | Went deeper into the current model and discussed the possibility of using a VAE.<br>Song iambics can now be generated, and different styles of tunes are available. | Polish up the iambics parts. | |
| Jiayao Wu | Finished the speech book -- [kws](http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/c5/Kws.pdf).<br>Running node-pruning experiments on the WSJ chain model. | Continue the node-pruning research. | |
| Zhaodi Qi | Ran LID experiments (Chinese, English, and Japanese) while keeping configs_size within 20M; they are still running.<br>Data segmentation completed; waiting for testing. | Finish the speech book.<br>Complete the TDNN system performance comparison on test sets of different lengths. | |
| Jiawei Yu | Finished the max-margin recipe.<br>Wrote the speech book.<br>Learned to use TensorFlow for the attention experiment. | Finish the emotion-recognition speech book.<br>Keep learning TensorFlow and move the max-margin experiment to this platform. | |
| Yunqi Cai | Ran the WSJ data; read papers about RNN-LMs. | Use an RNN-LM to rescore the WSJ data. | |
| Dan He | Compared the inference time, but the results are not good.<br>After TT-decomposing the two fully-connected layers, the test accuracy turned out to be very low. | Based on the problems found in the experiments, continue the comparative experiments and analyze the causes. | |
| Yang Zhang | Wrote a brief [document](https://github.com/zyzisyz/VPR-wx-client).<br>Submitted the source code to [GitLab](https://gitlab.com/zyzisyz/nebula-listen). | Revise my school subjects and prepare for the final-term examinations. | |