Revision as of 03:01, 29 October 2019
Flow-based Speech Analysis
- Members: Dong Wang, Haoran Sun, Yunqi Cai, Lantian Li
- Paper: Haoran Sun, Yunqi Cai, Lantian Li, Dong Wang, "On Investigation of Unsupervised Speech Factorization Based on Normalization Flow", 2019. link
- Original code we used: the PyTorch version of the Glow model by Yuki-Chai. glow-pytorch
Introduction
- We present a preliminary investigation on unsupervised speech factorization based on the normalization flow model. This model constructs a complex invertible transform, by which we can project speech segments into a latent code space where the distribution is a simple diagonal Gaussian.
- Our preliminary investigation on the TIMIT database shows that this code space exhibits favorable properties such as denseness and pseudo-linearity, and that perceptually important factors such as phonetic content and speaker traits can be represented as particular directions within the code space.
- Index Terms: speech factorization, normalization flow, deep learning
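The core idea above is that a flow model stacks invertible layers so that speech features can be mapped to a Gaussian latent code and mapped back without loss. As a minimal sketch (illustrative toy networks, not the glow-pytorch code referenced above), here is one affine coupling layer, the building block used in Glow-style flows: half of the dimensions pass through unchanged and parameterize a scale and shift applied to the other half, which makes the inverse available in closed form.

```python
import numpy as np

def coupling_forward(x, w, b):
    """Map features x to latent code z; also return log|det J|.

    The first half of each vector is left unchanged and drives a toy
    scale/shift network (one affine map plus tanh) for the second half.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    log_s = np.tanh(x1 @ w + b)          # per-dimension log-scale
    t = x1 @ w                           # per-dimension shift
    z2 = x2 * np.exp(log_s) + t
    z = np.concatenate([x1, z2], axis=-1)
    return z, log_s.sum(axis=-1)         # log-determinant of the Jacobian

def coupling_inverse(z, w, b):
    """Exact inverse: recover x from z by undoing the shift and scale."""
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    log_s = np.tanh(z1 @ w + b)          # recomputable: z1 equals x1
    t = z1 @ w
    x2 = (z2 - t) * np.exp(-log_s)
    return np.concatenate([z1, x2], axis=-1)
```

A full flow would stack many such layers with permutations in between and train the toy networks by maximizing the Gaussian log-likelihood of z plus the log-determinant term; the round trip x → z → x is exact regardless of the parameter values.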
Experimental Results
- Sample: Fig3_flow.jpg