Flow-based Speech Analysis

  • Members: Dong Wang, Haoran Sun, Yunqi Cai, Lantian Li
  • Paper: Haoran Sun, Yunqi Cai, Lantian Li, Dong Wang, "On Investigation of Unsupervised Speech Factorization Based on Normalization Flow", 2019. link

Introduction

  • We present a preliminary investigation on unsupervised speech factorization based on the normalization flow model. This model constructs a complex invertible transform, by which we can project speech segments into a latent code space where the distribution is a simple diagonal Gaussian (a minimal sketch of such a flow is given after this list).
  • Our preliminary investigation on the TIMIT database shows that this code space exhibits favorable properties such as denseness and pseudo linearity, and perceptually important factors such as phonetic content and speaker trait can be represented as particular directions within the code space.
  • Index Terms: speech factorization, normalization flow, deep learning
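
The following is a minimal sketch, not the authors' implementation, of the construction described above: a stack of invertible affine coupling layers (a RealNVP-style normalization flow) maps fixed-length speech feature segments to latent codes whose prior is a diagonal Gaussian, and the exact inverse maps codes back to features. The feature dimensionality, the number of blocks, and the toy training step are illustrative assumptions, written here in PyTorch.

  import torch
  import torch.nn as nn

  class AffineCoupling(nn.Module):
      """Invertible affine coupling layer (RealNVP-style): the second half of the
      dimensions is scaled and shifted conditioned on the first half, so both the
      inverse and the log-determinant of the Jacobian are cheap to compute."""
      def __init__(self, dim, hidden=256):
          super().__init__()
          self.half = dim // 2
          self.net = nn.Sequential(
              nn.Linear(self.half, hidden), nn.ReLU(),
              nn.Linear(hidden, 2 * (dim - self.half)),
          )

      def forward(self, x):
          x1, x2 = x[:, :self.half], x[:, self.half:]
          log_s, t = self.net(x1).chunk(2, dim=1)
          log_s = torch.tanh(log_s)              # bound the scales for stability
          z2 = x2 * torch.exp(log_s) + t
          return torch.cat([x1, z2], dim=1), log_s.sum(dim=1)

      def inverse(self, z):
          z1, z2 = z[:, :self.half], z[:, self.half:]
          log_s, t = self.net(z1).chunk(2, dim=1)
          log_s = torch.tanh(log_s)
          x2 = (z2 - t) * torch.exp(-log_s)
          return torch.cat([z1, x2], dim=1)

  class Flow(nn.Module):
      """Stack of coupling layers with fixed random permutations in between,
      so that every dimension is eventually transformed."""
      def __init__(self, dim, n_blocks=6):
          super().__init__()
          self.blocks = nn.ModuleList([AffineCoupling(dim) for _ in range(n_blocks)])
          self.perms = [torch.randperm(dim) for _ in range(n_blocks)]

      def forward(self, x):
          log_det = x.new_zeros(x.size(0))
          for perm, block in zip(self.perms, self.blocks):
              x, ld = block(x[:, perm])
              log_det = log_det + ld
          return x, log_det                      # latent code z and log|det dz/dx|

      def inverse(self, z):
          for perm, block in zip(reversed(self.perms), reversed(self.blocks)):
              z = block.inverse(z)
              inv = torch.empty_like(perm)       # undo the permutation
              inv[perm] = torch.arange(perm.numel())
              z = z[:, inv]
          return z

  # Training maximizes the exact log-likelihood under a diagonal Gaussian prior:
  #   log p(x) = log N(z; 0, I) + log|det dz/dx|
  feat_dim = 40 * 10                    # e.g. 10 stacked 40-dim Fbank frames (assumed)
  flow = Flow(feat_dim)
  opt = torch.optim.Adam(flow.parameters(), lr=1e-4)

  segments = torch.randn(32, feat_dim)           # stand-in for real speech segments
  z, log_det = flow(segments)
  nll = 0.5 * (z ** 2).sum(dim=1) - log_det      # negative log-likelihood, up to a constant
  nll.mean().backward()
  opt.step()

Because the mapping is exactly invertible, a trained flow of this kind lets one shift a code along an estimated direction (for example, the difference between the mean codes of two speakers) and push the result back through flow.inverse, which is one way to probe the "particular directions" mentioned above; this is only an illustration, not a reproduction of the paper's TIMIT experiments.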