Deep Speech Factorization-2
Introduction

Speech signals involve complex factors, each contributing in an unknown and hidden way. Recently developed deep learning methods offer interesting tools for discovering these latent factors, including unsupervised models such as the VAE and the GAN, and supervised methods such as multi-task learning and knowledge distillation. These tools allow us to decipher the secrets of speech signals from big data, rather than from hand-crafted hypotheses.
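
For concreteness, the VAE mentioned above learns latent factors by maximizing the evidence lower bound (ELBO) from the Kingma et al. paper listed under Further reading; for an observed frame x and latent factor z:

  \mathcal{L}(\theta,\phi;x) = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - D_{\mathrm{KL}}\left(q_\phi(z|x)\,\|\,p(z)\right)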

These advances may lead to an unprecedented breakthrough in speech information processing. Some of the signals of this breakthrough include:

  • In speaker recognition, speaker factors can be learned from very short speech segments.
  • In speech synthesis, speaking styles can be learned as latent variables in an unsupervised way, and speaker factors can be used to change the speaker trait.
  • In speech recognition, learning multiple tasks in a collaborative way has been shown to be successful.

In previous studies (Phase 1), we found that with cascade learning, speech signals can be factorized into content, speaker, and emotion factors at the frame level. In Phase 2, we will try to answer the following questions:

  • Can we factorize speech signals in an unsupervised way?
  • How can supervised and unsupervised factorization be integrated?
  • How should we deal with language discrepancy in factorization?
  • How can we discover optimal factorization architectures?

People

Dong Wang, Yunqi Cai, Haoran Sun, Zhiyuan Tang, Lantian Li

Research direction

Basic research

  • Collaborative learning with AutoML
  • VAE/dVAE factorization (a minimal VAE sketch follows this list)
  • Supervised VAE for factorization
  • ASR + TTS cycle training
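
As a reference point for the VAE/dVAE direction, below is a minimal frame-level VAE sketch in PyTorch. It is illustrative only: the 40-dimensional Fbank input, the content/speaker split of the latent code, and all layer sizes are assumptions of this sketch, not settled project choices.

  # Minimal frame-level VAE sketch (illustrative; all sizes are assumptions).
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class FrameVAE(nn.Module):
      def __init__(self, feat_dim=40, hidden=256, z_content=32, z_speaker=16):
          super().__init__()
          z_dim = z_content + z_speaker  # latent code split into content + speaker parts
          self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
          self.mu = nn.Linear(hidden, z_dim)
          self.logvar = nn.Linear(hidden, z_dim)
          self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, feat_dim))

      def forward(self, x):
          h = self.enc(x)
          mu, logvar = self.mu(h), self.logvar(h)
          z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
          return self.dec(z), mu, logvar

  def elbo_loss(x, x_hat, mu, logvar):
      # Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I)).
      recon = F.mse_loss(x_hat, x, reduction='sum')
      kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
      return recon + kl

  # One gradient step on a batch of random "frames" (stand-ins for Fbank features).
  model = FrameVAE()
  opt = torch.optim.Adam(model.parameters(), lr=1e-3)
  x = torch.randn(8, 40)
  opt.zero_grad()
  x_hat, mu, logvar = model(x)
  loss = elbo_loss(x, x_hat, mu, logvar)
  loss.backward()
  opt.step()

Supervising part of the latent code (e.g., with speaker labels) would turn this into the supervised VAE mentioned above; as written, the sketch is fully unsupervised.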

Applied research

  • Pretraining for ASR, SID, EMD (BERT in speech; a toy sketch follows this list)
  • Low-resource ASR, TTS
  • Signal compression, cleaning up, etc.
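
To make the "BERT in speech" idea above concrete, here is a toy masked-frame pretraining sketch: random frames are masked and a small Transformer is trained to reconstruct them. The model, the masking rate, and the zero-masking scheme are all illustrative assumptions, not a description of an existing system.

  # Toy masked-frame pretraining sketch ("BERT in speech"); illustrative only.
  import torch
  import torch.nn as nn

  class MaskedFramePredictor(nn.Module):
      def __init__(self, feat_dim=40, d_model=128):
          super().__init__()
          self.proj = nn.Linear(feat_dim, d_model)
          layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
          self.encoder = nn.TransformerEncoder(layer, num_layers=2)
          self.head = nn.Linear(d_model, feat_dim)

      def forward(self, x):
          return self.head(self.encoder(self.proj(x)))

  model = MaskedFramePredictor()
  x = torch.randn(4, 100, 40)                  # (batch, frames, features)
  mask = torch.rand(4, 100) < 0.15             # mask roughly 15% of frames
  x_in = x.clone()
  x_in[mask] = 0.0                             # zero out the masked frames
  pred = model(x_in)
  loss = ((pred[mask] - x[mask]) ** 2).mean()  # reconstruct only the masked frames
  loss.backward()

The pretrained encoder could then be fine-tuned for ASR, SID, or emotion detection, which is the intent of the pretraining direction listed above.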


Related publications

  1. Yang Zhang, Lantian Li, and Dong Wang, "VAE-based regularization for deep speaker embedding", Interspeech 2019.
  2. Lantian Li, Yixiang Chen, Ying Shi, Zhiyuan Tang, and Dong Wang, "Deep speaker feature learning for text-independent speaker verification", Interspeech 2017.
  3. Lantian Li, Dong Wang, Yixiang Chen, Ying Shi, and Zhiyuan Tang, http://wangd.cslt.org/public/pdf/spkfact.pdf
  4. Lantian Li, Zhiyuan Tang, and Dong Wang, "Full-info training for deep speaker feature learning", http://wangd.cslt.org/public/pdf/mlspk.pdf
  5. Zhiyuan Tang, Lantian Li, Dong Wang, and Ravi Vipperla, "Collaborative Joint Training with Multi-task Recurrent Model for Speech and Speaker Recognition", IEEE Trans. on Audio, Speech and Language Processing, vol. 25, no. 3, March 2017.
  6. Dong Wang, Lantian Li, Ying Shi, Yixiang Chen, and Zhiyuan Tang, "Deep Factorization for Speech Signal", https://arxiv.org/abs/1706.01777

Further reading

Linguistics

Hiroya Fujisaki, "Prosody, models, and spontaneous speech," in Computing Prosody, pp. 27-42, Springer, 1997.

ML

  1. Goodfellow et al., "Generative adversarial nets", 2014.
  2. Kingma et al., "Auto-encoding variational Bayes", 2014.
  3. Danilo Jimenez Rezende et al., "Variational Inference with Normalizing Flows", 2016.
  4. Kingma et al., "Improving Variational Inference with Inverse Autoregressive Flow", 2016.
  5. Zhu et al., "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks", 2017.
  6. Chen et al., "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", 2016.
  7. Hu et al., "On unifying deep generative models", 2017.
  8. Makhzani, "Adversarial Autoencoders", 2015.
  9. Oord, "Neural Discrete Representation Learning", 2017.

ASR

  1. Chan et al., "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", 2016.
  2. Prabhavalkar et al., "A Comparison of Sequence-to-Sequence Models for Speech Recognition", 2017.
  3. Chiu et al., "State-of-the-art Speech Recognition With Sequence-to-Sequence Models", 2018.
  4. Pratap et al., "wav2letter++: The Fastest Open-source Speech Recognition System", 2018.
  5. Ren et al., "Almost Unsupervised Text to Speech and Automatic Speech Recognition", 2019.
  6. Tsai et al., "Learning Factorized Multimodal Representations", 2019.

SID

  1. E. Variani, X. Lei, E. McDermott, I. Lopez Moreno, and J. Gonzalez-Dominguez, "Deep neural networks for small footprint text-dependent speaker verification", 2014.
  2. G. Heigold, I. Moreno, S. Bengio, and N. Shazeer, "End-to-end text-dependent speaker verification", 2016.


TTS

  1. Wang et al., "Tacotron: A fully end-to-end text-to-speech synthesis model", CoRR, abs/1703.10135, 2017.
  2. van den Oord et al., "Parallel WaveNet: Fast high-fidelity speech synthesis", CoRR, abs/1711.10433, 2017.
  3. van den Oord et al., "WaveNet: A generative model for raw audio", CoRR, abs/1609.03499, 2016.
  4. Nal Kalchbrenner et al., "Efficient Neural Audio Synthesis", 2018 (WaveRNN)
  5. Hsu et al., "Disentangling Correlated Speaker and Noise for Speech Synthesis via Data Augmentation and Adversarial Factorization", NIPS 2018.


Tools