FastMVAE: A fast optimization algorithm for the multichannel variational autoencoder method

Li Li, Hirokazu Kameoka, Shota Inoue, Shoji Makino

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

This paper proposes a fast optimization algorithm for the multichannel variational autoencoder (MVAE) method, a recently proposed and powerful multichannel source separation technique. The MVAE method achieves good source separation performance thanks to a convergence-guaranteed optimization algorithm and the idea of jointly performing multi-speaker separation and speaker identification. However, one drawback is the high computational cost of this optimization algorithm. To overcome this drawback, this paper proposes using an auxiliary classifier VAE (ACVAE), an information-theoretic extension of the conditional VAE (CVAE), to train the generative model of the source spectrograms, and then using the trained model to efficiently update the parameters of the source spectrogram models at each iteration of the source separation algorithm. We call the proposed algorithm “FastMVAE” (or fMVAE for short). Experimental evaluations revealed that the proposed fast algorithm achieves high source separation performance in both speaker-dependent and speaker-independent scenarios while reducing computational time by more than 90% relative to the original MVAE method on both GPU and CPU. However, a performance gap of about 3 dB remains compared to the original MVAE method, leaving room for further improvement.
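To make the idea concrete, below is a minimal sketch, not the authors' implementation, of what a FastMVAE-style update could look like. It assumes a trained ACVAE with hypothetical module names (Encoder, Classifier, Decoder) and toy dimensions; the point it illustrates is that, whereas the original MVAE refines the latent variables and class labels by iterative backpropagation through the decoder at every separation iteration, the fast variant replaces that inner loop with single forward passes through the trained classifier and encoder.

# A minimal sketch (not the authors' code) of a FastMVAE-style update,
# assuming a trained ACVAE with hypothetical modules Encoder, Classifier,
# and Decoder, and illustrative tensor sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FREQ, N_CLASSES, Z_DIM = 257, 10, 16  # illustrative sizes

class Encoder(nn.Module):
    """q(z | s, c): maps a source spectrogram (and class) to a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(N_FREQ + N_CLASSES, Z_DIM)
    def forward(self, spec, c):
        return self.net(torch.cat([spec, c], dim=-1))

class Classifier(nn.Module):
    """Auxiliary classifier p(c | s): predicts the speaker class posterior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(N_FREQ, N_CLASSES)
    def forward(self, spec):
        return F.softmax(self.net(spec), dim=-1)

class Decoder(nn.Module):
    """p(s | z, c): generates the (non-negative) source spectrogram model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(Z_DIM + N_CLASSES, N_FREQ)
    def forward(self, z, c):
        return F.softplus(self.net(torch.cat([z, c], dim=-1)))

@torch.no_grad()
def fastmvae_update(enc, clf, dec, est_spec):
    """One FastMVAE-style parameter update: forward passes only,
    no inner backpropagation loop as in the original MVAE."""
    c = clf(est_spec)     # update the class posterior with the classifier
    z = enc(est_spec, c)  # update the latent variable with the encoder
    return dec(z, c)      # refreshed source spectrogram model

if __name__ == "__main__":
    enc, clf, dec = Encoder(), Classifier(), Decoder()
    est_spec = torch.rand(4, N_FREQ)           # current source estimates
    model_spec = fastmvae_update(enc, clf, dec, est_spec)
    print(model_spec.shape)                    # torch.Size([4, 257])

Because the update is a fixed number of forward passes rather than an iterative optimization, its cost per separation iteration is constant, which is consistent with the reported reduction in computational time of more than 90%.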

Original language: English
Journal: IEEE Access
DOIs
Publication status: Accepted/In press - 2020
Externally published: Yes

Keywords

  • Multichannel source separation
  • auxiliary classifier VAE
  • fast algorithm
  • multichannel variational autoencoder (MVAE)

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
