Speech coding based on a multi-layer neural network

Shigeo Morishima*, Hiroshi Harashima, Yasuo Katayama

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

5 Citations (Scopus)

Abstract

The authors present a speech-compression scheme based on a three-layer perceptron in which the number of units in the hidden layer is reduced. The input and output layers have the same number of units so that the network realizes an identity mapping. Speech coding is performed by scalar or vector quantization of the hidden-layer outputs. Analysis of the weighting coefficients shows that speech coding based on a three-layer neural network is speaker-independent, and that the transform coding is obtained automatically through back propagation. The relation between compression ratio and SNR (signal-to-noise ratio) is investigated, and the bit allocation and the optimum number of hidden-layer units needed to realize a given bit rate are given. The same analysis shows that neural-network speech coding is a transform coding similar to the Karhunen-Loeve transform. The characteristics of a five-layer neural network are also examined; because the five-layer network can realize nonlinear mapping, it is shown to be more effective than the three-layer network.
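The bottleneck-autoencoder coder described above is straightforward to sketch. The following Python example is illustrative only: the frame length, hidden-layer size, tanh activation, learning rate, and 4-bit scalar quantizer are assumptions, not values from the paper. It trains a three-layer identity-mapping network by back propagation and codes frames by scalar quantization of the hidden-layer outputs.

```python
# Minimal sketch of a three-layer identity-mapping (autoencoder) speech coder.
# All sizes and hyperparameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

FRAME_LEN = 16   # input/output units per frame (assumed)
HIDDEN = 4       # reduced hidden layer -> 4:1 dimensionality reduction (assumed)
LR = 0.05        # learning rate (assumed)

# Weights of the three-layer perceptron.
W1 = rng.normal(scale=0.1, size=(FRAME_LEN, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, FRAME_LEN))
b2 = np.zeros(FRAME_LEN)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden-layer outputs (the values to be coded)
    y = h @ W2 + b2            # linear reconstruction of the input frame
    return h, y

def train_step(batch):
    """One back-propagation step toward identity mapping on a frame batch."""
    global W1, b1, W2, b2
    h, y = forward(batch)
    err = y - batch                       # reconstruction error
    gW2 = h.T @ err / len(batch)          # output-layer gradients
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # back-propagate through tanh
    gW1 = batch.T @ dh / len(batch)
    gb1 = dh.mean(axis=0)
    W1 -= LR * gW1; b1 -= LR * gb1
    W2 -= LR * gW2; b2 -= LR * gb2
    return float((err ** 2).mean())

def encode(frames, bits=4):
    """Scalar-quantize hidden-layer outputs (tanh range [-1, 1]) to `bits`."""
    h, _ = forward(frames)
    levels = 2 ** bits - 1
    return np.round((h + 1.0) / 2.0 * levels).astype(int)

def decode(codes, bits=4):
    levels = 2 ** bits - 1
    h = codes / levels * 2.0 - 1.0
    return h @ W2 + b2

# Toy stand-in for speech: frames cut from a noisy sinusoid.
t = np.arange(4000)
signal = np.sin(0.3 * t) + 0.1 * rng.normal(size=t.size)
frames = signal[: t.size // FRAME_LEN * FRAME_LEN].reshape(-1, FRAME_LEN)

for _ in range(200):
    mse = train_step(frames)

rec = decode(encode(frames))
snr = 10 * np.log10((frames ** 2).mean() / ((frames - rec) ** 2).mean())
print(f"training MSE {mse:.4f}, coded SNR {snr:.1f} dB")
```

With linear hidden units, a bottleneck network trained for identity mapping converges to the principal subspace of its input, which is the source of the Karhunen-Loeve similarity noted in the abstract; the tanh units here simply stand in for the sigmoid-type units typical of such networks.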

Original language: English
Pages (from-to): 429-433
Number of pages: 5
Journal: Conference Record - International Conference on Communications
Volume: 2
Publication status: Published - 1990 Dec 1
Externally published: Yes
Event: IEEE International Conference on Communications - ICC '90 Part 2 (of 4) - Atlanta, GA, USA
Duration: 1990 Apr 16 - 1990 Apr 19

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Electrical and Electronic Engineering
