New method to prune the neural network

Weishui Wan*, Kotaro Hirasawa, Jinglu Hu, Chunzhi Jin

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

2 Citations (Scopus)

Abstract

Training neural networks with the backpropagation (BP) algorithm is a widely adopted practice in both theory and applications. However, BP produces a distributed weight representation: the weight matrix of the final trained network is usually not sparse, which hinders its use for discovering rules about the inherent functional relations between the input and output data. Some form of structure optimization is therefore needed. With this in mind, this paper proposes a new method for pruning neural networks based on statistical quantities of the network. Compared with other known pruning methods such as structural learning with forgetting (SLF) and the RPROP algorithm, the proposed method attains comparable or even better results without an evident increase in computational load. Detailed simulations on the Iris data set support this assertion.
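The abstract does not spell out which statistical quantities drive the pruning, so the Python sketch below is a purely illustrative stand-in, not the paper's algorithm: each weight is scored by its magnitude times the spread (standard deviation) of its input activation over the data, and low-scoring weights are zeroed. The function name prune_by_statistic, the score itself, and the keep_ratio parameter are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup with Iris-like dimensionality: 150 samples, 4 inputs, 3 outputs.
X = rng.normal(size=(150, 4))
W = rng.normal(scale=0.5, size=(4, 8))   # input -> hidden weights
V = rng.normal(scale=0.5, size=(8, 3))   # hidden -> output weights

def forward(X, W, V):
    H = np.tanh(X @ W)        # hidden activations
    return H, H @ V           # hidden layer and raw output scores

def prune_by_statistic(W, inputs, keep_ratio=0.5):
    # Illustrative statistic (an assumption, not the paper's):
    # |w_ij| scaled by the std of the activation feeding that weight.
    score = np.abs(W) * inputs.std(axis=0)[:, None]
    cutoff = np.quantile(score, 1.0 - keep_ratio)
    # Zero out weights whose score falls below the cutoff.
    return np.where(score >= cutoff, W, 0.0)

H, _ = forward(X, W, V)
W_pruned = prune_by_statistic(W, X, keep_ratio=0.5)
V_pruned = prune_by_statistic(V, H, keep_ratio=0.5)
print("nonzero input->hidden weights:", np.count_nonzero(W_pruned), "/", W.size)
print("nonzero hidden->output weights:", np.count_nonzero(V_pruned), "/", V.size)

In practice such a pruning step would alternate with retraining so the remaining weights can compensate, which is the usual pattern in pruning methods of this family.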

Original language: English
Pages: 449-454
Number of pages: 6
Publication status: Published - 2000 Jan 1
Externally published: Yes
Event: International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy
Duration: 2000 Jul 24 - 2000 Jul 27


ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

