Abstract
Neural networks have attracted attention for their capability to perform nonlinear function approximation. In this paper, to better understand this capability, a new theorem on an integral transform is derived by applying ridge functions to neural networks. From the theorem, approximation bounds can be obtained that clarify the quantitative relationship between the function approximation accuracy and the number of nodes in the hidden layer. The theorem indicates that the approximation accuracy depends on the smoothness of the target function. Furthermore, the theorem also shows that this type of approximation method differs from the usual methods and is able to escape the so-called "curse of dimensionality," in which the approximation accuracy depends strongly on the input dimension of the function and deteriorates exponentially as that dimension grows.
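To make the abstract's setting concrete, the sketch below shows the network form it refers to: a one-hidden-layer network expressed as a superposition of ridge functions, i.e. functions constant on hyperplanes orthogonal to a direction vector. This is a minimal illustration of the architecture only; the parameter names (`A`, `b`, `c`) and the use of `tanh` as the activation are assumptions for the example, not details from the paper.

```python
import numpy as np

def ridge_network(x, A, b, c, sigma=np.tanh):
    """One-hidden-layer network as a sum of ridge functions:
        f(x) = sum_i c_i * sigma(a_i . x + b_i)
    Each hidden node sigma(a_i . x + b_i) is a ridge function: it varies
    only along the direction a_i and is constant on hyperplanes
    orthogonal to a_i.

    x: input vector of shape (d,)
    A: hidden-node directions, shape (n, d)
    b: hidden-node biases, shape (n,)
    c: output weights, shape (n,)
    """
    return c @ sigma(A @ x + b)

# Example: a network with n hidden nodes on a d-dimensional input.
rng = np.random.default_rng(0)
d, n = 5, 8                       # input dimension, hidden-layer size
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
c = rng.standard_normal(n)
x = rng.standard_normal(d)
y = ridge_network(x, A, b, c)     # scalar network output
```

The approximation bounds discussed in the abstract concern how the error of the best such `n`-node superposition shrinks as `n` grows, with a rate governed by the smoothness of the target function rather than by the input dimension `d`.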
| Original language | English |
| ---|--- |
| Pages (from-to) | 23-33 |
| Number of pages | 11 |
| Journal | Electronics and Communications in Japan, Part III: Fundamental Electronic Science (English translation of Denshi Tsushin Gakkai Ronbunshi) |
| Volume | 79 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1996 Mar |
| Externally published | Yes |
Keywords
- Curse of dimensionality
- Integral transform
- Neural networks
- Nonlinear function approximation
- Ridge functions
ASJC Scopus subject areas
- Electrical and Electronic Engineering