Neural networks have attracted attention for their capability to perform nonlinear function approximation. To better understand this capability, this paper derives a new theorem on an integral transform obtained by applying ridge functions to neural networks. From the theorem, approximation bounds can be obtained that clarify the quantitative relationship between the function approximation accuracy and the number of nodes in the hidden layer. The theorem indicates that the approximation accuracy depends on the smoothness of the target function. Furthermore, the theorem also shows that this type of approximation differs from the usual methods and can escape the so-called "curse of dimensionality," in which the approximation accuracy depends strongly on the input dimension of the function and deteriorates exponentially as that dimension grows.
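The abstract concerns approximating a target function by a hidden layer of ridge functions, i.e. units of the form g(a·x + b), with accuracy improving as the number of hidden nodes grows. The following is a minimal illustrative sketch of that idea, not the paper's method: it fits an assumed smooth two-variable target with randomly drawn tanh ridge units, solving only for the output weights by least squares, and reports the fit error for several hidden-layer sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example target: a smooth function of two variables
# (chosen for illustration; not taken from the paper).
def f(X):
    return np.exp(-np.sum(X**2, axis=1))

# Sample points on [-1, 1]^2 for fitting and error measurement.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = f(X)

def ridge_approx_error(n):
    """Fit y ~ sum_i c_i * tanh(a_i . x + b_i) with n ridge units.

    Directions a_i and biases b_i are drawn at random; only the
    linear output weights c are fitted, via least squares.
    """
    A = rng.normal(size=(2, n))           # random directions a_i
    b = rng.uniform(-1.0, 1.0, size=n)    # random biases b_i
    Phi = np.tanh(X @ A + b)              # hidden-layer ridge features
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.sqrt(np.mean((Phi @ c - y) ** 2)))

# Error typically shrinks as the hidden layer grows.
errors = {n: ridge_approx_error(n) for n in (5, 20, 80)}
print(errors)
```

Even this crude random-direction construction shows the qualitative trend the theorem quantifies: more hidden nodes give a smaller approximation error for a sufficiently smooth target.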
Journal: Electronics and Communications in Japan, Part III: Fundamental Electronic Science (English translation of Denshi Tsushin Gakkai Ronbunshi)
Published: March 1996