Theoretical Analysis of the Advantage of Deepening Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

We propose two new criteria for understanding the advantage of deepening neural networks. Understanding this advantage requires knowing the expressivity of the functions computable by deep neural networks: unless a deep neural network has sufficient expressivity, it cannot perform well even when learning succeeds. The proposed criteria contribute to this understanding because they evaluate expressivity independently of the efficiency of learning. The first criterion measures the approximation accuracy of deep neural networks with respect to the target function, reflecting the fact that the goal of deep learning is to approximate the target function with a deep neural network. The second criterion characterizes the linear regions of functions computable by deep neural networks, reflecting the fact that a deep neural network with piecewise-linear activation functions is itself piecewise linear. Using these two criteria, we show that increasing the number of layers improves the expressivity of deep neural networks more effectively than increasing the number of units per layer.
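The second criterion rests on the observation that a ReLU network computes a piecewise-linear function, so its expressivity can be gauged by how many linear regions it carves the input space into. As a rough, self-contained illustration of that notion (not the criterion defined in the paper), the sketch below counts distinct ReLU activation patterns of randomly initialized 1-D networks along a dense grid; each distinct pattern corresponds to one linear region. The function names, widths, and sampling grid are arbitrary choices made for this example.

```python
import numpy as np

def relu_net(widths, seed=0):
    """Random weights for a fully connected ReLU network on a 1-D input."""
    rng = np.random.default_rng(seed)
    layers, d_in = [], 1
    for d_out in widths:
        layers.append((rng.standard_normal((d_out, d_in)), rng.standard_normal(d_out)))
        d_in = d_out
    return layers

def activation_pattern(layers, x):
    """ReLU on/off pattern at input x; one distinct pattern per linear region."""
    h, pattern = np.array([x]), []
    for W, b in layers:
        z = W @ h + b
        pattern.append(tuple(z > 0))
        h = np.maximum(z, 0.0)
    return tuple(pattern)

def count_regions(layers, xs):
    """Number of distinct activation patterns on the grid (a lower bound on linear regions)."""
    return len({activation_pattern(layers, x) for x in xs})

xs = np.linspace(-10, 10, 20000)
deep = relu_net([4, 4, 4])   # three hidden layers of width 4 (12 units total)
wide = relu_net([12])        # one hidden layer of width 12 (same unit count)
print("deep (4-4-4):", count_regions(deep, xs))
print("wide (12):   ", count_regions(wide, xs))
```

With the same total number of hidden units, the deeper configuration typically realizes more linear regions than the single wide layer, which is the qualitative depth-versus-width effect the abstract describes; a one-hidden-layer ReLU network with 12 units on a 1-D input can produce at most 13 regions, whereas composition across layers can multiply the count.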

Original language: English
Title of host publication: Proceedings - 19th IEEE International Conference on Machine Learning and Applications, ICMLA 2020
Editors: M. Arif Wani, Feng Luo, Xiaolin Li, Dejing Dou, Francesco Bonchi
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 479-484
Number of pages: 6
ISBN (Electronic): 9781728184708
DOIs
Publication status: Published - 2020 Dec
Event: 19th IEEE International Conference on Machine Learning and Applications, ICMLA 2020 - Virtual, Miami, United States
Duration: 2020 Dec 14 - 2020 Dec 17

Publication series

Name: Proceedings - 19th IEEE International Conference on Machine Learning and Applications, ICMLA 2020

Conference

Conference: 19th IEEE International Conference on Machine Learning and Applications, ICMLA 2020
Country/Territory: United States
City: Virtual, Miami
Period: 20/12/14 - 20/12/17

Keywords

  • approximation accuracy
  • deep learning theory
  • expressivity
  • linear region

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
