Constructing infinite deep neural networks with flexible expressiveness while training

Zhengbo Luo*, Zitang Sun, Weilian Zhou, Zizhang Wu, Sei-ichiro Kamata

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The depth of a deep neural network (DNN) refers to the number of hidden layers between its input and output layers. Once the network structure is settled, depth usually indicates a certain degree of computational cost (parameters and floating-point operations) and expressiveness. In this study, we experimentally investigate the effectiveness of using neural ordinary differential equations (NODEs) as a component that adds depth to relatively shallow networks in a continuous way, rather than by stacking more layers (discrete depth), and achieve improvements with fewer parameters. Experiments are conducted on classic DNNs, namely residual networks. Moreover, we construct infinitely deep neural networks with flexible complexity based on NODEs, enabling the system to adjust its complexity during training. Operating on the better hidden space provided by adaptive-step DNNs, the adaptive-step ResNet with NODE (ResODE) achieves better convergence and accuracy than standard networks, and the improvements are observed widely across popular benchmarks.
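The core idea, replacing a stack of discrete residual blocks with a single continuously integrated ODE block whose solver chooses its own number of steps, can be sketched as follows. This is a minimal illustration assuming the open-source torchdiffeq package, not the authors' implementation; the layer shapes and tolerance are arbitrary placeholders.

```python
# Minimal sketch (not the paper's code) of a continuous-depth block built on
# torchdiffeq, which provides adaptive-step ODE solvers for PyTorch.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Dynamics f(t, h) that plays the role of a residual block's mapping."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, t, h):
        # The solver calls f(t, h) at whichever time points it needs.
        return self.net(h)


class ODEBlock(nn.Module):
    """Integrates h'(t) = f(t, h) from t=0 to t=1.

    The adaptive solver decides how many function evaluations to spend,
    so the block's effective depth is not fixed in advance.
    """

    def __init__(self, func, tol=1e-3):
        super().__init__()
        self.func = func
        self.t = torch.tensor([0.0, 1.0])
        self.tol = tol

    def forward(self, h):
        out = odeint(self.func, h, self.t, rtol=self.tol, atol=self.tol)
        return out[-1]  # hidden state at t = 1
```

Because the solver picks its own number of function evaluations, tightening the tolerance effectively deepens the block while loosening it makes the block cheaper, which is the sense in which such a network's complexity can stay flexible while training.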

Original language: English
Pages (from-to): 257-268
Number of pages: 12
Journal: Neurocomputing
Volume: 487
DOIs
Publication status: Published - 28 May 2022

Keywords

  • Deep neural networks
  • Image processing
  • Neural ordinary differential equations

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
