TY - GEN
T1 - Random convolutional neural network based on distributed computing with decentralized architecture
AU - Xu, Yige
AU - Lu, Huijuan
AU - Ye, Minchao
AU - Yan, Ke
AU - Gao, Zhigang
AU - Jin, Qun
N1 - Funding Information:
Acknowledgments. This study is supported by the National Natural Science Foundation of China (Nos. 61272315, 61602431, 61701468, 61572164, 61877015 and 61850410531), the International Cooperation Project of Zhejiang Provincial Science and Technology Department (No. 2017C34003), the Project of Zhejiang Provincial Natural Science Foundation (LY19F020016), the Project of Zhejiang Provincial Science and Technology Innovation Activities for College Students (No. 2019R409030), and a student research project of China Jiliang University (2019X22030).
Publisher Copyright:
© Springer Nature Switzerland AG 2019.
PY - 2019
Y1 - 2019
N2 - In recent years, deep learning has made great progress in image classification and detection. Popular deep learning algorithms rely on deep networks and multiple rounds of back-propagation. In this paper, we propose two approaches to accelerate deep networks. The first expands the width of every layer: inspired by the Extreme Learning Machine, we set a large number of convolution kernels to extract features in parallel, which yields multiscale features and improves network efficiency. The second freezes part of the layers, which reduces back-propagation and speeds up the training procedure. Based on these two approaches, a random convolutional network architecture is proposed for image classification. In our architecture, every combination of random convolutions extracts distinct features, so many experiments are needed to choose the best combination. However, centralized computing limits the number of combinations that can be evaluated. Therefore, a decentralized architecture is used to enable the use of multiple combinations.
AB - In recent years, deep learning has made great progress in image classification and detection. Popular deep learning algorithms rely on deep networks and multiple rounds of back-propagation. In this paper, we propose two approaches to accelerate deep networks. The first expands the width of every layer: inspired by the Extreme Learning Machine, we set a large number of convolution kernels to extract features in parallel, which yields multiscale features and improves network efficiency. The second freezes part of the layers, which reduces back-propagation and speeds up the training procedure. Based on these two approaches, a random convolutional network architecture is proposed for image classification. In our architecture, every combination of random convolutions extracts distinct features, so many experiments are needed to choose the best combination. However, centralized computing limits the number of combinations that can be evaluated. Therefore, a decentralized architecture is used to enable the use of multiple combinations.
KW - Decentralized architecture
KW - Distributed computing
KW - Random convolution
UR - http://www.scopus.com/inward/record.url?scp=85081933575&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081933575&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-37429-7_50
DO - 10.1007/978-3-030-37429-7_50
M3 - Conference contribution
AN - SCOPUS:85081933575
SN - 9783030374280
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 504
EP - 510
BT - Human Centered Computing - 5th International Conference, HCC 2019, Revised Selected Papers
A2 - Miloševic, Danijela
A2 - Tang, Yong
A2 - Zu, Qiaohong
PB - Springer
T2 - 5th International Conference on Human Centered Computing, HCC 2019
Y2 - 5 August 2019 through 7 August 2019
ER -