TY - JOUR
T1 - Edge-Enabled Two-Stage Scheduling Based on Deep Reinforcement Learning for Internet of Everything
AU - Zhou, Xiaokang
AU - Liang, Wei
AU - Yan, Ke
AU - Li, Weimin
AU - Wang, Kevin I-Kai
AU - Ma, Jianhua
AU - Jin, Qun
N1 - Funding Information:
This work was supported in part by the National Key Research and Development Program of China under Grant 2019YFB1705200; in part by the National Natural Science Foundation of China under Grant 62072171 and Grant 72091515; in part by the Key Research and Development Program of Hunan Province of China under Grant 2020SK2089; and in part by the Open Fund of Key Laboratory of Hunan Province under Grant 2017TP1026.
Publisher Copyright:
© 2014 IEEE.
PY - 2023/2/15
Y1 - 2023/2/15
N2 - The Internet of Everything (IoE) has become a widely discussed topic and plays an increasingly indispensable role in modern intelligent applications. These applications are known for their real-time requirements under limited network and computing resources, which makes it highly demanding to transfer and process tremendous amounts of raw data in a cloud center. The edge-cloud computing infrastructure allows a large amount of data to be processed on nearby edge nodes, so that only the extracted and encrypted key features are transmitted to the data center. This offers the potential to achieve end-edge-cloud-based big data intelligence for IoE in a typical two-stage data processing scheme while satisfying a data security constraint. In this study, a deep-reinforcement-learning-enhanced two-stage scheduling (DRL-TSS) model is proposed to address the NP-hard problem, in terms of operation complexity, in end-edge-cloud Internet of Things systems; the model allocates computing resources within an edge-enabled infrastructure to ensure that computing tasks are completed at minimum cost. A presorting scheme based on Johnson's rule is developed to preprocess the two-stage tasks on multiple executors, and a DRL mechanism is designed to minimize the overall makespan based on a newly designed instant reward that accounts for the maximal utilization of each executor in edge-enabled two-stage scheduling. The performance of the proposed method is evaluated against three existing scheduling techniques, and experimental results demonstrate that it achieves better learning efficiency and scheduling performance, with a 1.1-approximation to the targeted optimum for IoE applications.
AB - The Internet of Everything (IoE) has become a widely discussed topic and plays an increasingly indispensable role in modern intelligent applications. These applications are known for their real-time requirements under limited network and computing resources, which makes it highly demanding to transfer and process tremendous amounts of raw data in a cloud center. The edge-cloud computing infrastructure allows a large amount of data to be processed on nearby edge nodes, so that only the extracted and encrypted key features are transmitted to the data center. This offers the potential to achieve end-edge-cloud-based big data intelligence for IoE in a typical two-stage data processing scheme while satisfying a data security constraint. In this study, a deep-reinforcement-learning-enhanced two-stage scheduling (DRL-TSS) model is proposed to address the NP-hard problem, in terms of operation complexity, in end-edge-cloud Internet of Things systems; the model allocates computing resources within an edge-enabled infrastructure to ensure that computing tasks are completed at minimum cost. A presorting scheme based on Johnson's rule is developed to preprocess the two-stage tasks on multiple executors, and a DRL mechanism is designed to minimize the overall makespan based on a newly designed instant reward that accounts for the maximal utilization of each executor in edge-enabled two-stage scheduling. The performance of the proposed method is evaluated against three existing scheduling techniques, and experimental results demonstrate that it achieves better learning efficiency and scheduling performance, with a 1.1-approximation to the targeted optimum for IoE applications.
KW - Deep reinforcement learning
KW - Internet of Everything (IoE)
KW - edge computing
KW - makespan
KW - two-stage scheduling
UR - http://www.scopus.com/inward/record.url?scp=85131767900&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131767900&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2022.3179231
DO - 10.1109/JIOT.2022.3179231
M3 - Article
AN - SCOPUS:85131767900
SN - 2327-4662
VL - 10
SP - 3295
EP - 3304
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 4
ER -