DeSVig: Decentralized Swift Vigilance against Adversarial Attacks in Industrial Artificial Intelligence Systems

Gaolei Li, Kaoru Ota*, Mianxiong Dong, Jun Wu, Jianhua Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)


Individually reinforcing the robustness of a single deep learning model gives only limited security guarantees, especially against adversarial examples. In this article, we propose DeSVig, a decentralized swift vigilance framework that identifies adversarial attacks in industrial artificial intelligence systems (IAISs), enabling an IAIS to correct mistakes within a few seconds. DeSVig is highly decentralized, which improves the effectiveness of recognizing abnormal inputs. We address the ultralow-latency challenges posed by industrial dynamics using specially designed mobile edge computing and generative adversarial networks. The most important advantage of our work is that it significantly reduces the risk of being deceived by adversarial examples, which is critical in safety-prioritized and delay-sensitive environments. In our experiments, adversarial examples of industrial electronic components are generated by several classical attack models. Experimental results demonstrate that DeSVig is more robust, efficient, and scalable than several state-of-the-art defenses.
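The abstract mentions generating adversarial examples with "classical attack models." The fast gradient sign method (FGSM) is the canonical such attack; as a hedged illustration (not the paper's actual attack code), the sketch below perturbs an input to a simple logistic classifier in the direction of the loss gradient's sign. All names and values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps=0.1):
    """Craft an FGSM adversarial example against a logistic classifier.

    For binary cross-entropy loss with linear logit w.x, the gradient of
    the loss w.r.t. the input x is (sigmoid(w.x) - y) * w. FGSM steps in
    the direction of this gradient's sign, scaled by the budget eps.
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Toy setup: a fixed linear classifier and a correctly classified input.
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])          # logit = 1.5 -> predicted class 1
y = 1.0
x_adv = fgsm_perturb(x, y, w, eps=0.3)

# The perturbation pushes the logit toward the decision boundary,
# i.e. toward misclassification, while changing x only slightly.
print(w @ x, w @ x_adv)
```

A defense framework such as the one described here would aim to flag inputs like `x_adv`, whose small, structured perturbation shifts the model's output disproportionately.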

Original language: English
Article number: 8892628
Pages (from-to): 3267-3277
Number of pages: 11
Journal: IEEE Transactions on Industrial Informatics
Issue number: 5
Publication status: Published - May 2020
Externally published: Yes


Keywords

  • Adversarial examples
  • deep learning
  • generative adversarial networks (GAN)
  • industrial artificial intelligence systems (IAISs)
  • mobile edge computing

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Information Systems
  • Computer Science Applications
  • Electrical and Electronic Engineering


