Property analysis of adversarially robust representation

Yoshihiro Fukuhara, Takahiro Itazuri, Hirokatsu Kataoka, Shigeo Morishima

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we address the open question: "What do adversarially robust models look at?" Many recent works have reported a trade-off between standard accuracy and adversarial robustness. According to prior work, this trade-off is rooted in the fact that adversarially robust models and standard, accurate models may rely on very different sets of features. However, the nature of this difference has not been well studied. In this paper, we analyze the difference both visually and quantitatively through a variety of experiments. Experimental results show that adversarially robust models look at objects at a larger scale than standard models and pay less attention to fine textures. Furthermore, although it has been claimed that adversarially robust features are incompatible with standard accuracy, we find that using them as pre-trained models even has a positive effect, particularly on low-resolution datasets.
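The adversarial robustness discussed in the abstract is typically measured against gradient-based attacks such as the fast gradient sign method (FGSM). The sketch below, which is purely illustrative and not from the paper (the toy logistic model, weights, and epsilon are all assumptions), shows how a small, sign-of-gradient perturbation can flip a standard model's prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step: nudge x in the direction that increases the
    binary cross-entropy loss of a logistic-regression model."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(BCE loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy "standard" model and a clean input with true label y = 1
# (all values are illustrative assumptions).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

clean_pred = sigmoid(w @ x + b) > 0.5        # correct on the clean input
x_adv = fgsm_perturb(x, w, b, y, eps=0.8)
adv_pred = sigmoid(w @ x_adv + b) > 0.5      # flipped by the perturbation
```

An adversarially robust model, by contrast, is trained so that its prediction is stable under all such bounded perturbations, which is what pushes it toward the larger-scale, less texture-sensitive features the experiments describe.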

Original language: English
Pages (from-to): 83-91
Number of pages: 9
Journal: Seimitsu Kogaku Kaishi / Journal of the Japan Society for Precision Engineering
Volume: 87
Issue number: 1
DOIs
Publication status: Published - 2021 Jan 5
Externally published: Yes

Keywords

  • Adversarial examples
  • Adversarial robustness
  • Interpretability
  • Robust representation
  • Trade-off between accuracy and robustness

ASJC Scopus subject areas

  • Mechanical Engineering
