Towards Adversarial Robustness via Feature Matching
Authors: Li, Z., Feng, C., Zheng, J., Wu, M. and Yu, H.
Journal: IEEE Access
Volume: 8
Pages: 88594-88603
eISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2993304
Abstract: Image classification systems are known to be vulnerable to adversarial attacks: imperceptible perturbations that nonetheless lead to grossly incorrect classifications. Adversarial training is one of the most effective defenses for improving the robustness of classifiers. In this work, we introduce an enhanced adversarial training approach. Motivated by humans' consistently accurate perception of their surroundings, we explore the artificial attention of deep neural networks in the context of adversarial classification. We begin with an empirical analysis of how the attention of an artificial system changes as the model undergoes adversarial attack. We observe that the class-specific attention is diverted, which in turn induces incorrect predictions. To address this, we propose a regularizer that encourages consistency between the artificial attention on a clean image and on its adversarial counterpart. Our method shows improved empirical robustness over the state of the art, securing 55.74% adversarial accuracy on CIFAR-10 with a perturbation budget of 8/255 under a challenging untargeted attack in white-box settings. Further evaluations on CIFAR-100 also show the potential of our method to deliver a desirable boost in adversarial robustness for deep neural networks. Code and trained models of our work are available at: https://github.com/lizhuorong/Towards-Adversarial-Robustness-via-Feature-matching.
https://eprints.bournemouth.ac.uk/34221/
Sources: Scopus; BURO EPrints
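For orientation, below is a minimal, illustrative PyTorch sketch of the general recipe the abstract describes: adversarial training combined with a penalty that keeps the network's internal response to a clean image consistent with its response to the adversarial counterpart. The toy SmallCNN model, the PGD attack, the mean-squared feature-matching term, and the weight lam are assumptions chosen for illustration only; the authors' actual attention-based regularizer and trained models are in the GitHub repository linked above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy CIFAR-style classifier that also returns its last feature map."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        feats = self.features(x)
        return self.classifier(feats.flatten(1)), feats

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-infinity PGD attack in a white-box setting."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, _ = model(x_adv)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def train_step(model, optimizer, x, y, lam=1.0):
    """One adversarial-training step; lam is a hypothetical weight on the consistency term."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    _, feats_clean = model(x)              # features (attention proxy) on the clean image
    logits_adv, feats_adv = model(x_adv)   # the same quantity on its adversarial counterpart
    loss = F.cross_entropy(logits_adv, y) + lam * F.mse_loss(feats_adv, feats_clean.detach())
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random data shaped like CIFAR-10.
model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(model, opt, x, y))

Matching intermediate feature maps here merely stands in for the paper's attention-consistency idea; substituting an attention map (e.g., a class-activation map) for the raw feature map would follow the same structure.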