Adversarial self-training for robustness and generalization

Authors: Li, Z., Wu, M., Jin, C., Yu, D. and Yu, H.

Journal: Pattern Recognition Letters

Volume: 185

Pages: 117-123

ISSN: 0167-8655

DOI: 10.1016/j.patrec.2024.07.020

Abstract:

Adversarial training is currently one of the most promising ways to achieve adversarial robustness of deep models. However, even the most sophisticated training methods are far from satisfactory, as improvement in robustness requires either heuristic strategies or more annotated data, which might be problematic in real-world applications. To alleviate these issues, we propose an effective training scheme that avoids the prohibitively high cost of additional labeled data by adapting the self-training scheme to adversarial training. In particular, we first use the confident prediction for a randomly-augmented image as the pseudo-label for self-training. Then we enforce consistency regularization by targeting the adversarially-perturbed version of the same image at the pseudo-label, which implicitly suppresses the distortion of representation in latent space. Despite its simplicity, extensive experiments show that our regularization brings significant gains in the adversarial robustness of a wide range of adversarial training methods and helps the model generalize its robustness to larger perturbations or even against unseen adversaries.
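Illustrative sketch (not from the paper): the abstract describes the scheme only at a high level, so the following PyTorch snippet shows one plausible reading of it, i.e., pseudo-labeling a confidently-predicted augmented view and then penalizing the adversarially-perturbed view for deviating from that pseudo-label. The function name, the confidence threshold, the PGD attack with its eps/alpha/steps parameters, and the placeholder augmentation are all assumptions; the authors' actual loss and attack configuration may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def adversarial_self_training_loss(model, x, augment,
                                   conf_threshold=0.95,
                                   eps=8 / 255, alpha=2 / 255, steps=10):
    """Hypothetical consistency loss: pseudo-label a random augmentation,
    then push the adversarially-perturbed image toward that pseudo-label."""
    model.eval()
    with torch.no_grad():
        # Step 1: pseudo-labels from confident predictions on an augmented view.
        probs = F.softmax(model(augment(x)), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = conf >= conf_threshold  # keep only confident samples

    # Step 2: craft adversarial examples against the pseudo-labels (PGD, assumed attack).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), pseudo_labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()

    # Step 3: consistency regularization on the confident subset only.
    model.train()
    per_sample = F.cross_entropy(model(x_adv), pseudo_labels, reduction="none")
    return (per_sample * mask.float()).sum() / mask.float().sum().clamp(min=1)


if __name__ == "__main__":
    # Toy usage with a dummy model, random "images", and a stand-in augmentation.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(8, 3, 32, 32)
    augment = lambda t: (t + 0.05 * torch.randn_like(t)).clamp(0, 1)
    loss = adversarial_self_training_loss(model, x, augment)
    loss.backward()
    print(loss.item())
```

In a full training loop this term would presumably be combined with the base adversarial training objective on labeled data; the weighting between the two is not specified in this record.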

Source: Scopus