Adversarial supervised contrastive learning

Authors: Li, Z., Yu, D., Wu, M., Jin, C. and Yu, H.

Journal: Machine Learning

Volume: 112

Issue: 6

Pages: 2105-2130

eISSN: 1573-0565

ISSN: 0885-6125

DOI: 10.1007/s10994-022-06269-7

Abstract:

Contrastive learning is widely used to pre-train deep models, followed by fine-tuning on downstream tasks for better performance or faster training. However, models pre-trained with contrastive learning are barely robust against adversarial examples in downstream tasks, since representations learned by self-supervision may lack both robustness and class-wise discrimination. To tackle these problems, we adapt the contrastive learning scheme to adversarial examples to enhance robustness, and we extend the self-supervised contrastive approach to the supervised setting to gain the ability to discriminate between classes. Equipped with these new designs, we propose adversarial supervised contrastive learning (ASCL), a novel framework for robust pre-training. Despite its simplicity, extensive experiments show that ASCL achieves significant margins in adversarial robustness over prior art, whether followed by lightweight standard fine-tuning or by adversarial fine-tuning. Moreover, ASCL also improves robustness to diverse natural corruptions, suggesting wide applicability across practical scenarios. Notably, ASCL demonstrates impressive results in robust transfer learning.
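
The abstract combines two standard components: adversarial example generation and a supervised (label-aware) contrastive loss. The PyTorch sketch below is a minimal illustration of that combination, not the authors' released implementation; the function names (supcon_loss, pgd_attack, ascl_step), the PGD budget (eps, alpha, steps), and the temperature are all assumptions following common practice for SupCon-style losses and L-infinity PGD attacks.

import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    # Supervised contrastive loss in the style of Khosla et al. (2020):
    # positives are all other in-batch samples sharing the anchor's label.
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature          # pairwise similarities
    n = features.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(diag, float('-inf'))         # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(diag, 0.0)         # avoid -inf * 0 = NaN
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag
    pos_count = pos_mask.sum(1).clamp(min=1)           # guard anchors w/o positives
    return -((log_prob * pos_mask).sum(1) / pos_count).mean()

def pgd_attack(encoder, x, labels, eps=8/255, alpha=2/255, steps=10):
    # Craft adversarial views by ascending the supervised contrastive
    # loss under an L-infinity budget (assumed attack configuration).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = supcon_loss(encoder(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def ascl_step(encoder, x, labels, optimizer):
    # One pre-training step: contrast clean and adversarial views
    # under a shared label-aware loss.
    x_adv = pgd_attack(encoder, x, labels)
    feats = encoder(torch.cat([x, x_adv]))             # clean + adversarial views
    loss = supcon_loss(feats, torch.cat([labels, labels]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Here encoder is assumed to map inputs to embedding vectors (e.g., a backbone plus a projection head). Training on the concatenation of clean and adversarial views lets the label-aware loss pull same-class clean and perturbed features together, which matches the intuition the abstract describes for ASCL.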

Sources: Scopus; Web of Science (Lite)
