HAT: Hybrid Adversarial Training to Make Robust Deep Learning Classifiers

Authors: Ali, Y. and Wani, M.A.

Published in: Proceedings of the 2022 9th International Conference on Computing for Sustainable Global Development, INDIACom 2022

Pages: 433-436

ISBN: 9789380544441

DOI: 10.23919/INDIACom54597.2022.9763284

Abstract:

Deep learning has become state-of-the-art in many real-life applications. However, recent studies show that deep learning models are susceptible to adversarial attacks: well-crafted perturbed inputs that fool a model. An adversarial attack can easily fool a classifier, posing a threat to deep learning models deployed in real-world applications. Our work explores the adversarial attacks and defenses available in the literature. We find that existing defense strategies perform well on greyscale image datasets such as MNIST and FMNIST, but their robustness drops sharply on RGB image datasets such as CIFAR10. Moreover, a model's robustness depends heavily on the type of adversarial examples on which it is trained. We devise a defense technique based on adversarial training, called Hybrid Adversarial Training (HAT). During training, HAT augments the data with state-of-the-art adversarial examples crafted by combining the DeepFool and FGSM attacks, increasing the robustness of deep learning models against a variety of attacks within a stipulated amount of time. The performance of HAT is evaluated empirically against cutting-edge adversarial attacks on various benchmark datasets. Our model achieves better robustness and training time than existing defense models, and it withstands strong adversarial attacks on CIFAR10, a benchmark RGB image dataset. HAT outperforms existing defenses, showing 15% higher robustness, while also maintaining the natural accuracy of classifiers.
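The abstract names the two attacks but not the full training procedure, so the following is only a minimal sketch of the core idea: craft part of each batch with FGSM, part with a simplified DeepFool, and train the classifier on the resulting hybrid adversarial batch. Everything beyond the two attack names is an assumption for illustration: the PyTorch framing, the hyperparameters, the per-sample DeepFool loop, and the 50/50 split between the attacks.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """FGSM: a single signed-gradient step on the cross-entropy loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def deepfool(model, x, num_classes=10, max_iter=10, overshoot=0.02):
    """Simplified DeepFool: iteratively push each sample over the
    nearest (linearized) class decision boundary."""
    x_adv = x.clone().detach()
    for i in range(x_adv.size(0)):              # per-sample loop; not optimized
        xi = x_adv[i:i + 1].clone()
        orig = model(xi).argmax(1).item()
        for _ in range(max_iter):
            xi = xi.clone().detach().requires_grad_(True)
            logits = model(xi)[0]
            if logits.argmax().item() != orig:  # label already flipped
                break
            g_orig = torch.autograd.grad(logits[orig], xi, retain_graph=True)[0]
            best_dist, best_step = float("inf"), None
            for k in range(num_classes):
                if k == orig:
                    continue
                g_k = torch.autograd.grad(logits[k], xi, retain_graph=True)[0]
                w = g_k - g_orig                # linearized boundary normal
                f = (logits[k] - logits[orig]).abs().item()
                dist = f / (w.norm().item() + 1e-8)
                if dist < best_dist:            # closest class boundary wins
                    best_dist = dist
                    best_step = dist * w / (w.norm() + 1e-8)
            xi = (xi + (1 + overshoot) * best_step).clamp(0, 1).detach()
        x_adv[i] = xi.detach()[0]
    return x_adv

def hat_step(model, x, y, optimizer):
    """One hybrid adversarial training step: build a mixed FGSM/DeepFool
    batch, then train on it as in standard adversarial training."""
    model.eval()                                # fix BN/dropout while attacking
    half = x.size(0) // 2
    x_hybrid = torch.cat([fgsm(model, x[:half], y[:half]),
                          deepfool(model, x[half:])], dim=0)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_hybrid), y)  # original labels, hybrid inputs
    loss.backward()
    optimizer.step()
    return loss.item()

In practice the split ratio, the FGSM budget eps, and the DeepFool iteration count would all be tuned per dataset; the sketch only illustrates how the two attacks could be interleaved in a single training step.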

Source: Scopus