Uncertainty estimation based adversarial attack in multi-class classification

Authors: Alarab, I. and Prakoonwit, S.

Journal: Multimedia Tools and Applications

Volume: 82

Issue: 1

Pages: 1519-1536

eISSN: 1573-7721

ISSN: 1380-7501

DOI: 10.1007/s11042-022-13269-1

Abstract:

Model uncertainty has gained popularity in machine learning because standard neural networks often produce overconfident, and therefore untrustworthy, predictions. Recently, the Monte-Carlo based adversarial attack (MC-AA) has been proposed as a simple uncertainty estimation method that is effective at capturing data points lying in the overlapping region of the decision boundary. MC-AA produces uncertainties by perturbing a given data point back and forth towards the decision boundary, using the idea of adversarial attacks. Despite its efficacy relative to other uncertainty estimation methods, MC-AA has so far been examined only on binary classification problems. In this paper, we present and examine MC-AA on multi-class classification tasks. We point out the method's limitation with multiple classes, which we tackle by converting the multi-class problem into a 'one-versus-all' classification. We compare MC-AA against other recent model uncertainty methods on Cora, a graph-structured dataset, and MNIST, an image dataset, using a variety of deep learning models to perform the classification. Consequently, we discuss the best model uncertainty results, obtained on Cora with the LEConv model (AUC score 0.889) and on MNIST with a CNN (AUC score 0.98), in comparison with other uncertainty estimation methods.
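
To illustrate the idea summarised in the abstract, the following is a minimal, hypothetical sketch of an MC-AA-style uncertainty score. It assumes a trained PyTorch classifier, FGSM-style signed-gradient perturbations over a symmetric range of step sizes (the 'back-and-forth' movement), a one-versus-all objective built from the predicted class, and mutual information as the disagreement measure. The function name `mc_aa_uncertainty` and these specific choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical MC-AA-style sketch (not the authors' code): perturb each input
# back and forth along the signed gradient of a one-versus-all objective and
# score uncertainty by the disagreement among the perturbed predictions.
import torch
import torch.nn.functional as F

def mc_aa_uncertainty(model, x, epsilons):
    model.eval()
    with torch.no_grad():
        y_pred = model(x).argmax(dim=-1)               # predicted class ("one" vs. all)
    probs = []
    for eps in epsilons:                               # e.g. torch.linspace(-0.1, 0.1, 21)
        x_adv = x.clone().detach().requires_grad_(True)
        p = F.softmax(model(x_adv), dim=-1)
        p_one = p.gather(1, y_pred.unsqueeze(1))       # probability of the predicted class
        loss = F.binary_cross_entropy(p_one, torch.ones_like(p_one))  # one-vs-all loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_pert = x_adv.detach() + eps * grad.sign()    # signed eps gives back-and-forth steps
        with torch.no_grad():
            probs.append(F.softmax(model(x_pert), dim=-1))
    p_stack = torch.stack(probs)                       # (n_eps, batch, n_classes)
    p_mean = p_stack.mean(dim=0)
    # Mutual information: entropy of the mean prediction minus mean per-sample entropy
    h_mean = -(p_mean * p_mean.clamp_min(1e-12).log()).sum(-1)
    mean_h = -(p_stack * p_stack.clamp_min(1e-12).log()).sum(-1).mean(0)
    return h_mean - mean_h                             # higher score = more uncertain
```

In a setup like this, the returned score could then be compared against correct versus incorrect predictions to produce an AUC-style evaluation; the exact protocol behind the AUC figures quoted above is described in the paper itself.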

https://eprints.bournemouth.ac.uk/37047/

Sources: Scopus; Web of Science (Lite); Manual; BURO EPrints
