Density-Preserving Sampling: Robust and Efficient Alternative to Cross-Validation for Error Estimation

Authors: Budka, M. and Gabrys, B.

http://eprints.bournemouth.ac.uk/20876/

Journal: IEEE Transactions on Neural Networks and Learning Systems

Volume: 24

Issue: 1

Pages: 22-34

ISSN: 2162-237X

eISSN: 2162-2388

DOI: 10.1109/TNNLS.2012.2222925

Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.
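To make the claimed workflow concrete, the following is a minimal, hedged sketch of the general idea rather than the authors' published procedure: a hypothetical helper builds a single deterministic, density-aware partition of the data (ranking points by a Gaussian-kernel density score, loosely in the spirit of the correntropy-based similarity mentioned above, and dealing them round-robin into folds), and a second helper then runs a single train/test pass over those folds in place of repeated CV. The function names, the density heuristic, and the scikit-learn classifier in the usage comment are all illustrative assumptions, not details taken from the paper.

    import numpy as np

    def representative_folds(X, n_folds=10):
        # Hypothetical illustration of the "representative folds" idea, NOT the
        # published DPS algorithm: rank points by a Gaussian-kernel density
        # estimate and deal them round-robin into folds, so every fold covers
        # both dense and sparse regions of the input distribution.
        X = np.asarray(X, dtype=float)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # pairwise squared distances
        gamma = 1.0 / (np.median(d2[d2 > 0]) + 1e-12)              # median-heuristic bandwidth
        density = np.exp(-gamma * d2).sum(axis=1)                  # kernel density proxy per point
        order = np.argsort(density)                                # sparse -> dense ordering
        return [order[start::n_folds] for start in range(n_folds)] # round-robin assignment

    def single_pass_error(model_factory, X, y, folds):
        # One train/test pass per fold, with no repetition: the appeal of a
        # representative split is that this single pass is already low-variance.
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        errors = []
        for i, test_idx in enumerate(folds):
            train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
            model = model_factory()                                # fresh model per fold
            model.fit(X[train_idx], y[train_idx])
            errors.append(np.mean(model.predict(X[test_idx]) != y[test_idx]))
        return float(np.mean(errors))

    # Example usage with a standard classifier (assumes scikit-learn is available):
    # from sklearn.neighbors import KNeighborsClassifier
    # folds = representative_folds(X, n_folds=10)
    # err = single_pass_error(lambda: KNeighborsClassifier(3), X, y, folds)

Under this sketch, estimating the error costs one model fit per fold (e.g., 10 fits for 10 folds), whereas 10-times-repeated 10-fold CV requires 100 fits; this is the kind of computational saving the abstract refers to.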
