Correntropy-based density-preserving data sampling as an alternative to standard cross-validation

This source preferred by Marcin Budka

Authors: Budka, M. and Gabrys, B.

http://eprints.bournemouth.ac.uk/21012/

http://ieeexplore.ieee.org/search/srchabstract.jsp?tp=&arnumber=5596717&queryText%3DBudka%26openedRefinements%3D*%26searchField%3DSearch+All

Start date: 18 July 2010

Pages: 1-8

Publisher: IEEE

ISBN: 9781424481262

ISSN: 1098-7576

DOI: 10.1109/IJCNN.2010.5596717

Estimation of the generalization ability of a predictive model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, like cross-validation (CV) or the bootstrap, are stochastic and thus require multiple repetitions to produce reliable results, which can be computationally expensive if not prohibitive. The correntropy-based Density Preserving Sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This makes it possible to produce low-variance error estimates with accuracy comparable to 10-times-repeated cross-validation, at a fraction of the computation required by CV, as investigated using a set of publicly available benchmark datasets and standard classifiers.
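The abstract names correntropy as the measure behind DPS but does not spell out how it is used. As a point of reference only, the sketch below shows the standard sample estimator of correntropy with a Gaussian kernel, V(X, Y) = (1/N) Σᵢ κ_σ(xᵢ − yᵢ); it is a hedged illustration of the quantity the method's name refers to, not the paper's implementation, and the function name, kernel width, and toy samples are assumptions.

```python
import math

def correntropy(x, y, sigma=1.0):
    """Sample correntropy between paired samples x and y with a Gaussian
    kernel: V(X, Y) = (1/N) * sum_i k_sigma(x_i - y_i). Larger values
    indicate greater similarity between the two samples."""
    n = len(x)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return sum(norm * math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))
               for a, b in zip(x, y)) / n

# Identical samples score higher than shifted ones, so a split whose
# parts score highly against the full set is (loosely) "representative".
identical = correntropy([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])
shifted = correntropy([0.0, 1.0, 2.0], [3.0, 4.0, 5.0])
assert identical > shifted
```

In this spirit, a density-preserving split would favour partitions whose subsets remain similar, in the correntropy sense, to the dataset they were drawn from, which is what removes the need for repeated random resampling.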

This data was imported from DBLP:

http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=5581822

Journal: IJCNN

ISBN: 978-1-4244-6916-1

This data was imported from Scopus:

Journal: Proceedings of the International Joint Conference on Neural Networks

ISBN: 9781424469178

This data was imported from Web of Science (Lite):

Journal: 2010 International Joint Conference on Neural Networks (IJCNN 2010)

The data on this page was last updated at 04:42 on September 20, 2017.