New measure of classifier dependency in multiple classifier systems

Authors: Ruta, D. and Gabrys, B.

Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume: 2364

Pages: 127-136

eISSN: 1611-3349

ISBN: 9783540438182

ISSN: 0302-9743

DOI: 10.1007/3-540-45428-4_13

Abstract:

Recent findings in the domain of combining classifiers provide a surprising revision of the usefulness of diversity for modelling combined performance. Although there is common agreement that a successful fusion system should be composed of accurate and diverse classifiers, experimental results show very weak correlations between various diversity measures and combining methods. In effect, neither the combined performance nor its improvement over the mean classifier performance seems to be measurable in a consistent and well-defined manner. At the same time, the most successful diversity measures, barely regarded as measuring diversity, are based on measuring error coincidences, and in doing so they move closer to the definitions of the combined errors themselves. Following this trend, we decided to use the combining error directly, normalized within its derivable error limits, as a measure of classifier dependency. Given its simplicity and representativeness, we chose the majority voting error for the construction of the measure. We examine this novel dependency measure on a number of real datasets and classifiers, showing its ability to model combining improvements over the individual mean.

Source: Scopus
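To make the abstract's idea concrete, the following Python sketch computes the majority voting error (MVE) of an ensemble from a binary error matrix and normalizes it between lower and upper limits. The bounds used here are simple combinatorial limits implied by the mean individual error rate alone, an assumption made for illustration rather than the exact derivable limits from the paper, and all function names (majority_voting_error, mve_limits, dependency) are hypothetical.

import numpy as np

def majority_voting_error(errors):
    """Fraction of samples misclassified by majority vote.

    `errors` is a binary (n_classifiers, n_samples) matrix in which
    1 marks a misclassification by that classifier on that sample.
    """
    n = errors.shape[0]
    # A sample is a combined error when more than half of the votes are wrong.
    return float(np.mean(errors.sum(axis=0) > n / 2))

def mve_limits(errors):
    """Illustrative lower/upper limits of the majority voting error.

    Simple combinatorial bounds implied by the mean individual error
    (an assumption for this sketch; the paper derives its own limits).
    """
    n = errors.shape[0]
    mass = float(errors.mean()) * n      # average number of wrong votes per sample
    k = n // 2                           # most wrong votes a sample can absorb and still be voted correctly
    lo = max(0.0, (mass - k) / (n - k))  # spread the errors as evenly as possible
    hi = min(1.0, mass / (k + 1))        # concentrate the errors on as few samples as possible
    return lo, hi

def dependency(errors):
    """Majority voting error normalized within its error limits:
    0 at the most favourable error coincidence pattern, 1 at the worst."""
    lo, hi = mve_limits(errors)
    mve = majority_voting_error(errors)
    return 0.0 if hi == lo else (mve - lo) / (hi - lo)

# Toy usage: three classifiers, 1000 samples, roughly 30% individual error each.
rng = np.random.default_rng(0)
E = (rng.random((3, 1000)) < 0.3).astype(int)
print(majority_voting_error(E), dependency(E))

Because coincident errors inflate the MVE toward its upper limit, the normalized value moves toward 1 as the classifiers' errors become more positively dependent, which matches the abstract's framing of the normalized combining error as a dependency measure.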

New measure of classifier dependency in multiple classifier systems

Authors: Ruta, D. and Gabrys, B.

Journal: Multiple Classifier Systems

Volume: 2364

Pages: 127-136

ISSN: 0302-9743

Source: Web of Science (Lite)

New Measure of Classifier Dependency in Multiple Classifier Systems

Authors: Ruta, D. and Gabrys, B.

Editors: Roli, F. and Kittler, J.

Volume: 2364

Pages: 127-136

Publisher: Springer Berlin / Heidelberg

ISBN: 978-3-540-43818-2

DOI: 10.1007/3-540-45428-4_13

http://www.springerlink.com/content/ehy3ckmeemmnt7xx/

Source: Manual

Preferred by: Dymitr Ruta

New Measure of Classifier Dependency in Multiple Classifier Systems

Authors: Ruta, D. and Gabrys, B.

Editors: Roli, F. and Kittler, J.

Journal: Multiple Classifier Systems

Volume: 2364

Pages: 127-136

Publisher: Springer

https://doi.org/10.1007/3-540-45428-4_13

Source: DBLP