# Banzhaf random forests: Cooperative game theory based random forests with consistency

**Authors:** Sun, J., Zhong, G., Huang, K. and Dong, J.

http://eprints.bournemouth.ac.uk/33294/

**Journal:** Neural Networks

**Volume:** 106

**Publisher:** Elsevier

**ISSN:** 0893-6080

Random forest algorithms have been widely used in many classification and regression applications. However, the theory of random forests lags far behind their applications. In this paper, we propose a novel random forest classification algorithm based on cooperative game theory. The Banzhaf power index is employed to evaluate the power of each feature by traversing possible feature coalitions. Hence, we call the proposed algorithm Banzhaf random forests (BRFs). Unlike the previously used information gain ratio, which only measures the power of each feature for classification and pays less attention to the intrinsic structure of the feature variables, the Banzhaf power index can measure the importance of each feature by computing the dependency among the group of features. More importantly, we have proved the consistency of BRFs, which narrows the gap between the theory and applications of random forests. Extensive experiments on several UCI benchmark data sets and three real-world applications show that BRFs perform significantly better than existing consistent random forests on classification accuracy, and better than or at least comparable with Breiman’s random forests, support vector machines (SVMs) and k-nearest neighbors (KNNs) classifiers.
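The paper's own traversal of feature coalitions is more involved, but the Banzhaf power index itself can be illustrated with a minimal sketch for a classical weighted voting game: a player's index is the fraction of "swing" situations in which its departure turns a winning coalition into a losing one. The weights, quota, and function name below are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

def banzhaf_indices(weights, quota):
    """Normalized Banzhaf power indices for a weighted voting game.

    A coalition wins if its total weight reaches the quota; player i is a
    "swing" in a winning coalition if removing i makes the coalition lose.
    Enumerates all 2^n coalitions, so this is only suitable for small n.
    """
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:  # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:  # i is a swing player
                        swings[i] += 1
    total_swings = sum(swings)
    return [s / total_swings for s in swings]
```

For example, `banzhaf_indices([4, 3, 2, 1], quota=6)` yields indices that sum to one, with the heaviest player holding the largest share of the voting power; in BRFs the analogous quantity is computed over groups of features rather than voters.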

**Pages:** 20-29

**eISSN:** 1879-2782

**DOI:** 10.1016/j.neunet.2018.06.006