CART (algorithm)

from Wikipedia, the free encyclopedia

CART (Classification And Regression Trees) is an algorithm used for decision-making. It is used to construct decision trees.

The CART algorithm was first published in 1984 by Leo Breiman et al.

General

An important feature of the CART algorithm is that it can only generate binary trees, which means that exactly two branches emerge from every split node. The central element of this algorithm is therefore finding an optimal binary split.

With the CART algorithm, the selection of attributes is controlled by maximizing the information content. CARTs are characterized by the fact that they separate the data optimally with respect to the classification. This is achieved with a threshold value that is sought for each attribute. The information content of an attribute is considered high if a classification with a high hit rate can be made by evaluating the attribute values that result from dividing the data at the threshold. For the decision trees computed by the CART algorithm, the following holds: the higher the information content of an attribute with respect to the target variable, the higher up in the tree this attribute can be found.

The decision thresholds result from optimizing the column entropy. The total entropies of the attributes result from a weighted mean of the column entropies.
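
As an illustration of this weighted mean (the notation $S$, $S_{\le s}$, $S_{>s}$ and the threshold $s$ is introduced here for illustration and does not come from the original text), the entropy of a data set $S$ with class proportions $p_k$ and the total entropy of an attribute split at threshold $s$ can be written as

$$H(S) = -\sum_{k} p_k \log_2 p_k, \qquad H_{\text{split}}(S, s) = \frac{|S_{\le s}|}{|S|}\, H(S_{\le s}) + \frac{|S_{> s}|}{|S|}\, H(S_{> s}),$$

where $S_{\le s}$ and $S_{> s}$ are the subsets of $S$ whose attribute value lies below or above the threshold.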

Mathematical description

Let $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$ be the set of training data with input variables $x_i$ and output values $y_i$. A decision tree can be formally represented by a function $f$ that assigns a prediction $f(x)$ of the output value to each input $x$. The CART algorithm for generating such a tree finds on its own the branches (nodes) and the associated rules for separating the data (split rules) with which this assignment becomes as good as possible.

Regression

First of all, let $y_i \in \mathbb{R}$, i.e. the output is a quantitative variable and the tree to be constructed is intended to solve a regression problem. In order to find an optimal tree, a training criterion (error function) must first be defined. The mean square deviation is typically used for this:

$$\frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2 ,$$

which is then minimized. In general, however, the error function can be freely chosen.

A set $R_m$ is assigned to each leaf $m$, so that the assigned disjoint sets $R_1, \dots, R_M$ of all leaves form a partition of $\{x_1, \dots, x_n\}$. One now looks for an estimated value $\hat{c}_m$ that is as close as possible to the true values $y_i$ for all $x_i \in R_m$. The estimator

$$\hat{c}_m = \operatorname{mean}\left( y_i \mid x_i \in R_m \right)$$

provides a solution for this. Since computing all possible trees cannot be done efficiently, a greedy algorithm is the best-suited approach. Specifically, one starts with a tree consisting of only one node and then successively finds locally optimal splits. At each node, the feature $j$ is determined that best subdivides all entries of the parent node into two regions, for which an optimal split rule must be found in each case. For ordinal features this is done by means of a threshold $s$ that generates the two regions $R_1(j, s) = \{x \mid x_j \le s\}$ and $R_2(j, s) = \{x \mid x_j > s\}$ for all $x$ in the partition of the parent node, in such a way that the error function is minimized. If the features are not ordinal, the split rules are based on the assignment to the various feature categories. Formally, this can be written as

$$\sum_{x_i \in R_1(j,s)} \left( y_i - \hat{c}_1 \right)^2 \;+\; \sum_{x_i \in R_2(j,s)} \left( y_i - \hat{c}_2 \right)^2 ,$$

whereby the two sums are minimized over $j$ and $s$.

Starting from this single node, two new nodes are added in each step, which in turn are branched further until a termination condition (e.g. the maximum path length from the root to the leaves) is met.
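
The greedy split search described above can be illustrated with a short Python sketch. This is a simplified illustration, not a reference implementation: the helper names `sse` and `best_split`, as well as the exhaustive search over observed feature values as candidate thresholds, are assumptions made here for clarity.

```python
def sse(values):
    """Sum of squared deviations from the mean (the error of one region)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(X, y):
    """Find the feature index j and threshold s that minimize the summed
    squared error of the two resulting regions R1 and R2 (ordinal features)."""
    best = None  # (error, feature index, threshold)
    n_features = len(X[0])
    for j in range(n_features):
        # candidate thresholds: the observed values of feature j
        for s in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] <= s]
            right = [yi for row, yi in zip(X, y) if row[j] > s]
            if not left or not right:
                continue
            error = sse(left) + sse(right)
            if best is None or error < best[0]:
                best = (error, j, s)
    return best

# Example: one numeric feature; the best threshold separates the
# low outputs from the high outputs.
X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]
print(best_split(X, y))  # -> (error, 0, 3.0)
```

Applying `best_split` recursively to the two resulting regions, until a termination condition is met, yields the greedy tree construction described above.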

Pruning

Since the tree obtained this way is in most cases too complex, i.e. prone to overfitting, it can (and should) be pruned. Overfitting can be prevented by introducing a regularization term into the error function that does not depend on the data and penalizes the complexity of the decision tree. This prevents the tree from learning specific properties of the training data which, in general (i.e. for other data from the same population), do not contribute to correct predictions.

The second option, which is described below, is to first construct a relatively large tree $T_0$, which is then pruned afterwards. Let $T \subseteq T_0$ be a proper subgraph that can be created by removing inner nodes of $T_0$ (i.e. the partitions of the children of such a node are merged). Let $|T|$ denote the number of leaves of such a subgraph, where each leaf $m$ is assigned the partition $R_m$ with $N_m$ elements. Let $\hat{c}_m$ be as above and

$$Q_m(T) = \frac{1}{N_m} \sum_{x_i \in R_m} \left( y_i - \hat{c}_m \right)^2 .$$

The idea is to find a subtree $T \subseteq T_0$ that minimizes the objective

$$C_\alpha(T) = \sum_{m=1}^{|T|} N_m \, Q_m(T) + \alpha \, |T|$$

for a given $\alpha \geq 0$. For this purpose, a separate data set (test set) is used to prevent overfitting (see cross-validation).

The freely selectable parameter $\alpha$ describes the trade-off between the prediction quality of the decision tree and its complexity. For a given $\alpha$, a descending sequence of subtrees is created by removing, at each step, the inner node that produces the smallest per-node increase of $\sum_m N_m Q_m(T)$, until only a single node is left. There is then a uniquely determined smallest subtree that minimizes $C_\alpha$.
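
As a minimal sketch of the cost-complexity criterion described above, the following Python fragment evaluates $C_\alpha(T)$ for a toy tree. The nested-dict tree representation and the helper names are assumptions made here for illustration.

```python
def leaves(tree):
    """Collect all leaves; a leaf is a dict holding the y-values of its region R_m."""
    if "children" not in tree:
        return [tree]
    return [leaf for child in tree["children"] for leaf in leaves(child)]

def node_error(y_values):
    """Q_m(T): mean squared deviation from the region's mean value."""
    mean = sum(y_values) / len(y_values)
    return sum((v - mean) ** 2 for v in y_values) / len(y_values)

def cost_complexity(tree, alpha):
    """C_alpha(T) = sum_m N_m * Q_m(T) + alpha * |T|."""
    leaf_list = leaves(tree)
    fit = sum(len(leaf["y"]) * node_error(leaf["y"]) for leaf in leaf_list)
    return fit + alpha * len(leaf_list)

# Example: a tree with two leaves versus the collapsed single-leaf tree.
# For small alpha the larger tree has the lower criterion, for large alpha
# the pruned (single-leaf) tree wins.
big = {"children": [{"y": [1.0, 1.2]}, {"y": [5.0, 5.2]}]}
small = {"y": [1.0, 1.2, 5.0, 5.2]}
for alpha in (0.0, 20.0):
    print(alpha, cost_complexity(big, alpha), cost_complexity(small, alpha))
```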

Classification

Now let the output values be categorical, i.e. $y_i$ takes values in a finite set and, without loss of generality, $y_i \in \{1, 2, \dots, K\}$. The only changes to the algorithm compared to regression concern the error functions for the construction of the tree and for the pruning.

We define

$$\hat{p}_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k),$$

where $N_m$ is the number of elements in $R_m$ and $I$ is the indicator function.

This means that the entries in each region $R_m$ can now be classified by majority decision:

$$k(m) = \operatorname*{arg\,max}_{k} \; \hat{p}_{mk} .$$

Possible error functions indicate the so-called impurity of the partitions and can be defined as:

$$Q_m(T) = 1 - \hat{p}_{m k(m)} \quad \text{(misclassification error)}$$
$$Q_m(T) = \sum_{k=1}^{K} \hat{p}_{mk} \left( 1 - \hat{p}_{mk} \right) \quad \text{(Gini index)}$$
$$Q_m(T) = - \sum_{k=1}^{K} \hat{p}_{mk} \log \hat{p}_{mk} \quad \text{(cross-entropy)}$$

Each of these functions can be used in the construction of a decision tree instead of the mean square deviation, so that the essential part of the algorithm remains unchanged.
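
A minimal Python sketch of the three impurity measures, computed from the class proportions $\hat{p}_{mk}$ of one region; the function names and the use of the natural logarithm are choices made here for illustration.

```python
import math

def misclassification_error(p):
    """1 - max_k p_k for a list of class proportions p."""
    return 1.0 - max(p)

def gini_index(p):
    """sum_k p_k * (1 - p_k)."""
    return sum(pk * (1.0 - pk) for pk in p)

def cross_entropy(p):
    """-sum_k p_k * log(p_k), skipping zero proportions."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

# Example: a pure region versus an evenly mixed region (two classes).
for p in ([1.0, 0.0], [0.5, 0.5]):
    print(p, misclassification_error(p), gini_index(p), cross_entropy(p))
```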

References

  1. L. Breiman, J. H. Friedman, R. A. Olshen, C. J. Stone: CART: Classification and Regression Trees. Wadsworth, Belmont, CA 1984.
  2. T. Grubinger, A. Zeileis, K.-P. Pfeiffer: evtree: Evolutionary Learning of Globally Optimal Classification and Regression Trees in R. Journal of Statistical Software, 2014, Volume 61, Issue 1.