Decision trees are popular classification models, providing high accuracy and intuitive explanations. However, as the tree size grows, model interpretability deteriorates. Traditional tree-induction algorithms, such as C4.5 and CART, rely on impurity-reduction functions that promote the discriminative power of each split. Thus, although these traditional methods are accurate in practice, there has been no theoretical guarantee that they produce small trees. In this paper, we justify the use of a general family of impurity functions, including the popular entropy and Gini-index functions, in scenarios where small trees are desirable, by showing that a simple enhancement equips them with complexity guarantees. We consider a general setting in which the objects to be classified are drawn from an arbitrary probability distribution, classification can be binary or multi-class, and splitting tests carry non-uniform costs. As a measure of tree complexity, we adopt the expected cost of classifying an object drawn from the input distribution, which, in the uniform-cost case, is the expected number of tests. We propose a tree-induction algorithm that achieves a logarithmic approximation guarantee on tree complexity; this guarantee is tight up to a constant factor under mild assumptions. The algorithm recursively selects the test that maximizes a greedy criterion defined as a weighted sum of three components. The first two components favor tests that improve the balance and the cost-efficiency of the tree, respectively, while the third, impurity-reduction component favors more discriminative tests. As our empirical evaluation shows, compared to the original heuristics, the enhanced algorithms strike an excellent balance between predictive accuracy and tree complexity.
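The three-component greedy criterion described above can be sketched in code. The following is an illustrative sketch only, not the paper's exact criterion: the function names (`greedy_score`, `best_test`), the default weights, and the specific forms of the balance term (min split fraction) and cost-efficiency term (inverse cost) are assumptions made for illustration; the impurity-reduction component is instantiated here with Shannon entropy (information gain).

```python
import math

def entropy(labels):
    """Shannon entropy of the empirical class distribution (impurity measure)."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def greedy_score(objects, labels, test, cost,
                 w_balance=1.0, w_cost=1.0, w_impurity=1.0):
    """Score a candidate binary test as a weighted sum of three components:
    split balance, cost-efficiency, and impurity reduction.
    The component definitions and default weights are illustrative assumptions."""
    left = [i for i, x in enumerate(objects) if test(x)]
    right = [i for i, x in enumerate(objects) if not test(x)]
    if not left or not right:
        return float("-inf")  # a test that splits nothing is useless
    p = len(left) / len(objects)
    balance = min(p, 1 - p)          # higher for near-even splits
    cost_eff = 1.0 / cost            # cheaper tests score higher
    gain = entropy(labels) - (       # impurity reduction (information gain)
        p * entropy([labels[i] for i in left])
        + (1 - p) * entropy([labels[i] for i in right])
    )
    return w_balance * balance + w_cost * cost_eff + w_impurity * gain

def best_test(objects, labels, tests):
    """tests: list of (test_fn, cost) pairs; return the index of the
    highest-scoring test, as the recursive induction step would select it."""
    return max(range(len(tests)),
               key=lambda i: greedy_score(objects, labels, *tests[i]))

# A balanced, perfectly separating test beats an unbalanced one of equal cost.
objs, ys = [0, 1, 2, 3], [0, 0, 1, 1]
candidates = [(lambda x: x < 2, 1.0), (lambda x: x < 1, 1.0)]
print(best_test(objs, ys, candidates))  # → 0
```

In the uniform-cost case the cost-efficiency term is constant across tests, so the criterion reduces to trading off balance against impurity reduction.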