We develop a theoretical framework for the analysis of oblique decision trees, where the splits at each decision node occur at linear combinations of the covariates (as opposed to conventional tree constructions that force axis-aligned splits involving only a single covariate). While this methodology has garnered significant attention from the computer science and optimization communities since the mid-80s, the advantages oblique trees offer over their axis-aligned counterparts remain only empirically justified, and explanations for their success are largely based on heuristics. Filling this long-standing gap between theory and practice, we show that oblique regression trees (constructed by recursively minimizing squared error) satisfy a type of oracle inequality and can adapt to a rich library of regression models consisting of linear combinations of ridge functions and their limit points. This provides a quantitative baseline to compare and contrast decision trees with other, less interpretable methods, such as projection pursuit regression and neural networks, which target similar model forms. Contrary to popular belief, one need not always trade off interpretability for accuracy. Specifically, we show that, under suitable conditions, oblique decision trees achieve predictive accuracy comparable to that of neural networks for the same library of regression models. To address the combinatorial complexity of finding the optimal splitting hyperplane at each decision node, our proposed theoretical framework can accommodate many existing computational tools in the literature. Our results rely on (arguably surprising) connections between recursive adaptive partitioning and sequential greedy approximation algorithms for convex optimization problems (e.g., orthogonal greedy algorithms), which may be of independent theoretical interest.
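To make the axis-aligned vs. oblique distinction concrete, here is a minimal Python sketch (not the paper's algorithm) of a single squared-error-minimizing split node. It restricts the search to a small, hand-picked candidate set of split directions; the combinatorial difficulty of searching over all hyperplanes, which the abstract alludes to, is precisely what this toy search sidesteps. All names, the data, and the candidate directions are illustrative assumptions.

```python
def sse(ys):
    """Sum of squared errors around the group mean (0.0 for an empty group)."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_oblique_split(X, y, directions):
    """Exhaustively search candidate directions w and thresholds t for the
    split 1{w . x <= t} minimizing the total squared error of a two-piece
    constant fit (one recursive step of squared-error minimization)."""
    best_err, best_w, best_t = float("inf"), None, None
    for w in directions:
        proj = [sum(wi * xi for wi, xi in zip(w, x)) for x in X]
        for t in sorted(set(proj))[:-1]:  # thresholds at observed projections
            left = [yi for p, yi in zip(proj, y) if p <= t]
            right = [yi for p, yi in zip(proj, y) if p > t]
            err = sse(left) + sse(right)
            if err < best_err:
                best_err, best_w, best_t = err, w, t
    return best_err, best_w, best_t

# Toy responses follow the oblique rule y = 1{x1 + x2 > 1}: no single
# axis-aligned split separates the two groups exactly, but the
# linear-combination direction (1, 1) does.
X = [(0.2, 0.2), (0.8, 0.1), (0.1, 0.8), (0.9, 0.4), (0.4, 0.9), (0.8, 0.8)]
y = [0, 0, 0, 1, 1, 1]

err_axis, _, _ = best_oblique_split(X, y, [(1, 0), (0, 1)])  # axis-aligned only
err_obl, w, t = best_oblique_split(X, y, [(1, 0), (0, 1), (1, 1)])
```

On this toy data, the axis-aligned search leaves residual error, while adding the oblique direction `(1, 1)` drives the split's squared error to zero, which is the empirical advantage the paper seeks to explain theoretically.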