Tree-based methods such as Random Forests are learning algorithms that have become an integral part of the statistical toolbox. The last decade has shed some light on their theoretical properties, such as their consistency for regression tasks. However, the usual proofs assume normal error terms as well as an additive regression function, and they are rather technical. We overcome these issues by introducing a simple and catchy technique for proving consistency under quite general assumptions. To this end, we introduce a new class of naive trees, which partition the feature space completely at random and independently of the data. We then give a direct proof of their consistency. Using naive trees to bound the error of more complex tree-based approaches such as univariate and multivariate CART, Extremely Randomized Trees, and Random Forests, we deduce the consistency of these methods as well. Since naive trees appear too simple for practical application, we further analyze their finite-sample properties in a simulation and a small benchmark study. We find slow convergence and rather poor predictive performance. Based on these results, we finally discuss to what extent consistency proofs help to justify the application of complex learning algorithms.
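To make the idea of a naive tree concrete, the following is a minimal illustrative sketch in Python (not the authors' implementation; the class name, depth parameter, and the handling of empty leaves are assumptions made for illustration). It grows a partition of the unit cube completely at random, independently of the data, and uses the training sample only to fill in the leaf averages.

```python
import numpy as np


class NaiveTree:
    """Illustrative sketch of a naive regression tree on [0, 1]^d.

    The partition is grown completely at random, independently of the data;
    the training sample is used only to compute the leaf averages.
    (Hypothetical code, not taken from the paper.)
    """

    def __init__(self, depth=5, seed=0):
        self.depth = depth                      # number of random splits per path
        self.rng = np.random.default_rng(seed)

    def _grow(self, lower, upper, depth):
        # Split the current cell at a uniformly random point along a
        # uniformly random coordinate -- without looking at X or y.
        if depth == 0:
            return {}                           # leaf node, filled during fit
        j = int(self.rng.integers(len(lower)))
        t = float(self.rng.uniform(lower[j], upper[j]))
        left_upper, right_lower = upper.copy(), lower.copy()
        left_upper[j], right_lower[j] = t, t
        return {"feature": j, "threshold": t,
                "left": self._grow(lower, left_upper, depth - 1),
                "right": self._grow(right_lower, upper, depth - 1)}

    def _leaf(self, x):
        node = self.root
        while "feature" in node:
            node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        return node

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y, float)
        d = X.shape[1]
        self.root = self._grow(np.zeros(d), np.ones(d), self.depth)
        self.default = float(y.mean())          # fallback prediction for empty leaves
        for xi, yi in zip(X, y):                # data used only for the leaf averages
            self._leaf(xi).setdefault("ys", []).append(yi)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, float):
            ys = self._leaf(x).get("ys")
            preds.append(float(np.mean(ys)) if ys else self.default)
        return np.array(preds)
```

A usage example would be `NaiveTree(depth=8).fit(X_train, y_train).predict(X_test)` for features scaled to the unit cube; the key point is that the partition is fixed before the data are consulted, which is what makes the consistency argument comparatively simple.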