The last decade has shed some light on theoretical properties of such methods, for example their consistency in regression tasks. In the present paper, we propose a new class of very simple learners based on so-called naive trees, which partition the feature space completely at random and independently of the data. Although this construction seems counter-intuitive, we prove that naive trees and their ensembles are consistent under fairly general assumptions. Naive trees nevertheless appear too simple for practical application, so we also analyze their finite-sample behavior in a simulation study and a small benchmark study, where we observe slow convergence and rather poor predictive performance. Based on these results, we finally discuss to what extent consistency proofs help to justify the application of complex learning algorithms.
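To make the construction concrete, the following is a minimal sketch of a data-independent partitioning tree in the spirit described above: all split features and split points are drawn at random before the data are seen, and only the leaf predictions (leaf-wise response means) depend on the sample. The class name, depth parameter, and API are illustrative assumptions, not the paper's implementation.

```python
import random


class NaiveTree:
    """Regression tree whose partition is drawn completely at random,
    independently of the training data (illustrative sketch only)."""

    def __init__(self, bounds, depth):
        # bounds: list of [low, high] intervals, one per feature.
        self.leaf = depth == 0
        self.value = 0.0  # leaves with no training points predict 0.0
        if not self.leaf:
            # Random split feature and random split point, data-independent.
            j = random.randrange(len(bounds))
            lo, hi = bounds[j]
            self.feature, self.threshold = j, random.uniform(lo, hi)
            left_b = [list(b) for b in bounds]
            right_b = [list(b) for b in bounds]
            left_b[j][1] = self.threshold
            right_b[j][0] = self.threshold
            self.left = NaiveTree(left_b, depth - 1)
            self.right = NaiveTree(right_b, depth - 1)

    def _leaf(self, x):
        node = self
        while not node.leaf:
            node = node.left if x[node.feature] <= node.threshold else node.right
        return node

    def fit(self, X, y):
        # Only this step touches the data: each leaf predicts the
        # mean response of the training points falling into it.
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            leaf = self._leaf(xi)
            sums[leaf] = sums.get(leaf, 0.0) + yi
            counts[leaf] = counts.get(leaf, 0) + 1
        for leaf, c in counts.items():
            leaf.value = sums[leaf] / c
        return self

    def predict(self, x):
        return self._leaf(x).value
```

An ensemble in this spirit would simply average the predictions of several independently drawn naive trees fitted on the same sample.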