Decision forests (Forests), in particular random forests and gradient boosted trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. In particular, Forests dominate other methods on tabular data, that is, when the feature space is unstructured, so that the signal is invariant to a permutation of the feature indices. However, on structured data lying on a manifold (such as images, text, and speech), deep networks (Networks), specifically convolutional deep networks (ConvNets), tend to outperform Forests. We conjecture that at least part of the reason for this is that the input to Networks is not simply the feature magnitudes, but also their indices. In contrast, naive Forest implementations fail to explicitly consider feature indices. A recently proposed Forest approach demonstrates that, at each node, Forests implicitly sample a random matrix from some specific distribution. These Forests, like some classes of Networks, learn by partitioning the feature space into convex polytopes corresponding to linear functions. We build on that approach and show that one can choose distributions in a manifold-aware fashion to incorporate feature locality. We demonstrate its empirical performance on data whose features live on three different manifolds: a torus, images, and time series. Moreover, we demonstrate its strength in multivariate simulated settings and also show superiority in predicting surgical outcome in epilepsy patients and predicting movement direction from raw stereotactic EEG data from non-motor brain regions. In all simulations and real data, the Manifold Oblique Random Forest (MORF) algorithm outperforms approaches that ignore feature space structure and challenges the performance of ConvNets. Moreover, MORF runs fast and maintains interpretability and theoretical justification.
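The manifold-aware distributions described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: for image data, instead of sampling arbitrary feature subsets at a split node, one samples a contiguous rectangular patch of pixels, so the resulting projection respects pixel locality before the usual scalar threshold is applied. Function names and parameters (`sample_patch_projection`, `max_patch`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch_projection(height, width, max_patch=3, rng=rng):
    """Sketch of a manifold-aware projection for one split node:
    return a flat weight vector that sums pixels in a random
    contiguous patch, so the projection respects image locality."""
    ph = int(rng.integers(1, max_patch + 1))          # patch height
    pw = int(rng.integers(1, max_patch + 1))          # patch width
    top = int(rng.integers(0, height - ph + 1))       # patch position
    left = int(rng.integers(0, width - pw + 1))
    proj = np.zeros((height, width))
    proj[top:top + ph, left:left + pw] = 1.0          # local support only
    return proj.ravel()

# Project flattened toy "images" onto the sampled patch; a candidate
# split would then threshold this scalar, as an axis-aligned tree does.
images = rng.standard_normal((5, 8 * 8))              # 5 toy 8x8 images
a = sample_patch_projection(8, 8)
scores = images @ a                                   # one scalar per sample
```

An unstructured oblique forest would instead sample nonzero weights at arbitrary indices; restricting the support to a patch is what encodes the manifold structure of the feature space.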