A soft tree is an actively studied variant of a decision tree that updates its splitting rules via gradient methods. Although soft trees can take a variety of architectures, the theoretical impact of the architecture is not well understood. In this paper, we formulate and analyze the Neural Tangent Kernel (NTK) induced by soft tree ensembles for arbitrary tree architectures. This kernel leads to the remarkable finding that, in ensemble learning with infinitely many trees, only the number of leaves at each depth is relevant to the tree architecture. In other words, if the number of leaves at each depth is fixed, the training behavior in function space and the resulting generalization performance are exactly the same across different tree architectures, even when those architectures are not isomorphic. We also show that the NTK of asymmetric trees such as decision lists does not degenerate as the trees become infinitely deep. This is in contrast to perfect binary trees, whose NTK is known to degenerate, leading to worse generalization performance as the trees get deeper.
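As a rough illustration of the leaf-count claim (this is not the authors' implementation), the following JAX sketch compares the empirical NTK at initialization for two non-isomorphic soft tree architectures that share the same number of leaves at each depth. The sigmoid gating, the 1/sqrt(M) NTK-style scaling, and all names in the sketch are assumptions made here for illustration only.

```python
# Hypothetical sketch (not the authors' code) of the main claim: two
# NON-isomorphic soft trees with identical per-depth leaf counts induce
# (approximately, for a large finite ensemble) the same empirical NTK.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

# An architecture is a list of leaves; each leaf is its root-to-leaf path,
# given as (internal-node index, gating sign) pairs: +1 routes through
# sigmoid(w.x), -1 through 1 - sigmoid(w.x) = sigmoid(-w.x).
# Both trees below have 5 internal nodes and leaf counts (0, 2, 4) at
# depths (1, 2, 3), yet their shapes are not isomorphic.
T1 = [[(0, 1), (1, 1), (2, 1)], [(0, 1), (1, 1), (2, -1)],
      [(0, 1), (1, -1), (3, 1)], [(0, 1), (1, -1), (3, -1)],
      [(0, -1), (4, 1)], [(0, -1), (4, -1)]]
T2 = [[(0, 1), (1, 1), (2, 1)], [(0, 1), (1, 1), (2, -1)],
      [(0, 1), (1, -1)],
      [(0, -1), (3, 1), (4, 1)], [(0, -1), (3, 1), (4, -1)],
      [(0, -1), (3, -1)]]

def tree_out(w, v, x, arch):
    """One soft tree: sum over leaves of the leaf value times the product
    of sigmoid gates along that leaf's root-to-leaf path."""
    out = 0.0
    for leaf, path in enumerate(arch):
        p = 1.0
        for node, sign in path:
            p = p * jax.nn.sigmoid(sign * jnp.dot(w[node], x))
        out = out + v[leaf] * p
    return out

def ensemble_out(params, x, arch):
    # 1/sqrt(M) output scaling, in the style of NTK parameterization.
    w, v = params  # shapes (M, nodes, dim) and (M, leaves)
    f = jax.vmap(lambda wi, vi: tree_out(wi, vi, x, arch))(w, v)
    return f.sum() / jnp.sqrt(f.shape[0])

def empirical_ntk(params, x1, x2, arch):
    """Inner product of parameter gradients at two inputs."""
    g1, _ = ravel_pytree(jax.grad(ensemble_out)(params, x1, arch))
    g2, _ = ravel_pytree(jax.grad(ensemble_out)(params, x2, arch))
    return jnp.dot(g1, g2)

key = jax.random.PRNGKey(0)
kw, kv, k1, k2 = jax.random.split(key, 4)
M, dim = 10000, 3
w = jax.random.normal(kw, (M, 5, dim))  # same i.i.d. N(0,1) draws for both
v = jax.random.normal(kv, (M, 6))
x1 = jax.random.normal(k1, (dim,))
x2 = jax.random.normal(k2, (dim,))
for name, arch in [("T1", T1), ("T2", T2)]:
    print(name, empirical_ntk((w, v), x1, x2, arch))
```

Under these assumptions, the two printed kernel values should agree up to a fluctuation that shrinks as the ensemble size M grows, consistent with the abstract's statement that only the per-depth leaf counts matter in the infinite-tree limit.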