We show how neural models can be used to realize piece-wise constant functions such as decision trees. Our approach builds on ReLU networks, which are piece-wise linear and whose gradients with respect to the inputs are therefore locally constant. We formally establish the equivalence between the classes of locally constant networks and decision trees. Moreover, we highlight several advantageous properties of locally constant networks, including how they realize decision trees with parameter sharing across branches and leaves. Indeed, only $M$ neurons suffice to implicitly model an oblique decision tree with $2^M$ leaf nodes. The neural representation also enables us to adopt many tools developed for deep networks (e.g., DropConnect (Wan et al., 2013)) while implicitly training decision trees. We demonstrate that our method outperforms alternative techniques for training oblique decision trees in the context of molecular property classification and regression tasks.
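To make the locally constant gradient property concrete, the following is a minimal numpy sketch (not the paper's implementation; the helper names `activation_pattern` and `input_gradient` are illustrative). For a one-hidden-layer ReLU network $f(x) = w_2^\top \mathrm{relu}(W_1 x + b_1)$ with $M$ hidden neurons, the input gradient equals $w_2^\top \mathrm{diag}(a) W_1$, where $a \in \{0,1\}^M$ is the activation pattern; the gradient is therefore constant within each region sharing a pattern, and the $M$ bits index at most $2^M$ regions, mirroring the leaves of a depth-$M$ oblique tree.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU network with M hidden neurons on d-dimensional inputs.
M, d = 3, 2
W1 = rng.normal(size=(M, d))
b1 = rng.normal(size=M)
w2 = rng.normal(size=M)

def activation_pattern(x):
    # 0/1 vector recording which of the M neurons are active at x.
    return (W1 @ x + b1 > 0).astype(float)

def input_gradient(x):
    # Gradient of f(x) = w2 . relu(W1 x + b1) w.r.t. x:
    # (w2 * a) @ W1  ==  w2^T diag(a) W1, with a the activation pattern.
    a = activation_pattern(x)
    return (w2 * a) @ W1

x = rng.normal(size=d)
# A tiny perturbation almost surely stays in the same linear region.
x_nearby = x + 1e-6 * rng.normal(size=d)

# Same activation pattern implies an identical input gradient:
# the gradient is locally constant, like the output of a tree leaf.
assert np.array_equal(activation_pattern(x), activation_pattern(x_nearby))
assert np.allclose(input_gradient(x), input_gradient(x_nearby))
print("pattern:", activation_pattern(x), "gradient:", input_gradient(x))
```

Here the M-bit pattern plays the role of the root-to-leaf decision path of an oblique tree, with each hyperplane $W_1^{(i)} x + b_1^{(i)} = 0$ acting as a shared branching test.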