We propose a novel high-performance, parameter- and computation-efficient deep learning architecture for tabular data, the Gated Additive Tree Ensemble (GATE). GATE uses a gating mechanism, inspired by GRUs, as a feature representation learning unit with a built-in feature selection mechanism. We combine it with an ensemble of differentiable, non-linear decision trees, re-weighted with simple self-attention, to predict the desired output. We demonstrate that GATE is a competitive alternative to state-of-the-art (SOTA) approaches such as GBDTs, NODE, and FT-Transformers through experiments on several public datasets (both classification and regression). The code will be released as soon as the paper is out of review.
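For intuition, the following is a minimal, hypothetical sketch of the pipeline the abstract describes: a GRU-inspired gating unit for feature representation learning, an additive ensemble of differentiable (soft) decision trees, and a simple learned re-weighting of tree outputs standing in for the paper's self-attention step. It assumes a PyTorch implementation; the class names, layer choices, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFeatureLearningUnit(nn.Module):
    """GRU-inspired gating over the raw features (illustrative sketch).

    An update gate blends a candidate representation with the input;
    the gate doubles as a soft feature-selection mechanism.
    """

    def __init__(self, n_features: int):
        super().__init__()
        self.update_gate = nn.Linear(n_features, n_features)
        self.candidate = nn.Linear(n_features, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.sigmoid(self.update_gate(x))   # soft feature selection
        h = torch.tanh(self.candidate(x))        # candidate representation
        return z * h + (1.0 - z) * x             # gated blend with the input


class SoftDecisionTree(nn.Module):
    """A depth-`depth` differentiable (soft) decision tree."""

    def __init__(self, n_features: int, depth: int = 3):
        super().__init__()
        self.depth = depth
        n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
        self.splits = nn.Linear(n_features, n_internal)   # one soft split per internal node
        self.leaf_values = nn.Parameter(torch.zeros(n_leaves))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p_right = torch.sigmoid(self.splits(x))            # routing prob. at each internal node
        leaf_prob = torch.ones(x.size(0), 1, device=x.device)
        node = 0
        # Expand routing probabilities level by level to get leaf-reach probabilities.
        for level in range(self.depth):
            n_nodes = 2 ** level
            p = p_right[:, node:node + n_nodes]
            leaf_prob = torch.stack([leaf_prob * (1 - p), leaf_prob * p], dim=-1).flatten(1)
            node += n_nodes
        return leaf_prob @ self.leaf_values                 # one scalar output per sample


class GATESketch(nn.Module):
    """Gated feature learning + additive soft-tree ensemble, re-weighted before summing.

    A single linear layer followed by a softmax stands in here for the
    paper's "simple self-attention" over the tree outputs.
    """

    def __init__(self, n_features: int, n_trees: int = 8, tree_depth: int = 3):
        super().__init__()
        self.gflu = GatedFeatureLearningUnit(n_features)
        self.trees = nn.ModuleList(
            [SoftDecisionTree(n_features, tree_depth) for _ in range(n_trees)]
        )
        self.reweight = nn.Linear(n_trees, n_trees)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.gflu(x)
        tree_out = torch.stack([tree(h) for tree in self.trees], dim=1)  # (batch, n_trees)
        weights = F.softmax(self.reweight(tree_out), dim=1)              # learned re-weighting
        return (weights * tree_out).sum(dim=1)                           # additive ensemble


if __name__ == "__main__":
    model = GATESketch(n_features=16)
    y_hat = model(torch.randn(32, 16))   # regression-style scalar output per sample
    print(y_hat.shape)                   # torch.Size([32])
```

For classification, the same sketch would emit one such ensemble output per class (or per logit) instead of a single scalar.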