While Generative Adversarial Networks (GANs) achieve spectacular results on unstructured data like images, a gap remains on tabular data, where state-of-the-art supervised learning still largely favours decision-tree (DT)-based models. This paper proposes a new path forward for the generation of tabular data, exploiting decades-old understanding of the supervised task's best components for DT induction, from losses (properness) and models (tree-based) to algorithms (boosting). The \textit{properness} condition on the supervised loss -- which postulates the optimality of Bayes rule -- leads us to a variational GAN-style loss formulation which is \textit{tight} when discriminators meet a calibration property trivially satisfied by DTs and which, under common assumptions about the supervised loss, yields ``one loss to train against them all'' for the generator: the $\chi^2$. We then introduce tree-based generative models, \textit{generative trees} (GTs), meant to mirror on the generative side the good properties of DTs for classifying tabular data, together with a boosting-compliant \textit{adversarial} training algorithm for GTs. We also introduce \textit{copycat training}, in which the generator copies at run time the underlying tree (graph) of the discriminator DT and completes it for the hardest discriminative task, with boosting-compliant convergence. We test our algorithms on tasks including fake/real discrimination, training from fake data and missing data imputation. Each of these tasks shows that GTs can provide comparatively simple -- and interpretable -- contenders to sophisticated state-of-the-art methods for data generation (using neural network models) or missing data imputation (relying on multiple imputation by chained equations with complex tree-based modelling).
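For reference, the $\chi^2$ divergence invoked above is the standard Pearson $\chi^2$; a reminder of its usual definition (not restated from the paper) for distributions $P$ and $Q$ with $P \ll Q$:
\[
\chi^2(P \,\|\, Q) \;=\; \int \left(\frac{\mathrm{d}P}{\mathrm{d}Q} - 1\right)^2 \mathrm{d}Q ,
\]
so minimising it drives the generator to make the density ratio $\mathrm{d}P/\mathrm{d}Q$ close to $1$ everywhere.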
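To make the notion of a generative tree concrete, here is a minimal hypothetical sketch (not the paper's actual implementation): internal nodes route a sample left or right with a learned branching probability, and each leaf covers an axis-aligned box of the feature space from which it samples uniformly. All names (`GTNode`, `p_left`, `box`) are illustrative assumptions.

```python
import random

class GTNode:
    """One node of a toy generative tree.

    Internal nodes carry a branching probability `p_left`;
    leaves carry a `box` of per-feature (lo, hi) bounds.
    """

    def __init__(self, p_left=None, left=None, right=None, box=None):
        self.p_left = p_left          # branching probability (internal nodes only)
        self.left, self.right = left, right
        self.box = box                # list of (lo, hi) intervals (leaves only)

    def sample(self, rng=random):
        # Descend stochastically to a leaf, then sample uniformly in its box.
        node = self
        while node.box is None:
            node = node.left if rng.random() < node.p_left else node.right
        return [rng.uniform(lo, hi) for lo, hi in node.box]

# Toy GT over one feature: ~70% of mass on [0, 0.5), ~30% on [0.5, 1).
leaf_a = GTNode(box=[(0.0, 0.5)])
leaf_b = GTNode(box=[(0.5, 1.0)])
root = GTNode(p_left=0.7, left=leaf_a, right=leaf_b)

samples = [root.sample()[0] for _ in range(10_000)]
frac_left = sum(s < 0.5 for s in samples) / len(samples)
```

In adversarial or copycat training, the branching probabilities and leaf boxes would be fitted against a DT discriminator; this sketch only shows why sampling from such a model is a cheap root-to-leaf walk.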