We present \emph{TabRet}, a pre-trainable Transformer-based model for tabular data. TabRet is designed to work on downstream tasks that contain columns not seen during pre-training. Unlike other methods, TabRet has an extra learning step before fine-tuning, called \emph{retokenizing}, which calibrates the feature embeddings based on the masked autoencoding loss. In experiments, we pre-trained TabRet on a large collection of public health surveys and fine-tuned it on classification tasks in healthcare, where it achieved the best AUC performance on four datasets. An ablation study further shows that retokenizing and random-shuffle augmentation of columns during pre-training contributed to the performance gains. The code is available at \url{https://github.com/pfnet-research/tabret}.
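To make the retokenizing step concrete, the following is a minimal sketch, not the authors' implementation (which is in the linked repository). It freezes a stand-in pre-trained encoder/decoder and trains only fresh per-column affine embeddings for the downstream columns, using a masked-autoencoding objective (MSE on masked cells). All names, shapes, the mask-token handling, and the reconstruction target are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

d_model, n_cols, batch = 32, 6, 64

# Stand-ins for TabRet's frozen pre-trained backbone.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)
decoder = nn.Linear(d_model, 1)  # reconstructs each column's raw value
for p in list(encoder.parameters()) + list(decoder.parameters()):
    p.requires_grad_(False)

# Fresh affine embeddings (tokenizers) for the downstream columns unseen
# in pre-training; only these are updated during retokenizing.
weight = nn.Parameter(0.02 * torch.randn(n_cols, d_model))
bias = nn.Parameter(torch.zeros(n_cols, d_model))
mask_token = torch.zeros(d_model)  # placeholder token for masked columns
opt = torch.optim.Adam([weight, bias], lr=1e-3)

x = torch.randn(batch, n_cols)  # toy batch of numerical downstream features
for step in range(100):
    tokens = x[..., None] * weight + bias         # (batch, n_cols, d_model)
    mask = torch.rand(batch, n_cols) < 0.5        # mask random columns
    tokens = torch.where(mask[..., None], mask_token, tokens)
    recon = decoder(encoder(tokens)).squeeze(-1)  # (batch, n_cols)
    loss = ((recon - x)[mask] ** 2).mean()        # MSE on masked cells only
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}

Under this reading, fine-tuning would then proceed as usual with a task-specific head, the point being that only the new column embeddings are calibrated while the pre-trained body stays fixed.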