We present \emph{TabRet}, a pre-trainable Transformer-based model for tabular data. TabRet is designed to work on downstream tasks that contain columns not seen in pre-training. Unlike other methods, TabRet has an extra learning step before fine-tuning, called \emph{retokenizing}, which calibrates feature embeddings based on the masked autoencoding loss. In experiments, we pre-trained TabRet on a large collection of public health surveys and fine-tuned it on classification tasks in healthcare; TabRet achieved the best AUC performance on four datasets. In addition, an ablation study shows that retokenizing and random-shuffle augmentation of columns during pre-training contributed to the performance gains.
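To make the retokenizing step concrete, the following is a minimal PyTorch-style sketch of the idea: per-column tokenizers for unseen downstream columns are trained with a masked-autoencoding reconstruction loss while the pre-trained Transformer body stays frozen. All module names, dimensions, the masking scheme, and the choice to co-train the reconstruction heads are illustrative assumptions, not TabRet's actual implementation.

\begin{verbatim}
# Sketch of "retokenizing": calibrate new per-column embeddings with a
# masked-autoencoding loss while the Transformer body stays frozen.
# Everything here (names, sizes, masking) is an illustrative assumption.
import torch
import torch.nn as nn

class NumericTokenizer(nn.Module):
    """Embeds one numeric column as a d-dimensional token."""
    def __init__(self, d: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d) * 0.02)
        self.b = nn.Parameter(torch.zeros(d))
    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch,)
        return x.unsqueeze(-1) * self.w + self.b          # (batch, d)

class MaskedTabularAE(nn.Module):
    """Transformer body with per-column tokenizers and recon heads."""
    def __init__(self, n_cols: int, d: int = 32):
        super().__init__()
        self.tokenizers = nn.ModuleList(NumericTokenizer(d)
                                        for _ in range(n_cols))
        self.mask_token = nn.Parameter(torch.zeros(d))
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.heads = nn.ModuleList(nn.Linear(d, 1) for _ in range(n_cols))

    def forward(self, x: torch.Tensor, mask: torch.Tensor):
        # x, mask: (batch, n_cols); mask is True where a cell is hidden.
        tokens = torch.stack([tok(x[:, j])
                              for j, tok in enumerate(self.tokenizers)], dim=1)
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        h = self.encoder(tokens)                        # (batch, n_cols, d)
        return torch.cat([head(h[:, j])
                          for j, head in enumerate(self.heads)], dim=1)

def retokenize(model, x, steps: int = 200, mask_ratio: float = 0.5):
    """Train only the (new) tokenizers on masked reconstruction; the
    encoder is frozen. In the paper's setup the whole pre-trained body
    is fixed; here the heads also train since this sketch skips
    pre-training."""
    for p in model.encoder.parameters():
        p.requires_grad_(False)
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=1e-3)
    for _ in range(steps):
        mask = torch.rand_like(x) < mask_ratio
        recon = model(x, mask)
        loss = ((recon - x)[mask] ** 2).mean()  # loss on masked cells only
        opt.zero_grad()
        loss.backward()
        opt.step()

# Usage on synthetic downstream data with unseen columns:
x_new = torch.randn(256, 6)          # e.g. 6 new survey columns
model = MaskedTabularAE(n_cols=6)
retokenize(model, x_new)
\end{verbatim}

After this calibration, the new tokenizers map the unseen columns into the embedding space the frozen body was pre-trained on, and ordinary fine-tuning can proceed.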