Supervised learning datasets often contain privileged information: features that are available at training time but not at test time, e.g., the ID of the annotator who provided the label. We argue that privileged information is useful for explaining away label noise, thereby reducing the harmful impact of noisy labels. We develop a simple and efficient method for supervised learning with neural networks: it transfers, via weight sharing, the knowledge learned with privileged information, and approximately marginalizes over privileged information at test time. Our method, TRAM (TRansfer and Marginalize), has minimal training-time overhead and the same test-time cost as a model that does not use privileged information. TRAM performs strongly on the CIFAR-10H, ImageNet and Civil Comments benchmarks.
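To make the transfer-and-marginalize idea concrete, below is a minimal PyTorch sketch of one way the setup described in the abstract could be wired up: a shared backbone, a head that sees privileged information (PI) during training, and a PI-free head used at test time. All names (TramSketch, pi_head, no_pi_head), the MLP backbone, the layer sizes, and the stop-gradient on the PI-free head are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TramSketch(nn.Module):
    """Sketch of a TRAM-style two-head model (names and sizes are assumptions).

    The backbone is shared between both heads; the PI head consumes backbone
    features concatenated with privileged information, while the PI-free head
    consumes features alone and is the only head evaluated at test time.
    """

    def __init__(self, in_dim: int, pi_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Head that sees privileged information; only usable during training.
        self.pi_head = nn.Linear(hidden_dim + pi_dim, num_classes)
        # PI-free head used at test time. In this sketch we stop its gradients
        # from reaching the backbone, so the shared features are shaped by the
        # PI branch (knowledge transfer via weight sharing, per the abstract).
        self.no_pi_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, pi: torch.Tensor = None):
        feats = self.backbone(x)
        if pi is not None:
            # Training: both heads produce logits from the shared features.
            logits_pi = self.pi_head(torch.cat([feats, pi], dim=-1))
            logits_no_pi = self.no_pi_head(feats.detach())
            return logits_pi, logits_no_pi
        # Test time: PI is unavailable; the PI-free head stands in for an
        # approximate marginalization over PI, at the cost of a plain model.
        return self.no_pi_head(feats)


# Illustrative usage with random data (shapes are arbitrary assumptions).
model = TramSketch(in_dim=32, pi_dim=8, hidden_dim=64, num_classes=10)
x = torch.randn(4, 32)
pi = torch.randn(4, 8)
y = torch.randint(0, 10, (4,))

logits_pi, logits_no_pi = model(x, pi)
loss = F.cross_entropy(logits_pi, y) + F.cross_entropy(logits_no_pi, y)
loss.backward()

test_logits = model(torch.randn(4, 32))  # no PI available at test time
```

In this sketch, both heads are supervised with the (possibly noisy) labels during training, while at test time only the PI-free head is evaluated, which is consistent with the abstract's claim that the test-time cost equals that of a model that does not use privileged information.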