Detecting out-of-distribution (OOD) samples plays a key role in open-world and safety-critical applications such as autonomous systems and healthcare. Self-supervised representation learning techniques (e.g., contrastive learning and pretext-task learning) are well suited for learning representations that can identify OOD samples. In this paper, we propose a simple framework that leverages multi-task transformation learning to train effective representations for OOD detection, achieving state-of-the-art OOD detection performance and robustness on several image datasets. We empirically observe that OOD performance depends on the choice of data transformations, which in turn depends on the in-domain training set. To address this problem, we propose a simple mechanism for automatically selecting the transformations and modulating their effect on representation learning, without requiring any OOD training samples. We characterize the criteria for a desirable OOD detector in real-world applications and demonstrate the efficacy of our proposed technique against a diverse range of state-of-the-art OOD detection techniques.
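To make the idea of multi-task transformation learning concrete, the following is a minimal, illustrative sketch (not the authors' code): a shared encoder is trained jointly on a contrastive objective and a transformation-prediction pretext task (rotation prediction), with a per-task weight standing in for the paper's automatic transformation selection and modulation. The architecture, the rotation pretext task, and the weighting scheme `lambda_rot` are assumptions for illustration only.

```python
# Sketch of multi-task transformation learning for OOD representations.
# Assumptions: tiny CNN encoder, SimCLR-style contrastive loss, rotation
# pretext task; none of these specifics are claimed to match the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny CNN encoder standing in for a larger backbone (e.g., a ResNet)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, dim)    # projection head for contrastive loss
        self.rot_head = nn.Linear(64, 4)  # predicts rotation: 0/90/180/270 degrees

    def forward(self, x):
        h = self.net(x)
        return F.normalize(self.proj(h), dim=1), self.rot_head(h)

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss between two augmented views."""
    z = torch.cat([z1, z2], dim=0)              # (2N, d), rows L2-normalized
    sim = z @ z.t() / tau                       # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))           # mask self-similarity
    n = z1.size(0)                              # positive of i is i+N (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.rand(8, 3, 32, 32)                    # stand-in in-domain batch
view1 = x + 0.05 * torch.randn_like(x)          # two stochastic views for contrast
view2 = x + 0.05 * torch.randn_like(x)

# Pretext task: rotate each image by a random multiple of 90 degrees and
# ask the rotation head to recover which transformation was applied.
rot_labels = torch.randint(0, 4, (x.size(0),))
x_rot = torch.stack([torch.rot90(img, k.item(), dims=(1, 2))
                     for img, k in zip(x, rot_labels)])

z1, _ = encoder(view1)
z2, _ = encoder(view2)
_, rot_logits = encoder(x_rot)

# A per-task weight like lambda_rot is where an automatic selection mechanism
# could modulate each transformation's effect; here it is a fixed placeholder.
lambda_rot = 1.0
loss = nt_xent(z1, z2) + lambda_rot * F.cross_entropy(rot_logits, rot_labels)
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, down-weighting `lambda_rot` toward zero removes the pretext task's influence on the learned representation, which is one simple way a selection mechanism could suppress transformations that hurt OOD performance on a given in-domain training set.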