Self-supervised learning (SSL) has emerged as a promising alternative for creating supervisory signals for real-world tasks, avoiding the extensive cost of careful labeling. SSL is particularly attractive for unsupervised problems such as anomaly detection (AD), where labeled anomalies are costly to obtain, difficult to simulate, or even nonexistent. A large catalog of augmentation functions has been used for SSL-based AD (SSAD), and recent works have observed that the type of augmentation has a significant impact on performance. Motivated by these observations, this work puts SSAD under a larger lens and carefully investigates the role of data augmentation in AD through extensive experiments on many testbeds. Our main finding is that self-supervision acts as yet another model hyperparameter and should be chosen carefully with regard to the nature of the true anomalies in the data. That is, alignment between the augmentation and the underlying anomaly-generating mechanism is the key to the success of SSAD; in its absence, SSL can even impair (!) detection performance. Moving beyond proposing another SSAD method, our study contributes to a better understanding of this growing area and lays out new directions for future research.