Universal anomaly detection remains a challenging problem in machine learning and medical image analysis. It is possible to learn an expected distribution from a single class of normative samples, e.g., through epistemic uncertainty estimates, auto-encoding models, or from synthetic anomalies in a self-supervised way. The performance of self-supervised anomaly detection approaches is still inferior to that of methods that use examples from known unknown classes to shape the decision boundary. However, outlier exposure methods often fail to identify unknown unknowns. Here we discuss an improved self-supervised single-class training strategy that supports the approximation of probabilistic inference with loosened feature locality constraints. We show that up-scaling of gradients with histogram-equalised images is beneficial for recently proposed self-supervision tasks. Our method is integrated into several out-of-distribution (OOD) detection models, and we show evidence that it outperforms the state-of-the-art on various benchmark datasets. Source code will be publicly available by the time of the conference.
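The histogram equalisation mentioned above can be sketched as follows; this is a minimal illustrative implementation for single-channel images normalised to [0, 1] (the function name, bin count, and interpolation scheme are our assumptions, not the authors' exact preprocessing):

```python
import numpy as np

def equalise_histogram(image: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Remap pixel intensities so their histogram becomes roughly uniform.

    Assumes a single-channel image with values in [0, 1]; illustrative only.
    """
    flat = image.ravel()
    hist, bin_edges = np.histogram(flat, bins=n_bins, range=(0.0, 1.0))
    # Cumulative distribution function, normalised to [0, 1]
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    # Map each pixel through the CDF: equalised value = CDF(intensity)
    bin_centres = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    return np.interp(flat, bin_centres, cdf).reshape(image.shape)
```

Passing pixel values through their own empirical CDF flattens the intensity distribution, which increases contrast in the densely populated intensity ranges.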