Machine learning methods such as deep neural networks (DNNs), despite their success across different domains, are known to often generate incorrect predictions with high confidence on inputs outside their training distribution. The deployment of DNNs in safety-critical domains requires detection of out-of-distribution (OOD) data so that DNNs can abstain from making predictions on such inputs. A number of methods have recently been developed for OOD detection, but there is still room for improvement. We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection. It relies on a novel base non-conformity measure and a new aggregation method, used in the inductive conformal anomaly detection framework, thereby guaranteeing a bounded false detection rate. We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results. We also show that iDECODe can detect adversarial examples.
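For context, the sketch below illustrates the generic inductive conformal anomaly detection (ICAD) rule that underlies the bounded false detection rate claim: non-conformity scores are computed on a held-out calibration set, a conformal p-value is formed for each test input, and inputs with small p-values are flagged as OOD. The calibration scores and the score of the test input are placeholders here; iDECODe's actual equivariance-based non-conformity measure and aggregation method are not reproduced in this snippet.

```python
import numpy as np

def icad_p_value(calib_scores, test_score):
    """Conformal p-value for inductive conformal anomaly detection.

    calib_scores: non-conformity scores of a held-out calibration set
    test_score:   non-conformity score of the test input
    """
    n = len(calib_scores)
    # Fraction of calibration scores at least as non-conforming as the test input,
    # with the usual +1 correction for the test point itself.
    return (1.0 + np.sum(np.asarray(calib_scores) >= test_score)) / (n + 1.0)

# Hypothetical usage with synthetic scores (placeholder for a real base
# non-conformity measure applied to calibration and test data).
rng = np.random.default_rng(0)
calib_scores = rng.normal(size=1000)   # scores on in-distribution calibration data
test_score = 3.5                       # score of a suspicious test input
epsilon = 0.05                         # target bound on the false detection rate

p = icad_p_value(calib_scores, test_score)
is_ood = p < epsilon                   # flag as OOD when the p-value is small
print(f"p-value = {p:.4f}, flagged OOD: {is_ood}")
```

Under exchangeability of in-distribution data, the p-value is (super-)uniform, so flagging inputs with p-value below epsilon misclassifies in-distribution inputs as OOD with probability at most epsilon, which is the sense in which the false detection rate is bounded.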