Reconstruction-based approaches to anomaly detection tend to fall short when applied to complex datasets with target classes that possess high inter-class variance. Similar to the idea of self-taught learning used in transfer learning, many domains are rich with similar unlabeled datasets that could be leveraged as a proxy for out-of-distribution samples. In this paper, we introduce the Latent-Insensitive Autoencoder (LIS-AE), in which unlabeled data from a similar domain is utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder so that it is only capable of reconstructing one task. We provide theoretical justification for the proposed training process and loss functions, along with an extensive ablation study highlighting important aspects of our model. We test our model in multiple anomaly detection settings, presenting quantitative and qualitative analyses that showcase the significant performance improvement of our model on anomaly detection tasks.
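To make the training idea above concrete, the following is a minimal PyTorch-style sketch. It is an illustration only: the layer sizes, the subtractive positive-minus-negative loss, the weight alpha, and the choice to update only the latent layer are assumptions inferred from this abstract, not the paper's exact architecture, loss functions, or training schedule.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LISAE(nn.Module):
    """Toy autoencoder with an explicit latent (bottleneck) layer."""
    def __init__(self, in_dim=784, hid=256, lat=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.latent = nn.Linear(hid, lat)  # the layer shaped by negative examples
        self.decoder = nn.Sequential(nn.Linear(lat, hid), nn.ReLU(),
                                     nn.Linear(hid, in_dim))

    def forward(self, x):
        return self.decoder(self.latent(self.encoder(x)))

def shape_latent_step(model, opt, x_pos, x_neg, alpha=1.0):
    # Keep reconstructing positives well while pushing reconstruction error
    # up for negatives; alpha and the subtractive form are assumptions here,
    # not the paper's actual loss.
    loss_pos = F.mse_loss(model(x_pos), x_pos)
    loss_neg = F.mse_loss(model(x_neg), x_neg)
    loss = loss_pos - alpha * loss_neg
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_pos.item(), loss_neg.item()

model = LISAE()
# Optimizing only the bottleneck parameters is one reading of "shaping the
# latent layer"; the surrounding layers keep their learned reconstruction.
opt = torch.optim.Adam(model.latent.parameters(), lr=1e-3)
x_pos = torch.randn(64, 784)  # stand-in for in-distribution (target-class) data
x_neg = torch.randn(64, 784)  # stand-in for similar-domain unlabeled negatives
shape_latent_step(model, opt, x_pos, x_neg)

# At test time, per-sample reconstruction error serves as the anomaly score.
score = F.mse_loss(model(x_pos), x_pos, reduction="none").mean(dim=1)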