Reconstruction-based approaches to anomaly detection tend to fall short when applied to complex datasets whose target classes possess high inter-class variance. Similar to the idea of self-taught learning used in transfer learning, many domains are rich with \textit{similar} unlabeled datasets that can be leveraged as a proxy for out-of-distribution samples. In this paper we introduce the Latent-Insensitive Autoencoder (LIS-AE), in which unlabeled data from a similar domain are utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder so that it is only capable of reconstructing one task. Since the underlying goal of LIS-AE is to reconstruct only in-distribution samples, it is naturally applicable to class-incremental learning. We treat class-incremental learning as multiple anomaly detection tasks by adding a different latent layer for each class and using the other classes available in each task as negative examples to shape each latent layer. We test our model in multiple anomaly detection and class-incremental settings, presenting quantitative and qualitative analyses that showcase the accuracy and flexibility of our model for both anomaly detection and class-incremental learning.
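To make the latent-shaping idea concrete, below is a minimal PyTorch-style sketch of one plausible instantiation; it is our own illustration, not the paper's exact architecture or objective. The autoencoder is assumed to be pretrained on in-distribution data first; afterwards, only the bottleneck (\texttt{latent}) layer is optimized with a two-term loss that preserves reconstruction of positive samples while pushing the reconstruction error of negative (similar-domain) samples above a margin. The hinge form of the negative term, the layer dimensions, and the optimizer settings are all assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class LISAE(nn.Module):
    """Autoencoder with a separate bottleneck ("latent") layer
    that can be shaped per task (dimensions are illustrative)."""
    def __init__(self, in_dim=784, hid_dim=256, lat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.latent = nn.Linear(hid_dim, lat_dim)  # shaped with negatives
        self.decoder = nn.Sequential(nn.Linear(lat_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, in_dim))

    def forward(self, x):
        return self.decoder(self.latent(self.encoder(x)))

def shape_latent_step(model, opt, x_pos, x_neg, margin=1.0):
    """One latent-shaping step: keep reconstructing positives while
    pushing negatives' reconstruction error above `margin` (assumed
    hinge form). `opt` should hold only the latent layer's parameters
    so the rest of the autoencoder stays frozen."""
    pos_err = F.mse_loss(model(x_pos), x_pos)
    neg_err = F.mse_loss(model(x_neg), x_neg)
    loss = pos_err + F.relu(margin - neg_err)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage sketch: pretrain `model` on positive data as a regular
# autoencoder, then shape only the latent layer.
model = LISAE()
opt = torch.optim.Adam(model.latent.parameters(), lr=1e-3)
x_pos = torch.rand(64, 784)  # in-distribution batch
x_neg = torch.rand(64, 784)  # similar-domain negatives (proxy OOD)
shape_latent_step(model, opt, x_pos, x_neg)
\end{verbatim}
At test time, the per-sample reconstruction error of such a model would serve as the anomaly score; in the class-incremental setting described above, one latent layer per class would be shaped this way and the best-reconstructing latent layer would indicate the predicted class.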