Semi-supervised anomaly detection aims to detect anomalies using a model trained only on normal data. With recent advances in deep learning, researchers have designed efficient deep anomaly detection methods. Existing works commonly use neural networks to map the data into a more informative representation and then apply an anomaly detection algorithm. In this paper, we propose DASVDD, a method that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hypersphere on its latent representation. We propose an anomaly score that combines the autoencoder's reconstruction error with the distance of the latent representation from the center of the enclosing hypersphere. Minimizing this anomaly score helps the model learn the underlying distribution of the normal class during training. Including the reconstruction error in the anomaly score ensures that DASVDD does not suffer from the common hypersphere collapse issue, since the model cannot converge to the trivial solution of mapping all inputs to a constant point in the latent space. Experimental evaluations on several benchmark datasets show that the proposed method outperforms commonly used state-of-the-art anomaly detection algorithms while maintaining robust performance across different anomaly classes.
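As a minimal illustration of the combined score described above, the sketch below uses a toy linear encoder/decoder in NumPy and adds the squared distance to a hypersphere center to the reconstruction error. The weights, center, and trade-off parameter `weight` are hypothetical stand-ins, not the paper's learned model or exact weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "autoencoder": a linear encoder/decoder pair (hypothetical stand-in
# for the learned deep autoencoder; DASVDD trains these jointly).
W_enc = rng.normal(size=(4, 8)) * 0.1   # maps 8-d input to 4-d latent
W_dec = rng.normal(size=(8, 4)) * 0.1   # maps latent back to input space
center = np.zeros(4)                    # hypersphere center in latent space

def anomaly_score(x, weight=1.0):
    """Combined score: reconstruction error plus distance to the
    hypersphere center. `weight` is a hypothetical trade-off parameter."""
    z = W_enc @ x                        # latent representation
    x_hat = W_dec @ z                    # reconstruction
    recon_err = np.sum((x - x_hat) ** 2)
    center_dist = np.sum((z - center) ** 2)
    return recon_err + weight * center_dist

x = rng.normal(size=8)
print(anomaly_score(x))
```

Because the score includes the reconstruction term, mapping every input to the constant point `center` would not minimize it, which is the intuition behind avoiding hypersphere collapse.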