Weakly-supervised anomaly detection aims to learn an anomaly detector from a limited amount of labeled data and abundant unlabeled data. Recent works build deep neural networks for anomaly detection by discriminatively mapping normal and abnormal samples to different regions of the feature space or fitting them to different distributions. However, due to the limited number of annotated anomaly samples, directly training networks with a discriminative loss may not be sufficient. To overcome this issue, this paper proposes a novel strategy that transforms the input data into a more meaningful representation for anomaly detection. Specifically, we leverage an autoencoder to encode the input data and use three factors, namely the hidden representation, the reconstruction residual vector, and the reconstruction error, as the new representation of the input. This representation amounts to encoding a test sample by its projection onto the training-data manifold, the direction from the sample to that projection, and its distance to that projection. In addition to this encoding, we also propose a novel network architecture that seamlessly incorporates these three factors. Our extensive experiments clearly demonstrate the benefits of the proposed strategy through its superior performance over competing methods.
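The three factors described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual model: it uses an untrained linear autoencoder with random weights (`W_enc`, `W_dec` are hypothetical stand-ins for learned parameters) purely to show how the hidden representation, the reconstruction residual vector, and the reconstruction error are computed and concatenated into the new representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear autoencoder; a real model would learn these weights.
d_in, d_hidden = 8, 3
W_enc = rng.normal(size=(d_hidden, d_in))
W_dec = rng.normal(size=(d_in, d_hidden))

def encode(x):
    # Factor 1: hidden representation z (the sample's coordinates on the manifold)
    return W_enc @ x

def decode(z):
    # Projection x_hat of the sample back onto the (learned) data manifold
    return W_dec @ z

def anomaly_representation(x):
    z = encode(x)
    x_hat = decode(z)
    residual = x - x_hat              # Factor 2: reconstruction residual vector
    error = np.linalg.norm(residual)  # Factor 3: reconstruction error (scalar)
    # Concatenate the three factors into the new representation
    return np.concatenate([z, residual, [error]])

x = rng.normal(size=d_in)
rep = anomaly_representation(x)
print(rep.shape)  # d_hidden + d_in + 1 components
```

A downstream detector would then be trained on `rep` instead of the raw input, so that both the position on the manifold and the deviation from it inform the decision.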