As a kind of generative self-supervised learning method, generative adversarial networks (GANs) have been widely studied in the field of anomaly detection. However, the representation learning ability of the generator is limited, since it pays too much attention to pixel-level details, and the generator struggles to learn abstract semantic representations from label-prediction pretext tasks as effectively as the discriminator. To improve the representation learning ability of the generator, we propose a self-supervised learning framework that combines generative and discriminative methods. The generator no longer learns representations from a reconstruction error but from the guidance of the discriminator, and can thus benefit from pretext tasks designed for discriminative methods. Our discriminative-generative representation learning method achieves performance close to that of discriminative methods while offering a large speed advantage. Applied to one-class anomaly detection, our method significantly outperforms several state-of-the-art approaches on multiple benchmark data sets, improving the top-performing GAN-based baseline by 6% on CIFAR-10 and 2% on MVTAD.
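The core idea above, replacing the generator's pixel-level reconstruction error with a loss computed in the discriminator's feature space, can be illustrated with a minimal sketch. This is not the paper's architecture: the linear generator/discriminator, the shapes, and the feature-matching loss are hypothetical stand-ins chosen only to contrast the two training signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images" and a tiny linear generator/discriminator pair (hypothetical).
X = rng.normal(size=(8, 16))           # batch of real samples
W_g = rng.normal(size=(4, 16)) * 0.1   # generator: latent (4) -> sample (16)
W_d = rng.normal(size=(16, 8)) * 0.1   # discriminator feature map: sample -> feature (8)

z = rng.normal(size=(8, 4))
fake = z @ W_g                          # generated samples

# Pixel-level reconstruction loss: the signal the abstract argues
# over-emphasises low-level detail.
recon_loss = np.mean((fake - X) ** 2)

# Discriminator-guided loss: match discriminator *features* of fake and
# real samples instead of raw pixels, so the generator learns from the
# discriminator's more abstract representation.
feat_real = X @ W_d
feat_fake = fake @ W_d
guided_loss = np.mean((feat_fake - feat_real) ** 2)

print(float(recon_loss), float(guided_loss))
```

In an actual GAN training loop, the generator's gradient would flow through `W_d` while the discriminator is trained on its own pretext task; the sketch only contrasts where each loss is measured.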