In this paper, a robust classification-autoencoder (CAE) is proposed, which has a strong ability to recognize outliers and to defend against adversarial attacks. The main idea is to turn the autoencoder from an unsupervised learning model into a classifier: the encoder compresses samples with different labels into disjoint compression spaces, and the decoder recovers samples from their compression spaces. The encoder thus serves both as a compressed-feature learner and as a classifier, while the decoder decides whether the classification given by the encoder is correct by comparing the input sample with the output. Since adversarial samples seem inevitable in the current DNN framework, a list classifier to defend against adversaries is introduced based on the CAE, which outputs several labels together with the corresponding samples recovered by the CAE. Extensive experimental results show that the CAE achieves state-of-the-art outlier recognition by finding almost all outliers, and that the list classifier gives nearly lossless classification in the sense that the output list contains the correct label for almost all adversarial samples while the size of the output list remains reasonably small.
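The classification and outlier-detection mechanism described above can be sketched in a minimal toy form. This is an illustrative assumption-laden sketch, not the paper's implementation: the trained encoder and decoder are replaced by identity maps, the per-label "compression spaces" by fixed latent centers with a fixed radius, and the reconstruction check by a simple distance tolerance. All names (`CENTERS`, `RADIUS`, `REC_TOL`, `list_classify`) are hypothetical.

```python
import math

# Toy sketch of the CAE idea: the encoder maps each labelled sample into a
# label-specific latent region (its "compression space"), the decoder
# reconstructs the input, and the list classifier returns every label whose
# region contains the code AND whose reconstruction matches the input.
# An empty list flags the input as an outlier.

CENTERS = {0: (-2.0, 0.0), 1: (2.0, 0.0)}  # hypothetical latent centers, one per label
RADIUS = 1.5                               # region size (assumption)
REC_TOL = 0.5                              # reconstruction tolerance (assumption)

def encode(x):
    # Identity map stands in for a trained encoder network.
    return x

def decode(z):
    # Identity map stands in for a trained decoder network.
    return z

def list_classify(x):
    """Return [(label, reconstruction), ...]; an empty list means outlier."""
    z = encode(x)
    out = []
    for label, center in CENTERS.items():
        if math.dist(z, center) <= RADIUS:        # code lies in this label's region
            x_rec = decode(z)
            if math.dist(x, x_rec) <= REC_TOL:    # decoder confirms the classification
                out.append((label, x_rec))
    return out

print([lbl for lbl, _ in list_classify((-2.2, 0.1))])  # → [0]
print(list_classify((10.0, 10.0)))                     # → [] (outlier)
```

A sample near a label's center is accepted with that single label; a sample far from every center is rejected as an outlier. In the paper's full scheme, an ambiguous (e.g. adversarial) input could satisfy the reconstruction test for several labels, which is exactly what the list output captures.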