In deep learning, a discriminator trained on in-distribution (ID) samples may make high-confidence predictions on out-of-distribution (OOD) samples. This poses a significant problem for robust, trustworthy, and safe deep learning. The issue arises primarily because only limited ID samples are observable when training the discriminator and OOD samples are unavailable. We propose a general approach for \textit{fine-tuning discriminators by implicit generators} (FIG). FIG is grounded in information theory and is applicable to standard discriminators without retraining. It improves a standard discriminator's ability to distinguish ID from OOD samples by generating and penalizing its specific OOD samples. Based on Shannon entropy, an energy-based implicit generator is inferred from the discriminator without extra training costs. A Langevin dynamics sampler then draws specific OOD samples from the implicit generator. Finally, we design a regularizer that fits the design principle of the implicit generator and induces high entropy on the generated OOD samples. Experiments on different networks and datasets demonstrate that FIG achieves state-of-the-art OOD detection performance.
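To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the general recipe the abstract describes: an energy derived from a classifier's logits defines the implicit generator, Langevin dynamics draws samples from that energy, and an entropy regularizer penalizes confident predictions on the drawn samples. It assumes a PyTorch classifier \texttt{f}; the function names and hyperparameters (\texttt{sgld\_sample}, \texttt{fig\_regularizer}, step sizes) are illustrative only.

\begin{verbatim}
# Minimal sketch (assumptions: PyTorch classifier f; names/hyperparameters
# are illustrative, not the paper's exact procedure).
import torch
import torch.nn.functional as F

def energy(f, x):
    # Energy of the implicit generator induced by the logits:
    # E(x) = -logsumexp_y f(x)[y].
    return -torch.logsumexp(f(x), dim=1)

def sgld_sample(f, x_init, steps=20, step_size=1.0, noise_std=0.01):
    # Langevin dynamics: descend the energy with added Gaussian noise
    # to draw samples from the implicit generator.
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(f, x).sum(), x)[0]
        x = x - step_size * grad + noise_std * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def fig_regularizer(f, x_gen):
    # Induce high Shannon entropy on generated samples: minimizing this
    # term (added to the usual cross-entropy loss) maximizes the
    # predictive entropy, discouraging confident OOD predictions.
    log_p = F.log_softmax(f(x_gen), dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1)
    return -entropy.mean()
\end{verbatim}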