Classification and Out-of-Distribution (OoD) detection in the few-shot setting remain challenging due to the rarity and limited number of available samples, and because of adversarial attacks. Accomplishing these aims is important for safety-, security-, and defence-critical systems. OoD detection is also difficult because deep neural network classifiers assign high confidence to OoD samples far from the training data. To address these limitations, we propose the Few-shot ROBust (FROB) model for classification and few-shot OoD detection, designed for improved robustness and reliable confidence prediction. We generate the support boundary of the normal class distribution and combine it with few-shot Outlier Exposure (OE). We propose a self-supervised few-shot confidence-boundary methodology based on generative and discriminative models. The contribution of FROB is the combination of the boundary, generated in a self-supervised manner, with the imposition of low confidence at this learned boundary. FROB implicitly generates strong adversarial samples on the boundary and forces the classifier to be less confident on OoD samples, including those on our boundary. FROB generalizes to unseen OoD data and is applicable to unknown, in-the-wild test sets that do not correlate with the training datasets. To improve robustness, FROB redesigns OE to work even in the zero-shot regime. By including our boundary, FROB lowers the threshold linked to the model's few-shot robustness and keeps OoD performance approximately independent of the number of few-shot samples. The few-shot robustness evaluation of FROB on different datasets and on One-Class Classification (OCC) data shows that FROB achieves competitive performance and outperforms benchmarks in robustness to the outlier few-shot sample population and its variability.
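The core training signal described above, standard classification loss on in-distribution data plus a low-confidence penalty on outlier/boundary samples, can be sketched as a simple Outlier-Exposure-style objective. This is a minimal illustration, not FROB's actual loss: the function name `oe_loss`, the weight `lam`, and the use of a uniform-distribution target on outliers are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Cross-entropy on in-distribution samples plus an outlier-exposure
    term that pushes outlier/boundary predictions toward the uniform
    distribution, i.e. toward low confidence. `lam` is a hypothetical
    weighting hyperparameter."""
    p_in = softmax(logits_in)
    ce = -np.log(p_in[np.arange(len(labels_in)), labels_in]).mean()
    # Cross-entropy of outlier predictions against the uniform target:
    # minimized when the classifier is maximally uncertain on outliers.
    oe = -np.log(softmax(logits_out)).mean()
    return ce + lam * oe
```

Minimizing the second term drives the classifier's softmax output on outlier and boundary samples toward uniform, which is the "low confidence at the learned boundary" behaviour the abstract describes.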