We tackle the Few-Shot Open-Set Recognition (FSOSR) problem, i.e., classifying instances among a set of classes for which we only have a few labeled samples, while simultaneously detecting instances that do not belong to any known class. We explore the popular transductive setting, which leverages the unlabelled query instances at inference. Motivated by the observation that existing transductive methods perform poorly in open-set scenarios, we propose a generalization of the maximum likelihood principle, in which latent scores down-weighting the influence of potential outliers are introduced alongside the usual parametric model. Our formulation embeds supervision constraints from the support set and additional penalties discouraging overconfident predictions on the query set. We proceed with a block-coordinate descent, in which the latent scores and the parametric model are co-optimized alternately, so that each benefits from the other. We call the resulting formulation \textit{Open-Set Likelihood Optimization} (OSLO). OSLO is interpretable and fully modular; it can be seamlessly applied on top of any pre-trained model. Through extensive experiments, we show that our method surpasses existing inductive and transductive methods on both aspects of open-set recognition, namely inlier classification and outlier detection.
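As a rough illustrative sketch (not the paper's exact objective), the generalized likelihood described above can be written as follows, where $\boldsymbol{\theta}$ denotes the parametric model, $\xi_j \in [0,1]$ the latent score down-weighting query sample $x_j$, $\mathbb{S}$ and $\mathbb{Q}$ the support and query sets, $\mathcal{H}$ the entropy of the class posterior, and $\lambda$ a penalty weight; this notation is our own assumption, introduced only to make the structure of the objective concrete:
\[
\max_{\boldsymbol{\theta},\, \boldsymbol{\xi}} \;\;
\underbrace{\sum_{i \in \mathbb{S}} \log p_{\boldsymbol{\theta}}(x_i, y_i)}_{\text{supervision on the support set}}
\;+\;
\underbrace{\sum_{j \in \mathbb{Q}} \xi_j \log p_{\boldsymbol{\theta}}(x_j)}_{\text{outlier-down-weighted query likelihood}}
\;+\;
\underbrace{\lambda \sum_{j \in \mathbb{Q}} \mathcal{H}\!\big(p_{\boldsymbol{\theta}}(\cdot \mid x_j)\big)}_{\text{penalty against overconfident query predictions}},
\qquad \xi_j \in [0,1].
\]
Under this reading, a block-coordinate scheme would alternate between updating the latent scores $\boldsymbol{\xi}$ with $\boldsymbol{\theta}$ fixed and updating $\boldsymbol{\theta}$ with $\boldsymbol{\xi}$ fixed, with $1-\xi_j$ serving as an outlier score.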