Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on ambiguously annotated samples, each of which is annotated with a set of candidate labels, only one of which is valid. A basic premise of existing PLL solutions is that sufficient partial-label (PL) samples are available for training. However, when dealing with new tasks, it is more common than not to have only a few PL samples at hand. Furthermore, existing few-shot learning algorithms assume precise labels for the support set; irrelevant labels may therefore seriously mislead the meta-learner and compromise performance. How to enable PLL under a few-shot learning setting is an important problem, but it has not yet been well studied. In this paper, we introduce an approach called FsPLL (Few-shot PLL). FsPLL first performs adaptive distance metric learning with an embedding network, rectifying prototypes on previously encountered tasks. Next, it computes the prototype of each class of a new task in the embedding space. An unseen example can then be classified via its distance to each prototype. Experimental results on widely used few-shot datasets (Omniglot and miniImageNet) demonstrate that FsPLL achieves superior performance over state-of-the-art methods across different settings, and that it needs fewer samples to adapt quickly to new tasks.
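To make the prototype-based classification step concrete, the following is a minimal sketch of nearest-prototype prediction in an embedding space. It is not the authors' FsPLL implementation: the embedding network, prototype rectification, and PL-specific training are all omitted, and the function name and numpy-based interface are assumptions for illustration only.

```python
import numpy as np

def nearest_prototype_classify(support_emb, support_labels, query_emb):
    """Classify each query embedding by its nearest class prototype.

    support_emb:    (n_support, d) embeddings of labeled support examples
    support_labels: (n_support,) integer class labels
    query_emb:      (n_query, d) embeddings of unseen examples
    Returns:        (n_query,) predicted class labels
    """
    classes = np.unique(support_labels)
    # Prototype of a class = mean embedding of its support examples.
    prototypes = np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(
        query_emb[:, None, :] - prototypes[None, :, :], axis=2
    )
    # Assign each query to the class whose prototype is closest.
    return classes[dists.argmin(axis=1)]
```

In a few-shot episode, the support embeddings would come from the learned embedding network; here the sketch only shows the distance-based decision rule applied on top of it.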