Positive-Unlabeled (PU) learning aims to learn a model from a small number of labeled positive samples and abundant unlabeled samples. Compared with classical binary classification, PU learning is considerably more challenging because many data instances are incompletely annotated: only a subset of the most confident positive samples is labeled, and the evidence is insufficient to categorize the remaining samples, many of which may also be positive. Research on this topic is particularly valuable for real-world tasks with very expensive labeling costs. For example, recognition tasks in disease diagnosis, recommendation systems, and satellite image recognition may have only a few positive samples that experts can annotate. Existing methods largely overlook the intrinsic hardness of some unlabeled data, which can lead to sub-optimal performance: the model fits the easy noisy data while failing to sufficiently exploit the hard data. In this paper, we focus on improving the commonly used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness among samples in the dataset and the appropriate learning strategies for easy and hard data. Accordingly, we propose first splitting the unlabeled dataset with an early-stop strategy: samples that receive inconsistent predictions from the temporary model and the base model are treated as hard samples. The model then applies a noise-tolerant Jensen-Shannon divergence loss to the easy data, and a dual-source consistency regularization to the hard data, comprising a cross-consistency between the student and base models on low-level features and a self-consistency on high-level features and predictions, respectively.
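As a rough illustration of two ingredients of the pipeline described above, the sketch below implements (a) a Jensen-Shannon divergence loss between predicted and (possibly noisy) label distributions, and (b) the disagreement rule that flags unlabeled samples as hard when the early-stopped temporary model and the base model predict different labels. This is a minimal NumPy sketch under our own assumptions (function names, array shapes, and the `eps` smoothing constant are hypothetical, not from the paper); the actual method operates on network logits and feature maps during training.

```python
import numpy as np

def js_divergence_loss(probs, targets, eps=1e-12):
    """Noise-tolerant Jensen-Shannon divergence loss (per sample).

    probs:   (N, C) predicted class probabilities
    targets: (N, C) one-hot or soft label distributions
    Returns a length-N array; symmetric and bounded by log(2),
    which is what makes it more tolerant to label noise than
    plain cross-entropy.
    """
    m = 0.5 * (probs + targets)  # mixture distribution

    def kl(p, q):
        # KL(p || q) with eps-smoothing to avoid log(0)
        return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)

    return 0.5 * (kl(probs, m) + kl(targets, m))

def split_easy_hard(base_preds, temp_preds):
    """Split unlabeled samples by prediction agreement.

    base_preds: (N,) hard labels from the base model
    temp_preds: (N,) hard labels from the early-stopped temporary model
    Samples where the two models disagree are flagged as hard.
    """
    hard = base_preds != temp_preds
    return ~hard, hard  # boolean masks: (easy, hard)
```

Easy samples would then be trained with `js_divergence_loss`, while hard samples would receive the dual-source consistency regularization instead of a direct classification loss.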