In supervised learning, obtaining a large set of fully-labeled training data is expensive. We show that we do not always need full label information on every single training example to train a competent classifier. Specifically, inspired by the principle of sufficiency in statistics, we present a statistic (a summary) of the fully-labeled training set that captures almost all of the relevant information for classification, yet is easier to obtain directly. We call this statistic "sufficiently-labeled data" and prove its sufficiency and efficiency for finding the optimal hidden representations, on which competent classifier heads can be trained using as few as a single randomly chosen fully-labeled example per class. Sufficiently-labeled data can be obtained from annotators directly, without first collecting fully-labeled data, and we prove that it is easier to obtain than fully-labeled data. Furthermore, sufficiently-labeled data is naturally more secure, since it stores relative rather than absolute information. Extensive experimental results are provided to support our theory.
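To make the two-stage pipeline described above concrete, the following is a minimal, illustrative sketch. It assumes (as one plausible instantiation, not the paper's exact formulation) that sufficiently-labeled data takes the form of example pairs annotated only with a same-class/different-class bit; all names, network sizes, and the contrastive-style surrogate objective are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical encoder producing hidden representations; architecture is illustrative only.
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))

def pairwise_loss(z1, z2, same, margin=1.0):
    # Pull same-class pairs together, push different-class pairs apart.
    # A contrastive-style surrogate; the paper's actual objective may differ.
    d = (z1 - z2).pow(2).sum(dim=1)
    return torch.where(same.bool(), d,
                       torch.clamp(margin - d.sqrt(), min=0).pow(2)).mean()

# Stage 1: learn the hidden representation from "sufficiently-labeled" pairs
# (x_i, x_j, same_ij), i.e., relative label information only.
x1, x2 = torch.randn(256, 20), torch.randn(256, 20)   # toy pair batch
same = torch.randint(0, 2, (256,)).float()            # 1 = same class, 0 = different
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    pairwise_loss(encoder(x1), encoder(x2), same).backward()
    opt.step()

# Stage 2: fit a classifier head on the frozen representation using as few as
# one fully-labeled example per class.
head = nn.Linear(16, 3)
x_few, y_few = torch.randn(3, 20), torch.tensor([0, 1, 2])  # one example per class
opt_h = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(200):
    opt_h.zero_grad()
    logits = head(encoder(x_few).detach())
    nn.functional.cross_entropy(logits, y_few).backward()
    opt_h.step()
```

The point of the sketch is only to show why such a summary can be cheaper and more private to collect: annotators answer relative questions ("are these two examples of the same class?") rather than assigning absolute labels, and absolute labels are needed only for the handful of examples used to fit the head.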