The labels used to train machine learning (ML) models are of paramount importance. Datasets for ML classification tasks typically contain hard labels, yet learning with soft labels has been shown to improve model generalization, robustness, and calibration. Earlier work found success in forming soft labels from multiple annotators' hard labels; however, this approach may not converge to the best labels and requires many annotators, which can be expensive and inefficient. We focus on efficiently eliciting soft labels from individual annotators. We collect and release a dataset of soft labels for CIFAR-10 via a crowdsourcing study ($N=242$). We demonstrate that learning with our labels achieves comparable model performance to prior approaches while requiring far fewer annotators. Our elicitation methodology therefore shows promise for enabling practitioners to enjoy the benefits of improved model performance and reliability with fewer annotators, and serves as a guide for future dataset curators on the benefits of leveraging richer information, such as categorical uncertainty, from individual annotators.
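Concretely, learning with soft labels usually means minimizing cross-entropy against a full probability distribution over classes rather than a one-hot target; a hard label is just the degenerate one-hot case. A minimal sketch (the function name and the 3-class toy numbers are illustrative, not from the study):

```python
import numpy as np

def soft_label_cross_entropy(probs, soft_labels, eps=1e-12):
    """Mean cross-entropy between annotator-provided soft labels
    (rows summing to 1) and the model's predicted probabilities.
    With one-hot soft_labels this reduces to standard hard-label loss."""
    probs = np.clip(probs, eps, 1.0)  # guard against log(0)
    return float(-np.sum(soft_labels * np.log(probs), axis=-1).mean())

# Hypothetical 3-class example: the annotator spreads probability mass
# over two plausible classes instead of committing to a single hard label.
model_probs = np.array([[0.7, 0.2, 0.1]])
hard_label = np.array([[1.0, 0.0, 0.0]])   # one-hot (hard) target
soft_label = np.array([[0.8, 0.2, 0.0]])   # elicited soft target

loss_hard = soft_label_cross_entropy(model_probs, hard_label)  # -log(0.7)
loss_soft = soft_label_cross_entropy(model_probs, soft_label)
```

Because the same loss accepts both target types, a single annotator's elicited distribution can replace the aggregate of many hard labels as the training signal.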