Many active learning and search approaches are intractable in large-scale industrial settings with billions of unlabeled examples. Existing approaches search globally for the optimal examples to label, scaling linearly or even quadratically with the size of the unlabeled data. In this paper, we improve the computational efficiency of active learning and search methods by restricting the candidate pool for labeling to the nearest neighbors of the currently labeled set, instead of scanning over all of the unlabeled data. We evaluate several selection strategies in this setting on three large-scale computer vision datasets: ImageNet, OpenImages, and a de-identified and aggregated dataset of 10 billion images provided by a large internet company. Our approach achieves mean average precision and recall similar to the traditional global approach while reducing the computational cost of selection by up to three orders of magnitude, thus enabling web-scale active learning.
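The neighbor-restricted selection described above can be sketched as follows. This is a minimal brute-force illustration, not the paper's implementation: at the scale of billions of examples, the neighbor lookup would use an approximate nearest-neighbor index over precomputed embeddings, and the uncertainty scores stand in for whatever acquisition function the selection strategy uses. All function names here are hypothetical.

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def neighbor_candidates(labeled, unlabeled, k):
    """Restrict the candidate pool to the k nearest unlabeled
    neighbors of each labeled example (union over the labeled set).

    Brute-force for clarity; a web-scale system would replace this
    loop with an approximate nearest-neighbor index lookup.
    """
    candidates = set()
    for lab in labeled:
        ranked = sorted(range(len(unlabeled)),
                        key=lambda i: euclidean(lab, unlabeled[i]))
        candidates.update(ranked[:k])
    return candidates

def select_batch(candidates, scores, budget):
    """Pick the top-`budget` candidates by acquisition score.

    Only the restricted candidate pool is scored, not the full
    unlabeled set -- this is where the cost reduction comes from.
    """
    return sorted(candidates, key=lambda i: scores[i], reverse=True)[:budget]

# Toy example: one labeled point at the origin, four unlabeled points.
labeled = [[0.0, 0.0]]
unlabeled = [[0.1, 0.0], [5.0, 5.0], [0.2, 0.0], [9.0, 9.0]]
cands = neighbor_candidates(labeled, unlabeled, k=2)
scores = [0.9, 0.1, 0.5, 0.3]  # hypothetical uncertainty scores
batch = select_batch(cands, scores, budget=1)
```

Each active-learning round would then label the selected batch, add it to the labeled set, and recompute the neighbor-restricted pool, so the candidate set grows with the labeled set rather than with the unlabeled data.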