Quality control is a crux of crowdsourcing. While most means for quality control are organizational and imply worker selection, golden tasks, and post-acceptance, computational quality control techniques allow parameterizing the whole crowdsourcing process of workers, tasks, and labels, inferring and revealing the relationships between them. In this paper, we demonstrate Crowd-Kit, a general-purpose computational quality control toolkit for crowdsourcing. It provides efficient Python implementations of computational quality control algorithms for crowdsourcing, including uncertainty measures and crowd consensus methods. We focus on aggregation methods for all the major annotation tasks, from categorical annotation, in which the latent label assumption is met, to more complex tasks like image and sequence aggregation. We perform an extensive evaluation of our toolkit on several datasets of different natures, enabling the benchmarking of computational quality control methods in a uniform, systematic, and reproducible way using the same codebase. We release our code and data under an open-source license at https://github.com/Toloka/crowd-kit.
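To make the toolkit concrete, below is a minimal usage sketch, not taken from the paper, showing categorical label aggregation with the classical Dawid-Skene consensus model. It assumes a recent Crowd-Kit release in which annotation DataFrames use the column names task, worker, and label; the toy tasks, workers, and labels are invented for illustration.

```python
import pandas as pd
from crowdkit.aggregation import DawidSkene

# Toy crowdsourced annotations: three workers label two tasks.
# Column names follow Crowd-Kit's convention: task, worker, label.
df = pd.DataFrame(
    [
        ("t1", "w1", "cat"),
        ("t1", "w2", "cat"),
        ("t1", "w3", "dog"),
        ("t2", "w1", "dog"),
        ("t2", "w2", "dog"),
        ("t2", "w3", "dog"),
    ],
    columns=["task", "worker", "label"],
)

# EM-based Dawid-Skene aggregation; fit_predict returns a pandas
# Series mapping each task to its inferred consensus label.
aggregated = DawidSkene(n_iter=100).fit_predict(df)
print(aggregated)  # expected: t1 -> cat, t2 -> dog
```

Other aggregation methods in the toolkit follow the same fit/predict interface, which is what allows benchmarking them on shared datasets from a single codebase.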