Recent \emph{Weak Supervision (WS)} approaches have had widespread success in easing the bottleneck of labeling training data for machine learning by synthesizing labels from multiple potentially noisy supervision sources. However, proper measurement and analysis of these approaches remain a challenge. First, datasets used in existing works are often private and/or custom, limiting standardization. Second, WS datasets with the same name and base data often vary in terms of the labels and weak supervision sources used, a significant ``hidden'' source of evaluation variance. Finally, WS studies often diverge in terms of the evaluation protocol and ablations used. To address these problems, we introduce a benchmark platform, \benchmark, for a thorough and standardized evaluation of WS approaches. It consists of 22 varied real-world datasets for classification and sequence tagging; a range of real, synthetic, and procedurally-generated weak supervision sources; and a modular, extensible framework for WS evaluation, including implementations for popular WS methods. We use \benchmark to conduct extensive comparisons over more than 100 method variants to demonstrate its efficacy as a benchmark platform. The code is available at \url{https://github.com/JieyuZ2/wrench}.