The paradigms of data programming, which uses weak supervision in the form of rules/labelling functions, and semi-supervised learning, which augments a small amount of labelled data with a large unlabelled dataset, have shown great promise in several text classification scenarios. In this work, we argue that by not using any labelled data, data programming based approaches can yield sub-optimal performance, particularly when the labelling functions are noisy. The first contribution of this work is \model, a semi-supervised data programming framework that learns a \emph{joint model} which effectively combines the rules/labelling functions with semi-supervised loss functions on the feature space. Next, we also study \modelss, which additionally performs subset selection on top of the joint semi-supervised data programming objective and \emph{selects} a set of examples to serve as the labelled set for \model. The goal of \modelss is to ensure that the labelled data \emph{complement} the labelling functions, thereby benefiting both from data programming and from appropriately selected data for human labelling. We demonstrate that by effectively combining the semi-supervision, data-programming, and subset-selection paradigms, we significantly outperform the current state-of-the-art on seven publicly available datasets.\footnote{The source code is available at \url{https://github.com/ayushbits/Semi-Supervised-LFs-Subset-Selection}}
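To make the shape of such a joint objective concrete, the following is an illustrative sketch only: the specific loss terms, notation ($f_\theta$ for the feature-based classifier, $\lambda$ for the labelling-function outputs, $P_\phi$ for their aggregation model, $\mathcal{L}$/$\mathcal{U}$ for the labelled/unlabelled sets), and their unweighted combination are assumptions for exposition, not the exact objective of \model.

\begin{equation*}
% Illustrative only: terms and weighting are assumed, not taken from the paper.
\min_{\theta,\,\phi}\;
\underbrace{\sum_{i \in \mathcal{L}} \ell\big(f_\theta(x_i),\, y_i\big)}_{\text{supervised loss on labelled set}}
\;+\;
\underbrace{\sum_{j \in \mathcal{U}} \ell_u\big(f_\theta(x_j)\big)}_{\text{semi-supervised loss on unlabelled set}}
\;-\;
\underbrace{\sum_{j \in \mathcal{U}} \log P_\phi\big(\lambda(x_j)\big)}_{\text{labelling-function aggregation model}}
\end{equation*}

The point of a joint formulation of this kind is that the classifier parameters $\theta$ and the labelling-function aggregation parameters $\phi$ are trained together, so noisy rules and scarce labels can correct one another rather than being combined post hoc.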