Learning under one-sided feedback (i.e., where we only observe the labels of examples we predicted positively on) is a fundamental problem in machine learning, with applications including lending and recommendation systems. Despite this, there has been surprisingly little progress on mitigating the effects of the sampling bias that arises. We focus on generalized linear models and show that, without adjusting for this sampling bias, the model may converge suboptimally, or even fail to converge to the optimal solution altogether. We propose an adaptive approach that comes with theoretical guarantees and show that it empirically outperforms several existing methods. Our method leverages variance estimation techniques to learn efficiently under uncertainty, offering a more principled alternative to existing approaches.
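To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm): a logistic-regression GLM observes labels only for examples it accepts, and an assumed ridge-style variance bonus (the `alpha` hyperparameter and matrix `A` below are illustrative choices, not from the paper) widens acceptance for uncertain examples to counteract the sampling bias.

```python
# Minimal sketch of learning under one-sided feedback with a logistic GLM.
# All hyperparameters and the acceptance rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 1000
theta_true = rng.normal(size=d)          # unknown ground-truth parameter

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(d)                      # current GLM estimate
A = np.eye(d)                            # regularized design matrix (for variance estimates)
alpha = 1.0                              # assumed exploration strength
X_seen, y_seen = [], []                  # only accepted examples receive labels

for t in range(T):
    x = rng.normal(size=d)
    # Variance-style uncertainty of the score x^T theta.
    bonus = alpha * np.sqrt(x @ np.linalg.solve(A, x))
    if x @ theta + bonus >= 0.0:         # optimistic acceptance rule
        # The label is revealed only because we predicted positively.
        y = rng.binomial(1, sigmoid(x @ theta_true))
        X_seen.append(x)
        y_seen.append(y)
        A += np.outer(x, x)
        # Refit the GLM on all labeled (accepted) examples with a few
        # gradient steps on the logistic loss.
        Xs, ys = np.array(X_seen), np.array(y_seen)
        for _ in range(10):
            grad = Xs.T @ (sigmoid(Xs @ theta) - ys) / len(ys)
            theta -= 0.5 * grad

print("parameter estimation error:", np.linalg.norm(theta - theta_true))
```

Without the bonus term, examples scored negatively by an early, poorly estimated model are never labeled, which is the feedback loop the abstract refers to as sampling bias.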