Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., ``will it rain tomorrow?''). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action. We introduce the notion of an $({\mathcal{L}},\mathcal{C})$-omnipredictor, which can be used to optimize any loss in a family ${\mathcal{L}}$. Once the loss function is set, the outputs of the predictor can be post-processed (via a simple, univariate, data-independent transformation of individual predictions) to do well compared with any hypothesis from the class $\mathcal{C}$. The post-processing is essentially what one would perform if the outputs of the predictor were true probabilities of the uncertain events. In a sense, omnipredictors extract all the predictive power from the class $\mathcal{C}$, irrespective of the loss function in $\mathcal{L}$. We show that such ``loss-oblivious'' learning is feasible through a connection to multicalibration, a notion introduced in the context of algorithmic fairness. In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results. Finally, we transfer our insights back to the context of algorithmic fairness by providing omnipredictors for multi-group loss minimization.
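As a minimal sketch of the post-processing step described above, the following Python snippet treats a prediction $p$ as the probability of the event and selects the action minimizing expected loss $p\,\ell(t,1) + (1-p)\,\ell(t,0)$. The function names `optimal_action`, `squared_loss`, and `hinge_like_loss` are illustrative choices, not constructs from the paper; the point is only that the same predictor output can be post-processed differently for each loss in the family.

```python
import numpy as np

def optimal_action(p, loss, actions):
    """Return the action t minimizing expected loss when the outcome is Bernoulli(p)."""
    expected = [p * loss(t, 1) + (1 - p) * loss(t, 0) for t in actions]
    return actions[int(np.argmin(expected))]

def squared_loss(t, y):
    return (t - y) ** 2

def hinge_like_loss(t, y):
    # An asymmetric illustrative loss, mapping outcome/action to {-1, +1} scale.
    return max(0.0, 1.0 - (2 * y - 1) * (2 * t - 1))

# The same prediction p = 0.7 is post-processed differently per loss:
grid = np.linspace(0.0, 1.0, 101)
print(optimal_action(0.7, squared_loss, grid))     # 0.7: squared loss is minimized at t = p
print(optimal_action(0.7, hinge_like_loss, grid))  # 1.0: this loss pushes the action to an extreme
```

The transformation is univariate and data-independent in exactly the sense of the abstract: it consults only the individual prediction and the chosen loss, not the training data.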