The notion of omnipredictors (Gopalan, Kalai, Reingold, Sharan and Wieder, ITCS 2021) suggested a new paradigm for loss minimization. Rather than learning a predictor based on a known loss function, an omnipredictor can easily be post-processed to minimize any one of a rich family of loss functions, with loss comparable to that of the best hypothesis in a class $C$. It has been shown that such omnipredictors exist and are implied (for all convex and Lipschitz loss functions) by the notion of multicalibration from the algorithmic fairness literature. Nevertheless, it is often the case that the action selected must obey additional constraints (such as capacity or parity constraints). The original notion of omnipredictors does not, by itself, apply in this well-motivated and heavily studied context of constrained loss minimization. In this paper, we introduce omnipredictors for constrained optimization and study their complexity and implications. The notion we introduce allows the learner to be unaware of the loss function that will later be assigned, as well as of the constraints that will later be imposed, as long as the subpopulations used to define these constraints are known. We show how to obtain omnipredictors for constrained optimization problems, relying on appropriate variants of multicalibration. For some interesting constraints and general loss functions, and for general constraints and some interesting loss functions, we show that omnipredictors are implied by a variant of multicalibration that is similar in complexity to standard multicalibration. We demonstrate that in the general case standard multicalibration is insufficient, and show that omnipredictors are then implied by multicalibration with respect to a class containing all the level sets of hypotheses in $C$. We also investigate the implications when the constraints are group fairness notions.
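To fix ideas, here is a sketch of the standard (unconstrained) guarantees, with the notation ($\tilde{p}$, $\mathcal{L}$, $\varepsilon$, $\alpha$, and the distribution over $(x,y)$) assumed here for illustration rather than taken from the paper: a predictor $\tilde{p} : \mathcal{X} \to [0,1]$ is an $(\mathcal{L}, C)$-omnipredictor if every loss $\ell \in \mathcal{L}$ admits a post-processing $k_\ell$ such that
\[
  \mathop{\mathbb{E}}_{(x,y)}\bigl[\ell\bigl(y,\, k_\ell(\tilde{p}(x))\bigr)\bigr]
  \;\le\;
  \min_{c \in C}\, \mathop{\mathbb{E}}_{(x,y)}\bigl[\ell\bigl(y,\, c(x)\bigr)\bigr] \;+\; \varepsilon,
\]
and, in one common formulation, $\tilde{p}$ is $\alpha$-multicalibrated with respect to $C$ if
\[
  \Bigl|\, \mathop{\mathbb{E}}\bigl[\, c(x)\,\bigl(y - \tilde{p}(x)\bigr) \;\bigm|\; \tilde{p}(x) = v \,\bigr] \,\Bigr| \;\le\; \alpha
  \qquad \text{for all } c \in C \text{ and all values } v \text{ of } \tilde{p}.
\]
The precise definitions, and the constrained variants introduced in this work, appear in the body of the paper.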