We consider a fair representation learning perspective, where the optimal predictors trained on top of the data representation are guaranteed to be invariant across different sub-groups. Specifically, we formulate this intuition as a bi-level optimization, where the representation is learned in the outer loop and the invariant optimal group predictors are updated in the inner loop. Moreover, the proposed bi-level objective is shown to fulfill the sufficiency rule, which is desirable in various practical scenarios but has rarely been studied in fair learning. In addition, to avoid the high computational and memory cost of differentiating through the inner-loop optimization, we propose an implicit path alignment algorithm, which relies only on the solution of the inner optimization and implicit differentiation, rather than on the exact optimization path. We further analyze the error gap of the implicit approach and empirically validate the proposed method in both classification and regression settings. Experimental results show a consistently better trade-off between prediction performance and fairness measures.
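To illustrate the core computational idea, the following is a minimal toy sketch (not the paper's algorithm) of implicit differentiation for a bi-level problem: the inner problem is solved to near-optimality, and the hypergradient with respect to the outer variable is then recovered from the implicit function theorem instead of backpropagating through the unrolled inner optimization path. The objectives `inner_loss`, `outer_loss`, and all variable names are hypothetical, chosen so the result can be checked analytically.

```python
import torch

def inner_loss(w, phi):
    # Inner objective g(w, phi); analytically, w*(phi) = phi.
    return (w - phi) ** 2

def outer_loss(w):
    # Outer objective f evaluated at the inner solution w*.
    return (w - 3.0) ** 2

phi = torch.tensor(1.0, requires_grad=True)

# Step 1: solve the inner problem (a few gradient steps suffice here);
# the outer variable is detached, so no optimization path is stored.
w = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.4)
for _ in range(100):
    opt.zero_grad()
    inner_loss(w, phi.detach()).backward()
    opt.step()

# Step 2: implicit hypergradient via the implicit function theorem:
# df/dphi = -(df/dw) * (d^2 g/dw^2)^{-1} * (d^2 g/dw dphi)
g = inner_loss(w, phi)
gw = torch.autograd.grad(g, w, create_graph=True)[0]
gww = torch.autograd.grad(gw, w, retain_graph=True)[0]  # inner Hessian (scalar case)
gwphi = torch.autograd.grad(gw, phi)[0]                 # mixed second derivative
fw = torch.autograd.grad(outer_loss(w), w)[0]
hypergrad = -fw * gwphi / gww

# Analytic check: with w*(phi) = phi and f = (w - 3)^2,
# df/dphi = 2 * (phi - 3) = -4 at phi = 1.
print(hypergrad.item())
```

Only the inner solution `w` and second-order derivatives at that point enter the hypergradient, which is what lets the approach avoid storing and differentiating the exact inner optimization path.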