Only recently have researchers attempted to provide classification algorithms with provable group fairness guarantees. Most of these algorithms suffer from the restriction that the training and deployment data must follow the same distribution. This paper proposes an input-agnostic certified group fairness algorithm, FairSmooth, for improving the fairness of classification models while maintaining prediction accuracy. A Gaussian parameter smoothing method is developed to transform base classifiers into their smooth versions. An optimal smooth classifier is learnt for each group using only the data of that group, and an overall smooth classifier for all groups is obtained by averaging the parameters of the individual smooth classifiers. By leveraging the theory of nonlinear functional analysis, the smooth classifiers are reformulated as output functions of a Nemytskii operator. Theoretical analysis shows that the Nemytskii operator is smooth and induces a Fréchet differentiable smooth manifold. We further prove that the manifold has a global Lipschitz constant that is independent of the domain of the input data, which yields the input-agnostic certified group fairness guarantee.
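The following is a minimal sketch of the two steps described above: Gaussian parameter smoothing of a base classifier and parameter averaging of the per-group smooth classifiers. It assumes a plain logistic-regression base classifier; the noise scale, sample count, and function names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def smooth_predict(theta, x, sigma=0.5, n_samples=100, rng=None):
    """Gaussian parameter smoothing (sketch): average the base classifier's
    output over Gaussian perturbations of its parameters theta."""
    rng = np.random.default_rng() if rng is None else rng
    outputs = []
    for _ in range(n_samples):
        theta_noisy = theta + sigma * rng.standard_normal(theta.shape)
        # Base classifier assumed here: logistic regression on features x.
        outputs.append(1.0 / (1.0 + np.exp(-x @ theta_noisy)))
    return np.mean(outputs, axis=0)

def fit_group_classifier(X_g, y_g, lr=0.1, epochs=200):
    """Fit a per-group base classifier (plain logistic regression) using
    only the data of a single sensitive group."""
    theta = np.zeros(X_g.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X_g @ theta))
        theta -= lr * X_g.T @ (p - y_g) / len(y_g)
    return theta

def overall_classifier(group_thetas):
    """Combine per-group smooth classifiers into the overall smooth
    classifier by averaging their parameters."""
    return np.mean(np.stack(group_thetas), axis=0)
```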