Existing pruning techniques preserve deep neural networks' overall ability to make correct predictions but may also amplify hidden biases during the compression process. We propose a novel pruning method, Fairness-aware GRAdient Pruning mEthod (FairGRAPE), that minimizes the disproportionate impacts of pruning on different sub-groups. Our method calculates the per-group importance of each model weight and selects a subset of weights that maintains the relative between-group total importance during pruning. The proposed method then prunes network edges with small importance values and repeats the procedure after updating the importance values. We demonstrate the effectiveness of our method on four different datasets, FairFace, UTKFace, CelebA, and ImageNet, for the task of face attribute classification, where our method reduces the disparity in performance degradation by up to 90% compared with state-of-the-art pruning algorithms. Our method is substantially more effective in settings with a high pruning rate (99%). The code and dataset used in the experiments are available at https://github.com/Bernardo1998/FairGRAPE.
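The procedure described above (per-group importance, share-preserving selection, iterative pruning) can be sketched as a short recipe. The PyTorch sketch below is one plausible reading of that description, not the authors' released implementation (see the linked repository for that). The gradient-squared importance score, the greedy share-matching selection, and all names (`per_group_importance`, `fairgrape_select`, `group_loaders`, `keep_frac`) are illustrative assumptions.

```python
# Hypothetical sketch of FairGRAPE-style pruning; names and the exact
# importance score are assumptions, not the authors' released API.
import torch

def per_group_importance(model, loss_fn, group_loaders):
    """Estimate each weight's importance for each sub-group as an
    accumulated (gradient * weight)^2 score over that group's data."""
    scores = {g: [torch.zeros_like(p) for p in model.parameters()]
              for g in group_loaders}
    for g, loader in group_loaders.items():
        for x, y in loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for s, p in zip(scores[g], model.parameters()):
                if p.grad is not None:
                    s += (p.grad * p.detach()) ** 2
    return scores

def fairgrape_select(scores, keep_frac):
    """Greedily keep weights so each group's share of total kept
    importance tracks its share in the unpruned model."""
    groups = list(scores)
    flat = {g: torch.cat([s.flatten() for s in scores[g]]) for g in groups}
    total = {g: flat[g].sum() for g in groups}
    target = {g: total[g] / sum(total.values()) for g in groups}
    n_keep = int(keep_frac * flat[groups[0]].numel())
    kept = torch.zeros(flat[groups[0]].numel(), dtype=torch.bool)
    kept_imp = {g: torch.tensor(0.0) for g in groups}
    # O(n_keep * n) greedy loop: illustrative only, too slow for real nets.
    for _ in range(n_keep):
        kt = sum(kept_imp.values()) + 1e-12
        # pick the group currently furthest below its target share
        g = min(groups, key=lambda h: kept_imp[h] / kt - target[h])
        masked = torch.where(~kept, flat[g], torch.full_like(flat[g], -1.0))
        idx = masked.argmax()  # most important still-unpruned weight for g
        kept[idx] = True
        for h in groups:
            kept_imp[h] += flat[h][idx]
    return kept  # boolean keep-mask over the flattened parameters
```

In practice the returned mask would be reshaped per layer and applied with something like `torch.nn.utils.prune.custom_from_mask`, with `keep_frac` lowered gradually and the importance scores recomputed between rounds, matching the iterative pruning procedure the abstract describes.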