Fairness is crucial for neural networks that are used in applications with important societal implications. Recently, there have been multiple attempts to improve the fairness of neural networks, with a focus on fairness testing (e.g., generating individual discriminatory instances) and fairness training (e.g., enhancing fairness through augmented training). In this work, we propose an approach to formally verify neural networks against fairness properties, with a focus on independence-based fairness such as group fairness. Our method is built upon an approach for learning Markov chains from a user-provided neural network (i.e., a feed-forward neural network or a recurrent neural network) in a way that guarantees sound analysis. The learned Markov chain not only allows us to verify (with a Probably Approximately Correct guarantee) whether the neural network is fair, but also facilitates sensitivity analysis, which helps explain why fairness is violated. We demonstrate that, using our analysis results, the neural weights can be optimized to improve fairness. Our approach has been evaluated on multiple models trained on benchmark datasets, and the experimental results show that it is both effective and efficient.
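As a minimal illustration of the independence-based fairness notion mentioned above, group fairness (demographic parity) requires that the probability of a positive outcome be independent of a sensitive attribute: P(Y=1 | A=0) ≈ P(Y=1 | A=1). The sketch below estimates this gap from samples; it is a toy illustration only, not the paper's Markov-chain-based verification method, and the function name and data are hypothetical.

```python
# Toy estimate of the demographic-parity gap |P(Y=1|A=0) - P(Y=1|A=1)|
# from sampled predictions. A group fairness property would bound this
# gap by a small tolerance; the paper's method instead verifies such
# probabilities on a Markov chain learned from the network.
def demographic_parity_gap(predictions, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    group0 = [p for p, a in zip(predictions, sensitive) if a == 0]
    group1 = [p for p, a in zip(predictions, sensitive) if a == 1]
    rate0 = sum(group0) / len(group0)
    rate1 = sum(group1) / len(group1)
    return abs(rate0 - rate1)

# Hypothetical data: group 0 gets positives 3/4 of the time, group 1 only 1/4.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
attrs = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, attrs))  # 0.5
```

Such sample-based estimates carry no guarantee by themselves; the PAC guarantee in the approach above comes from the learned Markov chain, which bounds how far the estimated probabilities can be from the true ones.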