Decisions made by various Artificial Intelligence (AI) systems greatly influence our day-to-day lives. With the increasing use of AI systems, it becomes crucial to ensure that they are fair, to identify the underlying biases in their decision-making, and to create a standardized framework for ascertaining their fairness. In this paper, we propose a novel Fairness Score to measure the fairness of a data-driven AI system, along with a Standard Operating Procedure (SOP) for issuing Fairness Certification for such systems. Standardizing the Fairness Score and the audit process will ensure quality, reduce ambiguity, enable comparison, and improve the trustworthiness of AI systems. It will also provide a framework to operationalize the concept of fairness and facilitate the commercial deployment of such systems. Furthermore, a Fairness Certificate issued by a designated third-party auditing agency following the standardized process would strengthen organizations' confidence in the AI systems they intend to deploy. The Bias Index proposed in this paper also reveals the comparative bias among the protected attributes within the dataset. To substantiate the proposed framework, we iteratively train models on biased and unbiased data across multiple datasets and verify that the Fairness Score and the proposed process correctly identify the biases and assess fairness.
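To make the validation idea concrete, the following is a minimal sketch of a generic group-fairness check, not the paper's actual Fairness Score or Bias Index (which are defined later in the paper). It computes a demographic parity difference between a privileged and an unprivileged group and compares a model trained on biased data against one trained on unbiased data; the function name, toy data, and flagging interpretation are illustrative assumptions only.

```python
# Illustrative sketch only: a generic group-fairness check, NOT the paper's
# Fairness Score or Bias Index. All names and toy values here are assumptions.
import numpy as np

def demographic_parity_difference(y_pred, protected):
    """Absolute difference in positive-prediction rates between the two groups
    of a binary protected attribute (1 = privileged, 0 = unprivileged)."""
    rate_priv = y_pred[protected == 1].mean()
    rate_unpriv = y_pred[protected == 0].mean()
    return abs(rate_priv - rate_unpriv)

# Toy predictions from two hypothetical models on the same 10 individuals.
protected    = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
biased_preds = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])  # favors group 1
fair_preds   = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])  # similar rates

print(demographic_parity_difference(biased_preds, protected))  # 0.6 -> disparity flagged
print(demographic_parity_difference(fair_preds, protected))    # 0.0 -> no disparity
```

A certification workflow of the kind proposed here would evaluate metrics like this per protected attribute and aggregate them into an overall score, but the specific aggregation is defined by the paper's framework rather than by this sketch.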