Problem statement: Standardising AI fairness rules and benchmarks is challenging because fairness and other ethical requirements depend on multiple factors, such as the context, the use case, and the type of AI system. In this paper, we show that an AI system is prone to bias at every stage of its lifecycle, from inception to usage, and that every stage requires due attention if AI bias is to be mitigated. A standardised approach to handling AI fairness at each stage is therefore needed.

Gap analysis: Although AI fairness is an active research topic, a holistic strategy for achieving it is generally missing. Most researchers focus on only a few facets of AI model building. A review of the literature shows an excessive focus on dataset bias, fairness metrics, and algorithmic bias, while other aspects affecting AI fairness are ignored.

The solution proposed: We propose a comprehensive approach in the form of a novel seven-layer model, inspired by the Open Systems Interconnection (OSI) model, to standardise the handling of AI fairness. Despite their differences, most AI systems share similar model-building stages. The proposed model splits the AI system lifecycle into seven abstraction layers, each corresponding to a well-defined model-building or usage stage. We also provide a checklist for each layer and discuss potential sources of bias in each layer, along with methodologies for mitigating them. This work will facilitate layer-wise standardisation of AI fairness rules and benchmarking parameters.
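The layered structure described above can be sketched as a simple data model: each lifecycle layer carries its own checklist against which bias sources are reviewed. Note that this abstract does not name the seven layers, so the layer names and checklist items below are hypothetical placeholders, not the paper's actual definitions.

```python
# A minimal sketch of a layer-wise fairness checklist structure.
# Layer names and checklist items are HYPOTHETICAL illustrations;
# the paper defines its own seven layers, not named in this abstract.
from dataclasses import dataclass, field


@dataclass
class FairnessLayer:
    number: int
    name: str
    checklist: list[str] = field(default_factory=list)


# Seven abstraction layers spanning the AI lifecycle, inception to usage.
layers = [
    FairnessLayer(1, "Inception", ["Define fairness goals", "Identify stakeholders"]),
    FairnessLayer(2, "Data collection", ["Audit sampling and representation bias"]),
    FairnessLayer(3, "Data preparation", ["Check labelling and preprocessing bias"]),
    FairnessLayer(4, "Model design", ["Review algorithmic-bias risks"]),
    FairnessLayer(5, "Training", ["Select and track fairness metrics"]),
    FairnessLayer(6, "Evaluation", ["Benchmark fairness across subgroups"]),
    FairnessLayer(7, "Deployment and usage", ["Monitor drift and feedback bias"]),
]


def unreviewed_layers(reviewed: set[int]) -> list[str]:
    """Return names of layers whose checklist has not yet been reviewed."""
    return [layer.name for layer in layers if layer.number not in reviewed]
```

A structure like this makes layer-wise auditing explicit: a review is complete only when every layer's checklist has been worked through, mirroring the paper's point that no single lifecycle stage can be skipped.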