Avoiding bias and understanding the real-world consequences of AI-supported decision-making are critical for addressing fairness and assigning accountability. Existing approaches often focus either on technical aspects, such as datasets and models, or on high-level socio-ethical considerations, rarely capturing how these elements interact in practice. In this paper, we apply an information flow-based modeling framework to a real-world recruitment process that integrates automated candidate matching with human decision-making. Through semi-structured stakeholder interviews and iterative modeling, we construct a multi-level representation of the recruitment pipeline, capturing how information is transformed, filtered, and interpreted across both algorithmic and human components. We identify where biases may emerge, how they can propagate through the system, and what downstream impacts they may have on candidates. This case study illustrates how information flow modeling can support structured analysis of fairness risks, providing transparency across complex socio-technical systems.