Digital Twins (DTs) are increasingly used as autonomous decision-makers in complex socio-technical systems. However, their mathematically optimal decisions often diverge from human expectations, revealing a persistent mismatch between algorithmic and bounded human rationality. This work addresses the challenge by proposing a framework that introduces fairness as a learnable objective within optimization-based Digital Twins. To this end, the framework employs a preference-driven learning workflow that infers latent fairness objectives directly from human pairwise preferences over feasible decisions. A dedicated Siamese neural network is developed to generate convex quadratic cost functions conditioned on contextual information. The resulting surrogate objectives steer the optimization procedure toward solutions that better reflect human-perceived fairness while maintaining computational efficiency. The effectiveness of the approach is demonstrated on a COVID-19 hospital resource allocation scenario. Overall, this work offers a practical solution for integrating human-centered fairness into the design of autonomous decision-making systems.
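The core idea — a shared-weight (Siamese) network that maps context to a convex quadratic cost, trained so that the cost gap explains human pairwise preferences — can be illustrated with a minimal sketch. Everything below (network shapes, the Bradley–Terry-style preference model, all variable names) is an illustrative assumption for exposition, not the paper's actual implementation; convexity is guaranteed by parameterizing the quadratic as Q = L·Lᵀ.

```python
import numpy as np

# Hypothetical sketch: a context-conditioned convex quadratic cost
# f(x) = x^T Q x + b^T x, with Q = L L^T positive semidefinite by
# construction, so the surrogate objective stays convex. The same
# weights score both candidates of a pair (the "Siamese" idea).

rng = np.random.default_rng(0)
d, ctx = 3, 4                          # decision dim, context dim (assumed)
n_tril = d * (d + 1) // 2              # entries of the lower-triangular factor

# A tiny one-hidden-layer MLP mapping context -> (L entries, b)
W1 = rng.normal(scale=0.3, size=(8, ctx)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.3, size=(n_tril + d, 8)); b2 = np.zeros(n_tril + d)

def quadratic_params(c):
    h = np.tanh(W1 @ c + b1)
    out = W2 @ h + b2
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = out[:n_tril]   # lower-triangular factor
    Q = L @ L.T                            # PSD => convex quadratic cost
    b = out[n_tril:]
    return Q, b

def cost(x, c):
    Q, b = quadratic_params(c)
    return float(x @ Q @ x + b @ x)

def pref_prob(x_a, x_b, c):
    # Bradley-Terry-style model: probability that a human prefers x_a
    # over x_b, driven by the cost gap from the shared-weight scorer.
    return 1.0 / (1.0 + np.exp(cost(x_a, c) - cost(x_b, c)))

c = rng.normal(size=ctx)
x_a, x_b = rng.normal(size=d), rng.normal(size=d)
p = pref_prob(x_a, x_b, c)             # probability in (0, 1)
```

In a full workflow, the logistic preference probability above would serve as the likelihood in a cross-entropy loss over collected pairwise judgments, and the learned Q and b would then be handed to the DT's optimizer as its (convex) objective.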