Digital Twins (DTs) are increasingly used as autonomous decision-makers in complex socio-technical systems. Their mathematically optimal decisions often diverge from human expectations, exposing a persistent gap between algorithmic rationality and bounded human rationality. We address this gap with a framework that operationalizes fairness as a learnable objective within optimization-based Digital Twins. A preference-driven learning pipeline infers latent fairness objectives directly from human pairwise preferences over feasible decisions, and a novel Siamese neural network generates convex quadratic cost functions conditioned on contextual information. The resulting surrogate objectives align optimization outcomes with human-perceived fairness while remaining computationally efficient. We demonstrate the approach on a COVID-19 hospital resource allocation scenario. This study provides an actionable path toward embedding human-centered fairness in the design of autonomous decision-making systems.
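The abstract does not specify the architecture, but the core idea can be sketched minimally: a shared (Siamese) network maps context to the parameters of a convex quadratic cost, and a Bradley-Terry style loss pushes the preferred decision to receive the lower cost. All dimensions, weights, and function names below are illustrative assumptions, not the paper's implementation; convexity is guaranteed by constructing the quadratic term as `Q = L L' + eps*I`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4-dim context vector, 3-dim allocation decision.
CTX_DIM, DEC_DIM = 4, 3
N_TRIL = DEC_DIM * (DEC_DIM + 1) // 2  # lower-triangular entries of L

# A minimal "hypernetwork": a linear map from context to the parameters of
# a convex quadratic cost f_c(x) = x' Q(c) x + q(c)' x, where
# Q(c) = L L' + eps*I is positive semidefinite by construction.
W_L = rng.normal(scale=0.1, size=(N_TRIL, CTX_DIM))
W_q = rng.normal(scale=0.1, size=(DEC_DIM, CTX_DIM))
EPS = 1e-3

def quadratic_cost(ctx, x):
    """Evaluate the context-conditioned convex quadratic cost at decision x."""
    L = np.zeros((DEC_DIM, DEC_DIM))
    L[np.tril_indices(DEC_DIM)] = W_L @ ctx
    Q = L @ L.T + EPS * np.eye(DEC_DIM)  # PSD, so f is convex in x
    q = W_q @ ctx
    return x @ Q @ x + q @ x, Q

def siamese_preference_loss(ctx, x_pref, x_other):
    """Bradley-Terry negative log-likelihood: the preferred decision
    should cost less. Both branches share the same weights (the
    'Siamese' part), so gradients accumulate in one parameter set."""
    f_pref, _ = quadratic_cost(ctx, x_pref)
    f_other, _ = quadratic_cost(ctx, x_other)
    # P(pref beats other) = sigmoid(f_other - f_pref)
    return np.log1p(np.exp(-(f_other - f_pref)))

ctx = rng.normal(size=CTX_DIM)
x_a, x_b = rng.normal(size=DEC_DIM), rng.normal(size=DEC_DIM)
loss = siamese_preference_loss(ctx, x_a, x_b)
_, Q = quadratic_cost(ctx, x_a)
```

Once trained on preference pairs, the learned `Q(c)` and `q(c)` define a surrogate objective that a standard quadratic-programming solver can minimize over the feasible allocation set, which is what keeps the pipeline computationally efficient.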