Artificial intelligence systems are increasingly deployed in domains that shape human behaviour, institutional decision-making, and societal outcomes. Existing responsible AI and governance efforts provide important normative principles but often lack enforceable engineering mechanisms that operate throughout the system lifecycle. This paper introduces the Social Responsibility Stack (SRS), a six-layer architectural framework that embeds societal values into AI systems as explicit constraints, safeguards, behavioural interfaces, auditing mechanisms, and governance processes. SRS models responsibility as a closed-loop supervisory control problem over socio-technical systems, integrating design-time safeguards with runtime monitoring and institutional oversight. We develop a unified constraint-based formulation, introduce safety-envelope and feedback interpretations, and show how fairness, autonomy, cognitive burden, and explanation quality can be continuously monitored and enforced. Case studies in clinical decision support, cooperative autonomous vehicles, and public-sector systems illustrate how SRS translates normative objectives into actionable engineering and operational controls. The framework bridges ethics, control theory, and AI governance, providing a practical foundation for accountable, adaptive, and auditable socio-technical AI systems.
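To make the constraint-based formulation concrete, a minimal sketch of the supervisory control view follows. The notation here (policy \(\pi\), utility \(U\), constraint functions \(g_i\), thresholds \(\epsilon_i\)) is illustrative shorthand, not the paper's own symbols:

% Illustrative sketch only: pi, U, g_i, and eps_i are placeholder symbols,
% not the SRS paper's notation.
\[
  \pi^{\ast} \;=\; \arg\max_{\pi}\; \mathbb{E}\!\left[\, U\big(s_t, \pi(s_t)\big) \right]
  \quad \text{subject to} \quad
  g_i\big(s_t, \pi(s_t)\big) \;\le\; \epsilon_i,
  \qquad i \in \{\text{fairness},\, \text{autonomy},\, \text{burden},\, \text{explanation}\},
\]
\[
  \text{with runtime feedback:} \qquad
  \hat{g}_i(t) > \epsilon_i \;\Longrightarrow\; \text{supervisory intervention.}
\]

In this reading, the safety envelope is the feasible set defined by the constraints, and monitoring closes the loop by estimating \(\hat{g}_i(t)\) online and triggering the safeguard layer when an estimate leaves its envelope.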