Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems, with a particular focus on the social and ethical risks of ML components in complex sociotechnical systems. However, existing approaches are largely disjointed, ad hoc, and of unknown effectiveness. Systems safety engineering is a well-established discipline with a track record of identifying and managing risks in many complex sociotechnical domains. We adopt the natural hypothesis that tools from this discipline could enhance risk analyses of ML in its context of use. To test this hypothesis, we apply a "best of breed" systems safety analysis, Systems Theoretic Process Analysis (STPA), to a specific high-consequence system with an important ML-driven component, namely the Prescription Drug Monitoring Programs (PDMPs) operated by many US states, several of which rely on an ML-derived risk score. We focus in particular on how this analysis can extend to identifying social and ethical risks and to developing concrete design-level controls that mitigate them.