Many stakeholders struggle to rely on ML-driven systems due to the risk of harm these systems may cause. Concerns about trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements. Moreover, such risks in complex ML-driven systems present a special challenge, as they are often difficult to foresee, arising over time, across populations, and at scale. These risks often arise not directly from poor ML development decisions or low performance, but instead emerge through the interactions among ML development choices, the context of model use, environmental factors, and the effects of a model on its target. Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems. In this work, we apply a state-of-the-art systems safety approach to concrete applications of ML with notable social and ethical risks to demonstrate a systematic means of meeting the assurance requirements needed to argue for safe and trustworthy ML in sociotechnical systems.