AI is increasingly being used to aid response efforts in humanitarian emergencies at multiple levels of decision-making. Such AI systems are generally treated as stand-alone decision-support tools, with ethical assessments, guidelines, and frameworks applied to them through this lens. However, as the prevalence of AI in this domain grows, these systems will interact through information-flow networks created by interacting decision-making entities, giving rise to often ill-understood multi-AI complex systems. In this paper we describe how such multi-AI systems can arise, even in relatively simple real-world humanitarian response scenarios, and how they can lead to potentially emergent and erratic erroneous behavior. We discuss how to work towards more trustworthy multi-AI systems by exploring some of their associated challenges and opportunities, and how to design better mechanisms to understand and assess such systems. This paper is intended as a first exposition of this topic in the field of humanitarian response, raising awareness, exploring the possible landscape of this domain, and providing a starting point for future work within the wider community.