AI-based systems are widely used across industries for decisions ranging from operational to tactical and strategic, in both low- and high-stakes contexts. Gradually, the weaknesses and issues of these systems have been publicly reported, including ethical issues, biased decisions, unsafe outcomes, and unfair decisions, to name a few. Research has tended to optimize AI; less has focused on its risks and unexpected negative consequences. Acknowledging these serious potential risks and the scarcity of research, I focus on the unsafe outcomes of AI. Specifically, I explore this issue through a human-AI interaction lens during AI deployment. I discuss how the interaction of individuals with AI during its deployment raises new concerns, which require a solid and holistic mitigation plan. I argue that the safety of AI algorithms alone is not enough to make their operation safe. The end-users of AI-based systems, and their decision-making archetypes during collaboration with these systems, should be considered in AI risk management. Using real-world scenarios, I highlight that users' decision-making archetypes should be treated as a design principle in AI-based systems.