Interactive machine learning (IML) is a field of research that explores how to leverage both human and computational abilities in decision-making systems. IML represents a collaboration between multiple complementary human and machine intelligent systems working as a team, each with their own unique abilities and limitations. This teamwork might mean that the human and machine systems take actions at the same time, or in sequence. Two major open research questions in the field of IML are: "How should we design systems that can learn to make better decisions over time with human interaction?" and "How should we evaluate the design and deployment of such systems?" A lack of appropriate consideration for the humans involved can lead to problematic system behaviour and to issues of fairness, accountability, and transparency. Thus, our goal with this work is to present a human-centred guide to designing and evaluating IML systems while mitigating risks. This guide is intended to be used by machine learning practitioners who are responsible for the health, safety, and well-being of interacting humans. An obligation of responsibility for public interaction means acting with integrity, honesty, and fairness, and abiding by applicable legal statutes. With these values and principles in mind, we as a machine learning research community can better achieve goals of augmenting human skills and abilities. This practical guide therefore aims to support many of the responsible decisions necessary throughout the iterative design, development, and dissemination of IML systems.