Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions--potentially causing harms once deployed. However, how to take action to address these patterns is not always clear. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we develop GAM Changer, the first interactive system to help domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action--empowering users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with 7 data scientists working in diverse domains highlights that our tool is easy to use, meets their model editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users' web browsers or computational notebooks, lowering the barrier to use. GAM Changer is available at the following public demo link: https://interpret.ml/gam-changer.
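To make the editing idea concrete, here is a minimal sketch of what "editing a GAM" means mechanically. This is hypothetical illustration code, not the GAM Changer or InterpretML API: a GAM scores an input by summing per-feature shape functions, so an edit is a direct change to one shape function's bin scores (here, enforcing a non-decreasing risk curve, the kind of fix a clinician might apply when risk implausibly dips at high age).

```python
# Hypothetical sketch of GAM editing -- not the actual GAM Changer API.
from bisect import bisect_right

def shape_lookup(edges, scores, x):
    """Bin score for value x (piecewise-constant shape function)."""
    return scores[bisect_right(edges, x)]

def gam_predict(intercept, features, sample):
    """GAM prediction: intercept plus each feature's shape-function score."""
    return intercept + sum(
        shape_lookup(edges, scores, sample[name])
        for name, (edges, scores) in features.items()
    )

def edit_monotone_increasing(scores, start, end):
    """Edit: force scores[start:end] to be non-decreasing."""
    out = list(scores)
    for i in range(start + 1, end):
        out[i] = max(out[i], out[i - 1])
    return out

# Toy risk model with one problematic dip in the oldest age bin.
age_edges = [30, 50, 70, 85]             # bin boundaries
age_scores = [-0.5, 0.0, 0.4, 0.9, 0.2]  # per-bin scores; last bin dips
features = {"age": (age_edges, age_scores)}

before = gam_predict(-1.0, features, {"age": 90})  # uses the dipping bin
features["age"] = (age_edges, edit_monotone_increasing(age_scores, 0, 5))
after = gam_predict(-1.0, features, {"age": 90})   # uses the repaired bin
```

Because each feature's contribution is a one-dimensional curve, such edits are local and auditable: changing the age shape function cannot silently alter how any other feature contributes to a prediction, which is what makes GAMs a natural target for interactive model editing.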