As artificial intelligence (AI) that aids or automates decision-making advances rapidly, a particular concern is its fairness. In order to create reliable, safe and trustworthy systems through human-centred artificial intelligence (HCAI) design, recent efforts have produced user interfaces (UIs) for AI experts to investigate the fairness of AI models. In this work, we provide a design space exploration that supports not only data scientists but also domain experts in investigating AI fairness. Using loan applications as an example, we held a series of workshops with loan officers and data scientists to elicit their requirements. We instantiated these requirements into FairHIL, a UI to support human-in-the-loop fairness investigations, and describe how this UI could be generalized to other use cases. We evaluated FairHIL through a think-aloud user study. Our work contributes better designs for investigating an AI model's fairness, and moves closer towards responsible AI.