AI is powerful, but it can make choices that result in objective errors, contextually inappropriate outputs, and options a user simply dislikes. We need AI-resilient interfaces that help people be resilient to the AI choices that are not right, or not right for them. To support this goal, interfaces need to help users notice those AI choices and have the context to judge them appropriately. Existing human-AI interaction guidelines recommend that users be able to efficiently dismiss, modify, or otherwise recover from AI choices they do not like. However, in order to recover from AI choices, the user must notice them first. This can be difficult. For example, when generating summaries of long documents, a system's exclusion of a detail that is critically important to the user is hard for the user to notice. That detail can be hiding in a wall of text in the original document, and the existence of a summary may tempt the user not to read the original document as carefully. Once noticed, judging AI choices well can also be challenging. The interface may provide very little information that contextualizes the choices, and the user may fall back on assumptions when deciding whether to dismiss, modify, or otherwise recover from an AI choice. Building on prior work, this paper defines key aspects of AI-resilient interfaces, illustrated with examples. Designing interfaces that increase users' AI-resilience will improve AI safety, usability, and utility. This is especially critical where AI-powered systems are used for context- and preference-dominated open-ended AI-assisted tasks, like ideating, summarizing, searching, sensemaking, and reading and writing text or code.