Designing and implementing explainable systems is seen as the next step towards increasing user trust in, acceptance of, and reliance on Artificial Intelligence (AI) systems. While explaining the choices made by black-box algorithms such as machine learning and deep learning models has occupied most of the limelight, systems that attempt to explain decisions (even simple ones) in the context of social choice are steadily catching up. In this paper, we provide a comprehensive survey of explainability in mechanism design, a domain characterized by economically motivated agents in which there is often no single outcome that maximizes every individual utility function. We discuss the main properties and goals of explainability in mechanism design, distinguishing them from those of Explainable AI in general. We then provide a thorough review of the challenges one may face when working on Explainable Mechanism Design and propose a few solution concepts for addressing them.