Many recent AI algorithms are black-box models whose decisions are difficult to interpret. eXplainable AI (XAI) seeks to address this lack of interpretability and trust by explaining AI decisions to customers, e.g., the decision to reject a loan application. The conventional wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. This paper challenges this notion through a game-theoretic model comprising a policy-maker who maximizes social welfare, firms in duopoly competition that maximize profits, and heterogeneous consumers. The results show that XAI regulation may be redundant. In fact, mandating fully transparent XAI may make both firms and customers worse off. This reveals a trade-off between maximizing welfare and receiving explainable AI outputs. We also discuss managerial implications for policy-makers and firms.
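As a minimal sketch of how such a setting could be formalized (the abstract does not specify the paper's actual functional forms; the demand function $D_i$, explanation cost $c(x_i)$, base valuation $v$, transparency taste $\theta$, and mandated floor $\bar{x}$ below are all hypothetical), consider firms $i \in \{1,2\}$ choosing a price $p_i$ and an explanation level $x_i \in [0,1]$:
\[
\pi_i(p_i, x_i) \;=\; p_i\, D_i(p_i, p_j, x_i, x_j) \;-\; c(x_i),
\]
a consumer of type $\theta \ge 0$ (taste for transparency) who buys from firm $i$ when
\[
u_i(\theta) \;=\; v + \theta x_i - p_i \;\ge\; \max\{0,\, u_j(\theta)\},
\]
and a policy-maker who sets a mandated transparency floor $\bar{x}$ to maximize welfare,
\[
W(\bar{x}) \;=\; \pi_1 + \pi_2 + \mathrm{CS}(\bar{x}), \qquad \text{s.t. } x_i \ge \bar{x}.
\]
In this notation, full transparency corresponds to $\bar{x} = 1$, and the claim that such a mandate can reduce welfare amounts to $W(1) < \max_{\bar{x} \in [0,1]} W(\bar{x})$.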