Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the "learn to optimize" (L2O) methodology, wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g., satisfaction of constraints). We also propose the use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided for applications in dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets.
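To make the L2O idea concrete, below is a minimal sketch (not the paper's implementation) of dictionary-based sparse recovery, where each inference solves a data-driven LASSO problem via ISTA. The dictionary `A`, regularization weight `lam`, and iteration count are illustrative placeholders for quantities an L2O model would learn from data.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l2o_inference(A, b, lam=0.1, n_iters=200):
    """Recover a sparse code x by solving
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    with ISTA. In an L2O model, the dictionary A and weight lam
    would be learned from data; since the iteration itself is the
    inference, prior knowledge (here, sparsity) is encoded directly.
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: a random dictionary stands in for a learned one.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = l2o_inference(A, b)
print(f"residual ||A x - b|| = {np.linalg.norm(A @ x_hat - b):.3f}")
```

The final residual check hints at the certificate idea: a simple, interpretable quantity computed at inference time that can flag an untrustworthy output when it exceeds a tolerance (the paper's certificates are more general than this sketch).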