The growing number of AI applications, including for high-stakes decisions, increases the interest in Explainable and Interpretable Machine Learning (XI-ML). This trend can be seen both in the increasing number of regulations and strategies for developing trustworthy AI and in the growing number of scientific papers dedicated to this topic. To ensure the sustainable development of AI, it is essential to understand the dynamics of the impact of regulation on research papers as well as the impact of scientific discourse on AI-related policies. This paper introduces a novel framework for the joint analysis of AI-related policy documents and eXplainable Artificial Intelligence (XAI) research papers. The collected documents are enriched with metadata and interconnections, using various NLP methods combined with a methodology inspired by Institutional Grammar. Based on the information extracted from the collected documents, we showcase a series of analyses that help to understand interactions, similarities, and differences between documents at different stages of institutionalization. To the best of our knowledge, this is the first work to use automatic language analysis tools to understand the dynamics between XI-ML methods and regulations. We believe that such a system contributes to better cooperation between XAI researchers and AI policymakers.