The increasing opaqueness of AI and its growing influence on our digital society highlight the necessity for AI-based systems that are trustworthy, accountable, and fair. Previous research emphasizes explainability as a means to achieve these properties. In this paper, we argue that system explainability cannot be achieved without accounting for the underlying hardware on which all digital systems, including AI applications, are realized. As a remedy, we propose the concept of explainable hardware and focus on chips, which are particularly relevant to current geopolitical discussions on (trustworthy) semiconductors. Inspired by previous work on Explainable Artificial Intelligence (XAI), we develop a hardware explainability framework by identifying relevant stakeholders, unifying existing approaches from hardware manufacturing under the notion of explainability, and discussing their usefulness in satisfying different stakeholders' needs. Our work lays the foundation for future research and structured debates on explainable hardware.