This work addresses the problems of (a) designing utilization measurements of trained artificial intelligence (AI) models and (b) explaining how training data are encoded in AI models based on those measurements. The problems are motivated by the lack of explainability of AI models in security- and safety-critical applications, such as the use of AI models for classification of traffic signs in self-driving cars. We approach the problems by introducing theoretical underpinnings of AI model utilization measurement and by analyzing patterns in utilization-based class encodings of traffic signs at the level of computation graphs (AI models), subgraphs, and graph nodes. Conceptually, utilization is defined at each graph node (computation unit) of an AI model based on the number and distribution of unique outputs in the space of all possible outputs (tensor-states). In this work, utilization measurements are extracted from both clean and poisoned AI models. In contrast to the clean AI models, the poisoned AI models were trained on traffic sign images containing systematic, physically realizable modifications (i.e., triggers) that change a correct class label to another label in the presence of such a trigger. We analyze the class encodings of the clean and poisoned AI models, and conclude with implications for trojan injection and detection.
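To make the notion of node-level utilization concrete, the following is a minimal sketch of one plausible way to estimate it: the node's outputs over a set of inputs are binarized into tensor-states, and utilization is reported as the fraction of the possible state space that was observed, together with the entropy of the observed state distribution. The function name `node_utilization`, the binarization threshold, and the use of binarized activation patterns as tensor-states are illustrative assumptions, not the paper's exact measurement procedure.

```python
import numpy as np
from collections import Counter


def node_utilization(activations, threshold=0.0):
    """Approximate utilization of one computation-graph node.

    activations: (num_samples, num_units) array of the node's outputs
    collected over a set of input images.

    A tensor-state is taken here to be the binarized activation pattern
    of the node for one input; utilization is the fraction of all
    2**num_units possible states that were actually observed, plus the
    Shannon entropy of the observed state distribution.
    """
    # Binarize each output vector into a tensor-state (tuple of 0/1).
    states = [tuple((row > threshold).astype(int)) for row in activations]
    counts = Counter(states)

    num_units = activations.shape[1]

    # Fraction of the possible state space that was exercised.
    utilization = len(counts) / float(2 ** num_units)

    # Shannon entropy of the observed state distribution (bits).
    probs = np.array(list(counts.values()), dtype=float)
    probs /= probs.sum()
    entropy = float(-(probs * np.log2(probs)).sum())

    return utilization, entropy


# Example: 1000 inputs through a hypothetical node with 8 units.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 8))
u, h = node_utilization(acts)
print(f"utilization={u:.3f}, entropy={h:.2f} bits")
```

Under this sketch, comparing such per-node statistics between clean and poisoned models is one way the abstract's class-encoding analysis could be instantiated.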