Not only the automation of manufacturing processes but also the automation of automation procedures themselves is becoming increasingly relevant to automation research. In this context, automated capability assessment systems, mainly based on deep learning models trained on 3D CAD data, have been presented. Current assessment systems are able to evaluate CAD data with regard to abstract features, e.g. the ability to automatically separate components from bulk goods, or the presence of gripping surfaces. Nevertheless, they suffer from the black-box problem: an assessment can be learned and generated easily, but without any geometrical indication of the reasons for the system's decision. By utilizing explainable AI (xAI) methods, we attempt to open up this black box. Explainable AI methods have been used to assess whether a neural network has successfully learned a given task, or to analyze which features of an input might lead to an adversarial attack. These methods aim to derive additional insights into a neural network by analyzing patterns in a given input and their impact on the network output. Within the NeuroCAD project, xAI methods are used to identify geometrical features that are associated with a certain abstract feature. In this work, sensitivity analysis (SA), layer-wise relevance propagation (LRP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Local Interpretable Model-Agnostic Explanations (LIME) have been implemented in the NeuroCAD environment, allowing us not only to assess CAD models but also to identify the features that were relevant for the network's decision. In the medium term, this might make it possible to identify regions of interest, supporting product designers in optimizing their models with regard to assembly processes.
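To illustrate the simplest of the listed attribution methods, the sketch below shows a sensitivity analysis (SA) on a toy feed-forward network: the absolute derivative of the network output with respect to each input feature serves as a per-feature relevance score. All names here (the toy weights `W1`/`W2`, the flattened input `x`) are illustrative stand-ins, not the NeuroCAD implementation, which operates on 3D CAD representations.

```python
import numpy as np

# Hypothetical toy network: one ReLU hidden layer, linear scalar output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # illustrative hidden-layer weights
W2 = rng.normal(size=(4, 1))   # illustrative output-layer weights

def forward(x):
    """Tiny feed-forward net standing in for a capability-assessment model."""
    h = np.maximum(x @ W1, 0.0)
    return (h @ W2).item()

def sensitivity(x, eps=1e-4):
    """Sensitivity analysis via central finite differences:
    relevance_i = |d output / d input_i|."""
    grads = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grads[i] = (forward(xp) - forward(xm)) / (2 * eps)
    return np.abs(grads)

x = rng.normal(size=8)          # stand-in for a flattened geometric input
relevance = sensitivity(x)
print(relevance.argmax())       # index of the most influential input feature
```

In a real setting the gradient would be obtained by backpropagation rather than finite differences; the point is that high-relevance inputs can be mapped back to the geometry they encode, which is what turns a bare assessment into a geometrical indicator.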