(Artificial) neural networks have become increasingly popular in mechanics as a means to accelerate computations with model order reduction techniques and as universal models for a wide variety of materials. However, the major disadvantage of neural networks remains: their numerous parameters are challenging to interpret and explain. Thus, neural networks are often labeled as black boxes, and their results often elude human interpretation. In mechanics, the new and active field of physics-informed neural networks attempts to mitigate this disadvantage by designing deep neural networks on the basis of mechanical knowledge. By using this a priori knowledge, deeper and more complex neural networks become feasible, since the underlying mechanical assumptions can be explained. However, the internal reasoning of the networks and the meaning of their parameters remain opaque. Complementary to the physics-informed approach, we propose a first step towards a physics-informing approach, which explains neural networks trained on mechanical data a posteriori. This novel explainable artificial intelligence approach aims to elucidate the black box of neural networks and their high-dimensional representations. Therein, principal component analysis decorrelates the distributed representations in the cell states of recurrent neural networks (RNNs) and allows comparison to known fundamental functions. The approach is supported by a systematic hyperparameter search strategy that identifies the best-performing neural network architectures and training parameters. The findings of three case studies on fundamental constitutive models (hyperelasticity, elastoplasticity, and viscoelasticity) imply that the proposed strategy can help identify numerical and analytical closed-form solutions to characterize new materials.
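The core mechanism named in the abstract, decorrelating the distributed representations in RNN cell states via principal component analysis, can be sketched as follows. This is not the authors' code: the cell-state matrix is a synthetic stand-in (time steps by hidden units), built as a correlated mixture of two hypothetical underlying signals, and the PCA is computed directly via SVD.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# PCA-decorrelation of RNN-like cell states.
import numpy as np

rng = np.random.default_rng(0)
T, H = 200, 8                       # time steps, hidden units (assumed sizes)
t = np.linspace(0.0, 1.0, T)

# Hypothetical cell-state trajectories: correlated mixtures of two
# underlying signals, mimicking a distributed representation.
sources = np.stack([np.sin(2 * np.pi * t), t ** 2])             # (2, T)
mixing = rng.normal(size=(H, 2))
states = (mixing @ sources).T + 0.01 * rng.normal(size=(T, H))  # (T, H)

# PCA via SVD of the centered state matrix.
centered = states - states.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt.T            # projections onto principal components

# The covariance of the scores is diagonal: the distributed
# representation is decorrelated, and each leading component can be
# compared against known fundamental functions (here, sin and t^2).
cov = scores.T @ scores / (T - 1)
off_diag = cov - np.diag(np.diag(cov))
print(np.allclose(off_diag, 0.0, atol=1e-8))
```

In this toy setting, the two leading principal components recover (up to sign and scale) the two generating signals, which is the kind of comparison to closed-form functions the abstract describes.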