Data-driven modeling in mechanics is evolving rapidly, driven by recent advances in machine learning, especially artificial neural networks. As the field matures, new data and models created by different groups become available, opening possibilities for cooperative modeling. However, artificial neural networks suffer from catastrophic forgetting, i.e., they forget how to perform an old task when trained on a new one. This hinders cooperation because adapting an existing model to a new task degrades its performance on a previous task trained by someone else. The authors developed a continual learning method that addresses this issue, applying it here for the first time to solid mechanics. In particular, the method is applied to recurrent neural networks to predict history-dependent plasticity, although it can be used with any other architecture (feedforward, convolutional, etc.) and to predict other phenomena. This work intends to spawn future developments in continual learning that will foster cooperative strategies among the mechanics community to solve increasingly challenging problems. We show that the chosen continual learning strategy can sequentially learn several constitutive laws without forgetting them, using less data to achieve the same error as standard training of one law per model.
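The abstract does not specify which continual learning strategy the authors chose, so the sketch below illustrates a common alternative, Elastic Weight Consolidation (EWC)-style regularization, as one way catastrophic forgetting is typically mitigated: after training on task A, each parameter is anchored at its task-A value, and the task-B objective adds a quadratic penalty weighted by an importance estimate (the Fisher information), so parameters important for task A drift little. All names and values here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def ewc_objective(w, task_b_loss, anchor, fisher, lam=100.0):
    """Task-B loss plus an EWC-style penalty keeping w near the task-A optimum."""
    penalty = 0.5 * lam * np.sum(fisher * (w - anchor) ** 2)
    return task_b_loss(w) + penalty

# Toy 1-D example (hypothetical): task A's optimum is w = 1, task B's is w = -1.
anchor = np.array([1.0])                    # parameters learned on task A
fisher = np.array([4.0])                    # assumed importance of each parameter
lam = 100.0                                 # penalty strength
task_b = lambda w: np.sum((w + 1.0) ** 2)   # task-B objective

# Plain gradient descent on the combined objective.
w = anchor.copy()
for _ in range(500):
    grad = 2.0 * (w + 1.0) + lam * fisher * (w - anchor)
    w -= 1e-3 * grad

# With a strong penalty, w stays near the task-A solution (analytically,
# the combined optimum is 398/402 ≈ 0.99) instead of jumping to -1,
# illustrating how the regularizer resists forgetting.
```

Without the penalty (`lam = 0`), the same descent converges to w = -1 and task A is forgotten; the penalty trades a small task-B error for retained task-A performance, which is the core tension any continual learning strategy must manage.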