Iterative methods are ubiquitous in large-scale scientific computing, and a number of meta-learning-based approaches have recently been proposed to accelerate them. However, a systematic study of these approaches, and of how they differ from classical meta-learning, is lacking. In this paper, we propose a framework for analyzing such learning-based acceleration approaches, within which one can immediately identify a departure from classical meta-learning. We show that this departure may lead to arbitrary deterioration of model performance. Based on our analysis, we introduce a novel training method for learning-based acceleration of iterative methods. Furthermore, we prove theoretically that the proposed method improves upon existing methods, and we demonstrate its significant advantage and versatility through various numerical applications.