In this paper, we propose a sampling algorithm based on statistical machine learning to obtain the conditional nonlinear optimal perturbation (CNOP), which is essentially different from traditional deterministic optimization methods. The new approach not only replaces the extremely expensive gradient (first-order) information with objective-value (zeroth-order) information, but also avoids the adjoint technique, which gives rise to a huge storage problem and to instability from linearization. Meanwhile, we present an intuitive analysis and a rigorous concentration inequality for the approximate gradient obtained by sampling. Numerical experiments for obtaining CNOPs, assessed by standard spatial structures on a theoretical model, the Burgers equation with small viscosity, demonstrate that, at the cost of some accuracy, sampling with fewer samples takes relatively less time than the adjoint-based method and the method computed directly from the definition. Finally, we show that the nonlinear time evolutions of the CNOPs obtained by all the algorithms are almost consistent, as measured by the squared norm of the perturbations, their difference, and their relative difference with respect to the definition-based method.
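To illustrate the core idea of replacing first-order information with zeroth-order information, the following is a minimal sketch of a generic sampling-based gradient estimator (Gaussian smoothing with forward differences). This is an illustrative stand-in, not the authors' algorithm: the objective `f`, the smoothing radius `sigma`, and the sample count `m` are all hypothetical choices for demonstration.

```python
import numpy as np

def sampled_gradient(f, x, m=1000, sigma=1e-2, rng=None):
    """Estimate grad f(x) from objective values only (zeroth-order).

    Draws m Gaussian directions u_i ~ N(0, I) and averages the
    forward-difference estimator ((f(x + sigma*u) - f(x)) / sigma) * u,
    which approximates the gradient of a Gaussian-smoothed version of f.
    No adjoint model or analytic gradient is required.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(m):
        u = rng.standard_normal(x.shape)
        grad += ((f(x + sigma * u) - fx) / sigma) * u
    return grad / m

# Sanity check on a toy quadratic, f(x) = ||x||^2 with exact gradient 2x.
rng = np.random.default_rng(0)
x0 = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
est = sampled_gradient(lambda x: np.dot(x, x), x0, m=20000, rng=rng)
```

With enough samples the estimate concentrates around the true gradient, which is the behavior the concentration inequality in the paper quantifies; the CNOP setting additionally imposes a constraint on the perturbation norm, which this sketch omits.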