Machine learning frameworks such as Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control. This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques such as Bayesian Optimization (BO) and Lipschitz global optimization (LIPO). First, we review the general framework of the model-free control problem, casting all methods as black-box optimization problems. Then, we test the control algorithms on three test cases. These are (1) the stabilization of a nonlinear dynamical system featuring frequency cross-talk, (2) the wave cancellation in a Burgers' flow, and (3) the drag reduction in a cylinder wake flow. We present a comprehensive comparison to illustrate their differences in exploration versus exploitation and their balance between `model capacity' in the control law definition versus `required complexity'. We believe that such a comparison paves the way toward the hybridization of the various methods, and we offer some perspectives on their future development in the flow control literature.