Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer optimization framework, to non-separable functions. First, we empirically reveal why decomposition-based methods are, or are not, preferred in practice on some non-separable large-scale problems, a point that has not been clearly articulated in many previous CC papers. Then, we formalize CC as a continuous game model through a simplification that preserves its essential properties. Unlike previous evolutionary game theory for CC, our new model provides a much simpler yet useful viewpoint for analyzing its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be considered explicitly. Based on these convergence analyses, we propose a hierarchical decomposition strategy for better generalization, since any fixed decomposition risks becoming trapped in a suboptimal Nash equilibrium. Finally, we use distributed computing to accelerate it under a multi-level learning framework, which combines the fine-tuning ability of decomposition with the invariance properties of CMA-ES. Experiments on a set of high-dimensional functions validate both its search performance and its scalability (w.r.t. CPU cores) on a computing cluster with 400 CPU cores.
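To make the game-model viewpoint concrete, the following is a sketch of the pure Nash equilibrium condition that such a formalization rests on. The notation here is illustrative and not necessarily the paper's own: suppose the decision vector is decomposed as $x = (x_1, \dots, x_m)$ into $m$ subcomponents (players) sharing one objective $f$ to be minimized. A point $x^*$ is a pure Nash equilibrium of the induced continuous game iff

\[
f(x_i^*, x_{-i}^*) \le f(x_i, x_{-i}^*) \quad \forall\, x_i \in \mathcal{X}_i,\ \forall\, i \in \{1, \dots, m\},
\]

where $x_{-i}^*$ denotes all subcomponents except the $i$-th. CC's round-robin optimization of one subcomponent at a time, holding the rest fixed, is then a best-response iteration; on non-separable $f$ it can converge to such an $x^*$ that is not a global minimizer, which is precisely the suboptimal-equilibrium risk motivating the hierarchical decomposition strategy.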
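For readers unfamiliar with the CC framework referenced above, a minimal Python sketch of its divide-and-conquer loop follows. This is an illustrative toy, not the paper's algorithm: the test function, the static decomposition, and the simple (1+1)-ES-style mutation are all our own assumptions, and the hierarchical decomposition and distributed CMA-ES components are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotated_ellipsoid(x, R):
    """Illustrative non-separable test function: an ellipsoid under a
    random rotation R, which couples all coordinates."""
    z = R @ x
    n = len(z)
    return float(np.sum(10.0 ** (6.0 * np.arange(n) / (n - 1)) * z ** 2))

def cc_optimize(f, n, groups, iters=200, sigma=0.3):
    """Round-robin cooperative coevolution: each group of variables is
    optimized against the context vector (the best-so-far full solution),
    i.e., a best-response step in the induced game."""
    context = rng.uniform(-5.0, 5.0, n)   # shared context vector
    best = f(context)
    for _ in range(iters):
        for g in groups:                  # optimize one subcomponent at a time
            trial = context.copy()
            trial[g] += sigma * rng.standard_normal(len(g))  # mutate this group only
            val = f(trial)
            if val < best:                # greedy acceptance
                context, best = trial, val
    return context, best

n = 20
R = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random rotation -> non-separability
f = lambda x: rotated_ellipsoid(x, R)
groups = [list(range(i, i + 5)) for i in range(0, n, 5)]  # static decomposition, 4 groups
x_best, f_best = cc_optimize(f, n, groups)
print(f"best f = {f_best:.3e}")
```

Because the decomposition above is fixed, the loop can stall at a point where no single group can improve alone even though a joint move would, i.e., a suboptimal pure Nash equilibrium in the sense formalized earlier.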