Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain "easy" reference problems in a black-box setting, namely the sorting problem and the OneMax problem. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, that in other parameter regimes the algorithm behaves less greedily and explores more, which can be desirable in practice in order to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
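To make the setting concrete, the following is a minimal sketch of a PSO-style optimizer for OneMax over bit strings. It uses a simple probability-vector variant in which each particle is pulled toward its personal best and the global best; this is an illustrative toy, not the specific algorithm analyzed in the paper, and all parameter names (`swarm_size`, `c`, etc.) are assumptions chosen for the sketch.

```python
import random

def onemax(x):
    """OneMax fitness: the number of ones in the bit string."""
    return sum(x)

def discrete_pso_onemax(n=20, swarm_size=10, max_iters=2000, c=0.1, seed=0):
    """Toy PSO-style optimizer for OneMax over {0,1}^n.

    Each particle maintains a probability vector for sampling bit strings;
    the vector is pulled toward the particle's personal best and the global
    best solution found so far. Illustrative only -- not the algorithm whose
    expected optimization time is bounded in the paper.
    """
    rng = random.Random(seed)
    probs = [[0.5] * n for _ in range(swarm_size)]  # sampling distributions
    personal_best = [None] * swarm_size
    global_best = None

    for _ in range(max_iters):
        for i in range(swarm_size):
            # Sample a candidate bit string from the particle's distribution.
            x = [1 if rng.random() < p else 0 for p in probs[i]]
            if personal_best[i] is None or onemax(x) > onemax(personal_best[i]):
                personal_best[i] = x
            if global_best is None or onemax(x) > onemax(global_best):
                global_best = x
            # Pull the probability vector toward the personal and global best.
            for j in range(n):
                target = (personal_best[i][j] + global_best[j]) / 2
                probs[i][j] = (1 - c) * probs[i][j] + c * target
        if onemax(global_best) == n:  # optimum found
            break
    return global_best

best = discrete_pso_onemax()
print(onemax(best))
```

The learning rate `c` plays the role of the greediness/exploration tradeoff discussed in the abstract: larger values pull the sampling distributions toward the incumbent bests more aggressively, while smaller values keep the search more explorative.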