We introduce a new Swarm-Based Gradient Descent (SBGD) method for non-convex optimization. The swarm consists of agents, each identified by a position, $\boldsymbol{x}$, and a mass, $m$. The key to their dynamics is communication: mass is transferred from agents on high ground to those on low(est) ground. At the same time, agents change positions with a step size, $h=h(\boldsymbol{x},m)$, adjusted to their relative mass: heavier agents proceed with small time-steps in the direction of the local gradient, while lighter agents take larger time-steps based on a backtracking protocol. Accordingly, the crowd of agents is dynamically divided between `heavier' leaders, expected to approach local minima, and `lighter' explorers. With their large-step protocol, explorers are expected to encounter improved positions for the swarm; if they do, they assume the role of `heavy' swarm leaders, and so on. Convergence analysis and numerical simulations on one-, two-, and 20-dimensional benchmarks demonstrate the effectiveness of SBGD as a global optimizer.
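The dynamics described above can be illustrated with a minimal sketch. This is not the paper's exact scheme: the mass-transfer rate `q`, the step-size rule `h`, and the backtracking parameters below are illustrative assumptions, chosen only to show the interplay of mass transfer, mass-dependent step sizes, and backtracking.

```python
import numpy as np

def sbgd(F, gradF, X0, steps=200, h_max=1.0, q=0.05):
    """Illustrative sketch of Swarm-Based Gradient Descent.

    X0: (n_agents, dim) initial positions. The parameters h_max and q
    are hypothetical, not the paper's notation.
    """
    X = np.array(X0, dtype=float)
    n = len(X)
    m = np.full(n, 1.0 / n)              # equal initial masses
    for _ in range(steps):
        vals = np.array([F(x) for x in X])
        best = np.argmin(vals)
        # communication: each agent sheds a fraction q of its mass
        # to the current best ("lowest-ground") agent
        for i in range(n):
            if i != best:
                dm = q * m[i]
                m[i] -= dm
                m[best] += dm
        rel = m / m.max()                # relative mass in (0, 1]
        for i in range(n):
            g = gradF(X[i])
            # heavier agents (rel near 1) take small steps; lighter
            # agents start from a larger step and backtrack until F
            # decreases -- the explorer protocol
            h = h_max * (1.0 - rel[i]) + 1e-3
            while F(X[i] - h * g) > vals[i] and h > 1e-8:
                h *= 0.5                 # backtracking
            X[i] = X[i] - h * g
    return X[np.argmin([F(x) for x in X])]
```

For example, on the convex test function $F(\boldsymbol{x})=|\boldsymbol{x}|^2$ the swarm's best agent quickly approaches the global minimum at the origin; on multi-modal landscapes the light explorers are the ones expected to escape poor basins.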