To efficiently solve distributed online learning problems with complicated constraints, previous studies have proposed several distributed projection-free algorithms. The state-of-the-art algorithm achieves an $O(T^{3/4})$ regret bound with $O(\sqrt{T})$ communication complexity. In this paper, we further exploit the strong convexity of loss functions to improve both the regret bound and the communication complexity. Specifically, we first propose a distributed projection-free algorithm for strongly convex loss functions, which enjoys a better regret bound of $O(T^{2/3}\log T)$ with a smaller communication complexity of $O(T^{1/3})$. Furthermore, we demonstrate that the regret of distributed online algorithms with $C$ communication rounds has a lower bound of $\Omega(T/C)$, even when the loss functions are strongly convex. This lower bound implies that the $O(T^{1/3})$ communication complexity of our algorithm is nearly optimal, up to polylogarithmic factors, for obtaining the $O(T^{2/3}\log T)$ regret bound. Finally, we extend our algorithm to the bandit setting and obtain similar theoretical guarantees.
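The near-optimality claim follows from a direct substitution into the lower bound: with a communication budget of $C = \Theta(T^{1/3})$ rounds, the $\Omega(T/C)$ lower bound forces a regret of at least $\Omega(T^{2/3})$, which the upper bound of our algorithm matches up to the $\log T$ factor:
\[
  \Omega\!\left(\frac{T}{C}\right)\bigg|_{C=\Theta(T^{1/3})}
  = \Omega\!\left(\frac{T}{T^{1/3}}\right)
  = \Omega\!\left(T^{2/3}\right),
  \qquad\text{while our regret is}\quad
  O\!\left(T^{2/3}\log T\right).
\]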