To deal with complicated constraints via locally light computations in distributed online learning, a recent study presented a projection-free algorithm called distributed online conditional gradient (D-OCG), which achieves an $O(T^{3/4})$ regret bound for convex losses, where $T$ is the total number of rounds. However, it requires $T$ communication rounds and cannot utilize the strong convexity of losses. In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which attains the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with even fewer, namely $O(T^{1/3}(\log T)^{2/3})$, communication rounds for strongly convex losses. The key idea is to adopt a delayed update mechanism that reduces the communication complexity, and to redefine the surrogate loss function in D-OCG so as to exploit the strong convexity. Furthermore, we provide lower bounds to demonstrate that the $O(\sqrt{T})$ communication rounds required by D-BOCG are optimal (in terms of $T$) for achieving the $O(T^{3/4})$ regret with convex losses, and the $O(T^{1/3}(\log T)^{2/3})$ communication rounds required by D-BOCG are near-optimal (in terms of $T$, up to polylogarithmic factors) for achieving the $O(T^{2/3}(\log T)^{1/3})$ regret with strongly convex losses. Finally, to handle the more challenging bandit setting, in which only loss values are available, we incorporate the classical one-point gradient estimator into D-BOCG and obtain similar theoretical guarantees.
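As a simplified illustration of these two ideas (a sketch under stated assumptions, omitting the details of the decentralized updates): for the delayed update mechanism, the $T$ rounds are divided into blocks of size $K$; within each block the decision is held fixed and gradient information is only accumulated, so the learners communicate at merely $O(T/K)$ block boundaries, and the choice $K = \Theta(\sqrt{T})$ is consistent with the stated $O(\sqrt{T})$ communication rounds while keeping the $O(T^{3/4})$ regret for convex losses. For the bandit extension, the classical one-point gradient estimator takes the standard form
$$\tilde{\nabla} f_t(\mathbf{x}) = \frac{n}{\delta}\, f_t(\mathbf{x} + \delta \mathbf{u}_t)\, \mathbf{u}_t, \qquad \mathbf{u}_t \sim \mathrm{Unif}(\mathbb{S}^{n-1}),$$
where $n$ is the dimension, $\delta > 0$ is a small exploration radius, and $\mathbb{S}^{n-1}$ is the unit sphere. It is an unbiased estimate of the gradient of the smoothed loss $\hat{f}_t(\mathbf{x}) = \mathbb{E}_{\mathbf{v} \sim \mathrm{Unif}(\mathbb{B}^n)}[f_t(\mathbf{x} + \delta \mathbf{v})]$, and it requires only the single loss value observed in round $t$.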