This paper studies the algorithmic stability and generalizability of decentralized stochastic gradient descent (D-SGD). We prove that the consensus model learned by D-SGD is $\mathcal{O}{(m/N+1/m+\lambda^2)}$-stable in expectation in the non-convex non-smooth setting, where $N$ is the total sample size of the whole system, $m$ is the number of workers, and $1-\lambda$ is the spectral gap that measures the connectivity of the communication topology. These results then deliver an $\mathcal{O}{(1/N+{({(m^{-1}\lambda^2)}^{\frac{\alpha}{2}}+ m^{-\alpha})}/{N^{1-\frac{\alpha}{2}}})}$ in-average generalization bound, which is non-vacuous even when $\lambda$ is close to $1$, in contrast to the vacuous bounds suggested by existing literature on the projected version of D-SGD. Our theory indicates that the generalizability of D-SGD is positively correlated with the spectral gap, and explains why consensus control in the initial training phase can ensure better generalization. Experiments with VGG-11 and ResNet-18 on CIFAR-10, CIFAR-100 and Tiny-ImageNet justify our theory. To the best of our knowledge, this is the first work on the topology-aware generalization of vanilla D-SGD. Code is available at https://github.com/Raiden-Zhu/Generalization-of-DSGD.
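For reference, a minimal sketch of one common form of the vanilla D-SGD update and the spectral-gap quantity appearing in the bounds above; the notation $W$, $x_i^{(t)}$, $\eta$, $\xi_i^{(t)}$ is assumed here for illustration and is not taken from the abstract:
\begin{align}
x_i^{(t+1)} &= \sum_{j=1}^{m} W_{ij}\left(x_j^{(t)} - \eta\,\nabla f\!\left(x_j^{(t)};\,\xi_j^{(t)}\right)\right), \qquad i = 1,\dots,m,\\
\lambda &= \max\bigl\{|\lambda_2(W)|,\,|\lambda_m(W)|\bigr\}, \qquad \text{spectral gap} = 1-\lambda,
\end{align}
where $W \in \mathbb{R}^{m\times m}$ is a doubly stochastic mixing matrix encoding the communication topology and $\lambda_2(W),\dots,\lambda_m(W)$ are its eigenvalues other than $1$. Under this sketch, a better-connected topology yields a smaller $\lambda$ (larger spectral gap), which tightens the $\mathcal{O}{(m/N+1/m+\lambda^2)}$ stability bound, consistent with the positive correlation stated above.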