We propose a family of lossy integer compression schemes for Stochastic Gradient Descent (SGD) that do not communicate a single float. This is achieved by multiplying floating-point vectors by a number known to every device and then rounding the result to an integer. Our theory shows that the iteration complexity of SGD does not change, up to constant factors, when the vectors are scaled properly. Moreover, this holds for both convex and non-convex functions, with and without overparameterization. In contrast to other compression-based algorithms, ours preserves the convergence rate of SGD even on non-smooth problems. Finally, we show that when the data is significantly heterogeneous, it may become increasingly hard to keep the integers bounded, and we propose an alternative algorithm, IntDIANA, to solve this class of problems.
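To make the core mechanism concrete, here is a minimal sketch of such a floatless compression step: each device multiplies its gradient by a shared scale, rounds stochastically so that the compression is unbiased, and transmits only integers. The function names, the `scale` parameter, and the use of stochastic rounding are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def int_compress(x: np.ndarray, scale: float, rng: np.random.Generator) -> np.ndarray:
    """Encode a float vector as integers; `scale` is known to every device,
    so only the integer vector needs to be communicated."""
    scaled = x * scale
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part, so that
    # E[encoded] = x * scale, i.e. the compression is unbiased.
    encoded = low + (rng.random(x.shape) < (scaled - low))
    return encoded.astype(np.int64)

def int_decompress(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original vector on the receiver."""
    return q.astype(np.float64) / scale

rng = np.random.default_rng(0)
g = rng.standard_normal(5)              # a stochastic gradient
q = int_compress(g, scale=100.0, rng=rng)
print(q)                                # only integers are communicated
print(int_decompress(q, scale=100.0) - g)  # small reconstruction error
```

A larger `scale` shrinks the rounding error but produces larger integers, which is why, as the abstract notes, keeping the integers bounded becomes harder under strong data heterogeneity.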