Variational inequalities are a broad and flexible class of problems that includes minimization, saddle point, and fixed point problems as special cases. As a result, variational inequalities arise in a variety of applications, ranging from equilibrium search to adversarial learning. The ever-increasing size of data and models demands parallel and distributed computing for real-world machine learning problems, most of which can be represented as variational inequalities. Meanwhile, most distributed approaches have a significant bottleneck: the cost of communication. The three main techniques for reducing both the total number of communication rounds and the cost of a single round are exploiting the similarity of local functions, compressing the transmitted information, and performing local updates. In this paper, we combine all three approaches. Such a triple synergy did not previously exist for variational inequalities and saddle point problems, nor even for minimization problems. The methods presented in this paper have the best known theoretical guarantees on communication complexity and significantly outperform other methods for distributed variational inequalities. The theoretical results are confirmed by adversarial learning experiments on synthetic and real datasets.
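For concreteness, the problem class can be stated in its standard form (this is the conventional formulation from the literature, not a definition quoted from this abstract): find $z^* \in \mathcal{Z}$ such that
\[
\langle F(z^*),\, z - z^* \rangle \ge 0 \quad \text{for all } z \in \mathcal{Z},
\]
where $F : \mathcal{Z} \to \mathbb{R}^d$ is an operator on a convex set $\mathcal{Z}$. Minimization of a smooth $f$ is recovered by taking $F = \nabla f$, and the saddle point problem $\min_x \max_y g(x, y)$ by taking $F(x, y) = (\nabla_x g(x, y),\, -\nabla_y g(x, y))$, which is how both appear as special cases.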
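As one illustration of the compression component, the sketch below shows a standard unbiased rand-k sparsifier, a common choice in the communication-compression literature. The specific compressor class used by the paper is not stated in this abstract, so this operator and the surrounding setup are assumptions for illustration only.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k sparsification: keep k randomly chosen coordinates
    and rescale by d/k so that E[rand_k(x)] = x."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)  # k coordinates kept
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)  # rescaling makes the estimator unbiased
    return out

# Hypothetical usage: a worker compresses its local operator value
# (e.g., an evaluation F_i(z) of the gradient field) before sending it
# to the server, reducing per-round traffic from d to k floats plus indices.
rng = np.random.default_rng(0)
g = rng.standard_normal(1000)   # stand-in for a local operator value
msg = rand_k(g, k=100, rng=rng)  # 10x fewer nonzero entries transmitted
```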