Matrix scaling problems with sparse cost matrices arise frequently in domains such as optimal transport, image processing, and machine learning. The Sinkhorn-Knopp algorithm is a popular iterative method for solving these problems, but its convergence properties in the presence of sparsity have not been thoroughly analyzed. This paper presents a theoretical analysis of the convergence rate of the Sinkhorn-Knopp algorithm for sparse cost matrices. We derive novel bounds on the convergence rate that depend explicitly on the sparsity pattern and the density (fraction of nonzero entries) of the cost matrix. These bounds provide new insight into the behavior of the algorithm and highlight the potential for exploiting sparsity to build more efficient solvers. We also relate our results to existing convergence guarantees for dense matrices, showing that our bounds recover the dense case as a special instance. Our analysis reveals that the convergence rate improves as the matrix becomes denser and as the minimum nonzero entry of the cost matrix grows relative to its maximum entry. These findings have important practical implications, suggesting that the Sinkhorn-Knopp algorithm is particularly well suited to large-scale matrix scaling problems with sparse cost matrices arising in real-world applications. Future research directions include deriving tighter bounds for structured sparsity patterns, developing algorithm variants that actively exploit sparsity, and empirically validating the theoretical results on real-world datasets. This work advances the understanding of the Sinkhorn-Knopp algorithm for an important class of matrix scaling problems and lays a foundation for designing more efficient and scalable solvers in practice.
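For concreteness, the sketch below shows the standard Sinkhorn-Knopp iteration applied to a SciPy sparse matrix: alternating row and column rescalings until the prescribed marginals are met. This is not code from the paper; the function name, tolerance, and stopping rule are illustrative choices, and the matrix is assumed to have total support (no zero rows or columns) so the divisions are well defined.

```python
import numpy as np
import scipy.sparse as sp

def sinkhorn_knopp(K, r, c, max_iter=1000, tol=1e-9):
    """Find positive vectors u, v such that diag(u) @ K @ diag(v)
    has row sums r and column sums c.

    K is a nonnegative sparse matrix assumed to have total support
    (no zero rows or columns); otherwise a division by zero occurs.
    """
    u = np.ones(K.shape[0])
    v = np.ones(K.shape[1])
    for _ in range(max_iter):
        u = r / (K @ v)       # row-scaling update
        v = c / (K.T @ u)     # column-scaling update
        # stop when the row sums of the scaled matrix match r
        row_sums = u * (K @ v)
        if np.max(np.abs(row_sums - r)) < tol:
            break
    return u, v

# Illustrative usage on a random sparse matrix (1% density).
rng = np.random.default_rng(0)
n = 1000
K = sp.random(n, n, density=0.01, random_state=rng, format="csr")
K = K + sp.eye(n, format="csr")   # add the identity to guarantee total support
r = np.full(n, 1.0 / n)
c = np.full(n, 1.0 / n)
u, v = sinkhorn_knopp(K, r, c)
P = sp.diags(u) @ K @ sp.diags(v)  # approximately doubly stochastic
```

Because each iteration costs only two sparse matrix-vector products, the per-iteration work scales with the number of nonzeros rather than with n^2, which is the practical motivation for the sparsity-dependent convergence bounds discussed above.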