Sliced Wasserstein distances preserve properties of classic Wasserstein distances while being more scalable for computation and estimation in high dimensions. The goal of this work is to quantify this scalability from three key aspects: (i) empirical convergence rates; (ii) robustness to data contamination; and (iii) efficient computational methods. For empirical convergence, we derive fast rates with explicit dependence of constants on dimension, subject to log-concavity of the population distributions. For robustness, we characterize minimax optimal, dimension-free robust estimation risks, and show an equivalence between robust sliced 1-Wasserstein estimation and robust mean estimation. This enables lifting statistical and algorithmic guarantees available for the latter to the sliced 1-Wasserstein setting. Moving on to computational aspects, we analyze the Monte Carlo estimator for the average-sliced distance, demonstrating that larger dimension can result in faster convergence of the numerical integration error. For the max-sliced distance, we focus on a subgradient-based local optimization algorithm that is frequently used in practice, albeit without formal guarantees, and establish an $O(\epsilon^{-4})$ computational complexity bound for it. Our theory is validated by numerical experiments, which altogether provide a comprehensive quantitative account of the scalability question.
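To make the average-sliced construction concrete, the following is a minimal sketch (not the paper's code) of the Monte Carlo estimator discussed above: random projection directions are drawn from the unit sphere, both samples are projected onto each direction, and the resulting one-dimensional 1-Wasserstein distances are averaged. The function name `sliced_w1_mc` and all parameters are illustrative assumptions, not part of the paper.

```python
# A minimal, illustrative sketch of the Monte Carlo estimator for the
# average-sliced 1-Wasserstein distance between two empirical samples.
import numpy as np

def sliced_w1_mc(X, Y, num_projections=500, rng=None):
    """Monte Carlo estimate of the average-sliced W1 between empirical measures.

    X, Y : (n, d) arrays with the same number of points n.
    Averages the 1D W1 distance over random projection directions.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Uniform directions on the unit sphere: normalize standard Gaussian draws.
    theta = rng.standard_normal((num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples; each row is one slice of the data.
    X_proj = theta @ X.T          # shape (num_projections, n)
    Y_proj = theta @ Y.T
    # For equal-size empirical measures on the line, W1 equals the mean
    # absolute difference of the sorted projections (order-statistics coupling).
    X_proj.sort(axis=1)
    Y_proj.sort(axis=1)
    return np.abs(X_proj - Y_proj).mean()

# Example usage: two Gaussian samples in dimension 50.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
Y = rng.standard_normal((1000, 50)) + 0.5
print(sliced_w1_mc(X, Y, num_projections=500, rng=1))
```

The number of projections controls the Monte Carlo integration error over the sphere, which is the quantity whose dimension dependence the computational analysis above addresses; the max-sliced variant instead optimizes over the direction, e.g., by subgradient ascent.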