This paper considers the well-studied algorithmic regime of designing a $(1+\epsilon)$-approximation algorithm for a $k$-clustering problem that runs in time $f(k,\epsilon)\cdot\mathrm{poly}(n)$ (sometimes called an efficient parameterized approximation scheme or EPAS for short). Notable results of this kind include EPASes in the high-dimensional Euclidean setting for $k$-center [Bădoiu, Har-Peled, Indyk; STOC'02] as well as $k$-median and $k$-means [Kumar, Sabharwal, Sen; J. ACM 2010]. However, existing EPASes handle only basic objectives (such as $k$-center, $k$-median, and $k$-means) and are tailored to the specific objective and metric space. Our main contribution is a clean and simple EPAS that settles more than ten clustering problems (across multiple well-studied objectives as well as metric spaces) and unifies well-known EPASes. Our algorithm gives EPASes for a large variety of clustering objectives (for example, $k$-means, $k$-center, $k$-median, priority $k$-center, $\ell$-centrum, ordered $k$-median, socially fair $k$-median (also known as robust $k$-median), or, more generally, monotone norm $k$-clustering) and metric spaces (for example, continuous high-dimensional Euclidean spaces, metrics of bounded doubling dimension, bounded treewidth metrics, and planar metrics). Key to our approach is a new concept that we call bounded $\epsilon$-scatter dimension, an intrinsic complexity measure of a metric space that is a relaxation of the standard notion of bounded doubling dimension. Our main technical result shows that two conditions are essentially sufficient for our algorithm to yield an EPAS on the input metric $M$ for any clustering objective: (i) the objective is described by a monotone (not necessarily symmetric!) norm, and (ii) the $\epsilon$-scatter dimension of $M$ is upper bounded by a function of $\epsilon$.
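For concreteness, here is a minimal sketch of the monotone norm $k$-clustering objective, stated in illustrative notation (the symbols $P$, $\delta$, and $f$ below are ours; the formal definitions appear in the body of the paper). Given a metric space $M=(X,\delta)$, a set of $n$ input points $P=\{p_1,\ldots,p_n\}\subseteq X$, and a monotone norm $f\colon\mathbb{R}_{\ge 0}^{n}\to\mathbb{R}_{\ge 0}$, the goal is to choose a set $C$ of $k$ centers minimizing
\[
  \mathrm{cost}(C) \;=\; f\bigl(\delta(p_1,C),\ldots,\delta(p_n,C)\bigr),
  \qquad\text{where } \delta(p,C)=\min_{c\in C}\delta(p,c).
\]
Instantiating $f$ as the $\ell_1$ norm, the $\ell_\infty$ norm, or the top-$\ell$ norm (the sum of the $\ell$ largest entries) recovers $k$-median, $k$-center, and $\ell$-centrum, respectively.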