When designing clustering algorithms, the choice of initial centers is crucial to the quality of the learned clusters. In this paper, we develop a new initialization scheme, called HST initialization, for the $k$-median problem in general metric spaces (e.g., discrete spaces induced by graphs), based on the construction of a metric embedding tree structure of the data. From the tree, we propose a novel and efficient search algorithm for good initial centers, which can subsequently be used by the local search algorithm. Our proposed HST initialization produces initial centers with lower error than those from another popular initialization method, $k$-median++, with comparable efficiency. The HST initialization can also be extended to the setting of differential privacy (DP) to generate private initial centers. We show that the error of DP local search seeded with our private HST initialization improves upon previous results on the approximation error, and approaches the lower bound within a small factor. Experiments corroborate the theory and demonstrate the effectiveness of our proposed method. Our approach can also be extended to the $k$-means problem.
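For concreteness, the $k$-median objective referenced above can be stated as follows (the notation here is ours, not from the paper): given a metric space $(X, d)$ and a data set $D \subseteq X$, choose a set of $k$ centers $C$ minimizing the total distance from each point to its nearest center,
\[
\mathrm{cost}(C) \;=\; \sum_{x \in D} \min_{c \in C} d(x, c), \qquad |C| = k.
\]
Initialization schemes such as $k$-median++ or the proposed HST initialization supply the starting set $C$, which local search then refines by swapping in candidate centers whenever the swap reduces this cost.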