Density peaks clustering (DPC) has become popular among clustering algorithms because of its simplicity and practicality. However, it has one main drawback: it is time-consuming owing to its high computational complexity. Herein, a density peaks clustering algorithm with sparse search and K-d tree is developed to address this problem. Firstly, a sparse distance matrix is computed with a K-d tree in place of the original full-rank distance matrix, so as to accelerate the calculation of local density. Secondly, a sparse search strategy is proposed to accelerate the computation of the relative separation of each data point, using the intersection of its set of $k$ nearest neighbors and the set of data points with higher local density. Furthermore, a second-order difference method on the decision values is adopted to determine the cluster centers adaptively. Finally, experiments are carried out on datasets with different distribution characteristics, comparing against six other state-of-the-art clustering algorithms. The results show that the algorithm effectively reduces the computational complexity of the original DPC from $O(n^2K)$ to $O(n(n^{1-1/K}+k))$; the efficiency gain is especially remarkable on larger datasets. Moreover, the clustering accuracy is also improved to a certain extent. It can therefore be concluded that the overall performance of the newly proposed algorithm is excellent.
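The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a kNN surrogate for the local density (inverse mean kNN distance), uses `scipy.spatial.cKDTree` for the K-d tree queries, and the function name `dpc_knn_sketch` and parameter `k` are chosen for illustration. The key idea it demonstrates is the sparse search: for each point, a denser neighbor is first sought among its $k$ nearest neighbors, and only on a miss does the search fall back to all denser points.

```python
import numpy as np
from scipy.spatial import cKDTree


def dpc_knn_sketch(X, k=8):
    """Illustrative DPC variant: K-d tree kNN density + sparse delta search."""
    n = len(X)
    tree = cKDTree(X)                      # K-d tree over the data
    dists, idxs = tree.query(X, k=k + 1)   # kNN query; column 0 is the point itself
    # Local density: inverse mean kNN distance (an assumed surrogate for
    # the cutoff-kernel density of the original DPC).
    rho = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)
    order = np.argsort(-rho)               # indices by decreasing density
    delta = np.full(n, np.inf)             # relative separation
    for rank, i in enumerate(order):
        if rank == 0:
            continue                       # densest point handled after the loop
        # Sparse search: intersect the kNN of i with the denser-point set.
        higher = [j for j in idxs[i, 1:] if rho[j] > rho[i]]
        if higher:
            delta[i] = min(np.linalg.norm(X[i] - X[j]) for j in higher)
        else:
            # Fallback: scan all points denser than i.
            denser = order[:rank]
            delta[i] = np.min(np.linalg.norm(X[denser] - X[i], axis=1))
    delta[order[0]] = delta[np.isfinite(delta)].max()
    gamma = rho * delta                    # decision values
    # Adaptive center count via the second-order difference of the sorted
    # decision values (largest jump in curvature marks the cut).
    g_sorted = np.sort(gamma)[::-1]
    d2 = np.diff(g_sorted, n=2)
    n_centers = int(np.argmax(np.abs(d2)) + 1)
    return rho, delta, gamma, n_centers
```

In this sketch the K-d tree makes each kNN query cost roughly $O(n^{1-1/K})$ in $K$ dimensions, which is where the stated complexity reduction comes from; the fallback scan is only hit by density peaks, which are few.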