The failure of the Euclidean norm to reliably distinguish between nearby and distant points in high-dimensional space is well known. This phenomenon of distance concentration manifests in a variety of data distributions, with iid or correlated features, including centrally distributed and clustered data. Unsupervised learning based on Euclidean nearest neighbors, and more general proximity-oriented data mining tasks such as clustering, may therefore be adversely affected by distance concentration in high-dimensional applications. While considerable work has been done on developing clustering algorithms with reliable high-dimensional performance, the problem of cluster validation--determining the natural number of clusters in a dataset--has not been carefully examined in high-dimensional settings. In this work we investigate how the sensitivities of common Euclidean-norm-based cluster validity indices scale with dimension for a variety of synthetic data schemes, including well-separated and noisy clusters, and find that the overwhelming majority of indices have improved or stable sensitivity in high dimensions. The curse of dimensionality is therefore dispelled for this class of fairly generic data schemes.
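The distance-concentration effect described above can be illustrated with a short simulation (a minimal sketch, not taken from the paper; the iid Gaussian data scheme, sample size, and dimensions are illustrative assumptions). The relative contrast, the gap between the farthest and nearest neighbor distances divided by the nearest distance, shrinks as dimension grows, which is what undermines Euclidean proximity queries:

```python
import numpy as np

def relative_contrast(n_points: int, dim: int, rng: np.random.Generator) -> float:
    """Compute (d_max - d_min) / d_min for Euclidean distances from the
    origin to n_points iid standard-Gaussian points in `dim` dimensions."""
    X = rng.standard_normal((n_points, dim))
    d = np.linalg.norm(X, axis=1)
    return (d.max() - d.min()) / d.min()

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    # Contrast is large in low dimensions and collapses as dim grows,
    # i.e. all points become nearly equidistant from the query point.
    print(f"dim={dim:5d}  relative contrast={relative_contrast(1000, dim, rng):.3f}")
```

In low dimensions the nearest point is typically orders of magnitude closer than the farthest; by dim = 1000 the distances concentrate tightly around their mean and the contrast falls well below 1.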