The evaluation of clustering algorithms can involve running them on a variety of benchmark problems and comparing their outputs to the reference (ground-truth) groupings provided by experts. Unfortunately, many research papers and graduate theses consider only a small number of datasets. Moreover, the fact that there can be many equally valid ways to cluster a given problem set is rarely taken into account. To overcome these limitations, we have developed a framework whose aim is to introduce a consistent methodology for testing clustering algorithms. Furthermore, we have aggregated, polished, and standardised many clustering benchmark dataset collections referred to across the machine learning and data mining literature, and included new datasets of different dimensionalities, sizes, and cluster types. An interactive datasets explorer, the documentation of the Python API, a description of the ways to interact with the framework from other programming languages such as R or MATLAB, and other details are all provided at <https://clustering-benchmarks.gagolewski.com>.
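To illustrate the kind of comparison described above, here is a minimal sketch of one standard way to score a clustering against a reference partition: the adjusted Rand index. It is chance-corrected and invariant to label permutations, which matters because cluster labels are arbitrary identifiers. This is a self-contained illustrative implementation, not the framework's own code; the framework's API is documented at the URL above.

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Chance-corrected agreement between two partitions of the same
    points: 1.0 means identical groupings (up to relabelling),
    values near 0.0 indicate agreement no better than chance."""
    n = len(labels_true)
    # Contingency counts: how many points fall in each (true, pred) pair
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Label names do not matter -- only the induced grouping does:
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```

Because a dataset may admit several equally valid reference partitions, a benchmark run would typically take the best (or report all) scores against each available ground-truth labelling rather than a single one.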