This paper presents a batch-wise, density-based clustering approach for local outlier detection in massive-scale datasets. Unlike the well-known traditional algorithms, which assume that all the data are memory-resident, our proposed method is scalable and processes the input data chunk by chunk, within the confines of a limited memory buffer. In the first phase, a temporary clustering model is built; it is then gradually updated by analyzing consecutive memory loads of points. At the end of this scalable clustering phase, the approximate structure of the original clusters is obtained. Finally, in another scan of the entire dataset, a suitable criterion assigns each object an outlierness score, called SDCOR (Scalable Density-based Clustering Outlierness Ratio). Evaluations on real-life and synthetic datasets demonstrate that the proposed method has low, linear time complexity and is more effective and efficient than the best-known conventional density-based methods, which must load all data into memory, as well as some fast distance-based methods that can operate on disk-resident data.
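To make the two-phase, chunk-by-chunk workflow described above concrete, the following is a minimal sketch, not the authors' SDCOR algorithm: the incremental model here is a crude set of running cluster centroids, and the score is a plain distance to the nearest centroid. Names such as `chunk_size`, `update_model`, and the choice of `k` are illustrative assumptions.

```python
import numpy as np

def iter_chunks(X, chunk_size):
    """Yield memory-sized chunks of a (possibly disk-resident) dataset."""
    for start in range(0, len(X), chunk_size):
        yield X[start:start + chunk_size]

def update_model(model, chunk, k=2):
    """Stand-in for the incremental clustering phase: assign each point
    to its nearest centroid, then refresh the centroids as running means."""
    if model is None:
        # seed centroids from the first memory load of points
        idx = np.random.choice(len(chunk), k, replace=False)
        model = {"centroids": chunk[idx].copy(), "counts": np.ones(k)}
    c, n = model["centroids"], model["counts"]
    labels = np.argmin(((chunk[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
    for j in range(k):  # batch running-mean update per cluster
        pts = chunk[labels == j]
        if len(pts):
            n[j] += len(pts)
            c[j] += (pts - c[j]).sum(axis=0) / n[j]
    return model

def score(model, x):
    """Outlierness as distance to the nearest approximate cluster center."""
    return np.min(np.linalg.norm(model["centroids"] - x, axis=1))

# Usage: the first scan builds the approximate cluster structure;
# the second scan assigns every object a score.
X = np.vstack([np.random.randn(500, 2), [[8.0, 8.0]]])  # one obvious outlier
model = None
for chunk in iter_chunks(X, chunk_size=128):
    model = update_model(model, chunk)
scores = np.array([score(model, x) for x in X])
print("most outlying index:", scores.argmax())  # likely 500
```

Both scans touch each point a constant number of times, which is what yields the linear time complexity claimed above; the actual method's density-based model and outlierness criterion are detailed in the body of the paper.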