Clustering is a key task in machine learning, and $k$-means is widely used for its simplicity and effectiveness. Although 1D clustering is common, existing methods often fail to exploit the structure of 1D data, leading to inefficiencies. This thesis introduces optimized algorithms for greedy $k$-means++ initialization and Lloyd's algorithm that leverage sorted data, prefix sums, and binary search for improved computational performance. The main contributions are: (1) an optimized $k$-cluster algorithm achieving $O(l \cdot k^2 \cdot \log n)$ complexity for greedy $k$-means++ initialization and $O(i \cdot k \cdot \log n)$ for Lloyd's algorithm, where $l$ is the number of greedy $k$-means++ local trials and $i$ is the number of Lloyd's iterations, and (2) a binary search-based two-cluster algorithm achieving $O(\log n)$ runtime with deterministic convergence to a local minimum of Lloyd's algorithm. Benchmarks demonstrate a speedup of over 4500x compared to scikit-learn on large datasets while maintaining clustering quality as measured by the within-cluster sum of squares (WCSS). The algorithms also achieve a 300x speedup in an LLM quantization task, highlighting their utility in emerging applications. This thesis bridges theory and practice for 1D $k$-means clustering, delivering efficient and sound algorithms implemented in a JIT-optimized open-source Python library.
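The core idea referenced above can be illustrated with a minimal sketch: once the data are sorted, each cluster in a 1D Lloyd's iteration is a contiguous slice, so cluster boundaries can be located with binary search and cluster means computed in $O(1)$ from a prefix-sum array. The NumPy function below is a hypothetical illustration of this principle, not the thesis's library code; the name `lloyd_1d_sorted` and its interface are assumptions.

```python
import numpy as np

def lloyd_1d_sorted(x_sorted, init_centroids, n_iter=100):
    """Illustrative 1D Lloyd's iterations on pre-sorted data.

    Sketch only (not the thesis's actual implementation): prefix sums give
    O(1) cluster means, and binary search (np.searchsorted) gives the
    cluster boundaries, so each iteration costs O(k log n).
    """
    # prefix[i] = sum of the first i sorted points
    prefix = np.concatenate(([0.0], np.cumsum(x_sorted)))
    c = np.sort(np.asarray(init_centroids, dtype=float))
    for _ in range(n_iter):
        # With sorted data and sorted centroids, the boundary between two
        # adjacent clusters is the midpoint between their centroids.
        midpoints = (c[:-1] + c[1:]) / 2.0
        bounds = np.searchsorted(x_sorted, midpoints)   # binary search
        starts = np.concatenate(([0], bounds))
        ends = np.concatenate((bounds, [len(x_sorted)]))
        counts = ends - starts
        sums = prefix[ends] - prefix[starts]            # prefix-sum means
        new_c = np.where(counts > 0, sums / np.maximum(counts, 1), c)
        if np.allclose(new_c, c):
            break
        c = new_c
    return c
```

As a usage sketch, `lloyd_1d_sorted(np.sort(x), np.quantile(x, [0.1, 0.5, 0.9]))` would refine three initial centroids on a sorted sample `x`; the thesis's actual algorithms additionally cover greedy $k$-means++ initialization and the $O(\log n)$ two-cluster case.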