Many clustering algorithms are guided by cost functions such as the widely used $k$-means cost. These algorithms divide data points into clusters whose boundaries are often complicated, making the clustering decisions hard to explain. In a recent work, Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML'20) introduced explainable clustering, where the cluster boundaries are axis-parallel hyperplanes and the clustering is obtained by applying a decision tree to the data. The central question is: how much does the explainability constraint increase the value of the cost function? Given $d$-dimensional data points, we give an efficient algorithm that finds an explainable clustering whose $k$-means cost is at most $k^{1 - 2/d}\,\mathrm{poly}(d\log k)$ times the minimum cost achievable by a clustering without the explainability constraint, assuming $k, d \ge 2$. Combining this with an independent work by Makarychev and Shan (ICML'21), we get an improved bound of $k^{1 - 2/d}\,\mathrm{polylog}(k)$, which we show is optimal for every choice of $k, d \ge 2$ up to a poly-logarithmic factor in $k$. For $d = 2$ in particular, we show an $O(\log k \log\log k)$ bound, improving exponentially over the previous best bound of $\widetilde{O}(k)$.
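To make the explainability constraint concrete, the sketch below contrasts the two notions for the simplest case $k = 2$: an explainable clustering is induced by a single axis-parallel threshold cut (a depth-one decision tree), while the unconstrained optimum may split the points arbitrarily. This is only an illustrative brute-force sketch, not the paper's algorithm; the function names and the toy data set are invented for this example.

```python
import itertools


def kmeans_cost(clusters):
    """Sum of squared distances from each point to its cluster mean."""
    cost = 0.0
    for pts in clusters:
        if not pts:
            continue
        d = len(pts[0])
        mean = [sum(p[i] for p in pts) / len(pts) for i in range(d)]
        cost += sum(sum((p[i] - mean[i]) ** 2 for i in range(d)) for p in pts)
    return cost


def best_threshold_cut(points):
    """Best explainable 2-clustering: one axis-parallel cut (x_axis <= theta)."""
    best_cost, best_cut = float("inf"), None
    d = len(points[0])
    for axis in range(d):
        # Every distinct coordinate value (except the largest) is a candidate cut.
        for theta in sorted({p[axis] for p in points})[:-1]:
            left = [p for p in points if p[axis] <= theta]
            right = [p for p in points if p[axis] > theta]
            cost = kmeans_cost([left, right])
            if cost < best_cost:
                best_cost, best_cut = cost, (axis, theta)
    return best_cost, best_cut


def brute_force_2means(points):
    """Unconstrained 2-means optimum by enumerating all 2-partitions."""
    n = len(points)
    best = float("inf")
    for mask in range(1, 2 ** n):
        if mask & 1:
            continue  # keep point 0 in cluster a, so each partition is counted once
        a = [points[i] for i in range(n) if not (mask >> i) & 1]
        b = [points[i] for i in range(n) if (mask >> i) & 1]
        best = min(best, kmeans_cost([a, b]))
    return best


if __name__ == "__main__":
    # Two well-separated triangles in the plane (d = 2, k = 2).
    pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
    cut_cost, cut = best_threshold_cut(pts)
    opt = brute_force_2means(pts)
    # On well-separated data a single cut already matches the optimum;
    # the abstract's bounds control this ratio in the worst case.
    print(cut, cut_cost / opt)
```

On this toy input the single axis-parallel cut recovers the optimal clustering, so the cost ratio is 1; the paper's results bound how large this ratio can get in the worst case as $k$ and $d$ grow.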