The kernel thinning (KT) algorithm of Dwivedi and Mackey (2021) compresses an $n$-point distributional summary into a $\sqrt{n}$-point summary with better-than-Monte-Carlo maximum mean discrepancy (MMD) for a target kernel $\mathbf{k}$ by leveraging a less smooth square-root kernel. Here we provide four improvements. First, we show that KT applied directly to the target kernel yields a tighter $\mathcal{O}(\sqrt{\log n/n})$ integration error bound for each function $f$ in the reproducing kernel Hilbert space. This modification extends the reach of KT to any kernel, even non-smooth kernels that do not admit a square-root; demonstrates that KT is suitable even for heavy-tailed target distributions; and eliminates the exponential dimension-dependence and $(\log n)^{d/2}$ factors of standard square-root KT. Second, we show that, for analytic kernels, like Gaussian and inverse multiquadric, target kernel KT admits MMD guarantees comparable to square-root KT without the need for an explicit square-root kernel. Third, we prove that KT with a fractional $\alpha$-power kernel $\mathbf{k}_{\alpha}$ for $\alpha > 1/2$ yields better-than-Monte-Carlo MMD guarantees for non-smooth kernels, like Laplace and \Matern, that do not have square-roots. Fourth, we establish that KT applied to a sum of $\mathbf{k}$ and $\mathbf{k}_{\alpha}$ (a procedure we call KT+) simultaneously inherits the improved MMD guarantees of power KT and the tighter individual function guarantees of KT on the target kernel. Finally, we illustrate the practical benefits of target KT and KT+ for compression after high-dimensional independent sampling and challenging Markov chain Monte Carlo posterior inference.
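For context, a minimal sketch of the two error criteria referenced above, stated under standard definitions (the notation $\mathbb{P}_n$ for the input empirical distribution and $\tilde{x}_1,\dots,\tilde{x}_{\sqrt{n}}$ for the retained points is illustrative and not taken from the abstract): for a function $f$ in the RKHS $\mathcal{H}_{\mathbf{k}}$, the integration error is the gap between the full-sample and coreset averages of $f$, and the MMD is the supremum of this gap over the unit ball of $\mathcal{H}_{\mathbf{k}}$,
\[
\mathrm{MMD}_{\mathbf{k}}\big(\mathbb{P}_n, \mathbb{Q}_{\mathrm{KT}}\big)
\;=\; \sup_{\|f\|_{\mathcal{H}_{\mathbf{k}}} \le 1}
\Big| \tfrac{1}{n}\sum_{i=1}^{n} f(x_i) \;-\; \tfrac{1}{\sqrt{n}}\sum_{j=1}^{\sqrt{n}} f(\tilde{x}_j) \Big|,
\]
where $x_1,\dots,x_n$ are the input points. Subsampling $\sqrt{n}$ points i.i.d. only achieves MMD of order $n^{-1/4}$; this is the Monte Carlo baseline that the guarantees above improve upon.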