We analyse the performance of several iterative algorithms for the quantisation of a probability measure $\mu$, based on the minimisation of a Maximum Mean Discrepancy (MMD). Our analysis includes kernel herding, greedy MMD minimisation and Sequential Bayesian Quadrature (SBQ). We show that the finite-sample-size approximation error, measured by the MMD, decreases as $1/n$ for SBQ and also for kernel herding and greedy MMD minimisation when using a suitable step-size sequence. The upper bound on the approximation error is slightly better for SBQ, but the other methods are significantly faster, with a computational cost that increases only linearly with the number of points selected. This is illustrated by two numerical examples, with the target measure $\mu$ being uniform (a space-filling design application) and with $\mu$ a Gaussian mixture.
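To make the kernel-herding idea concrete, here is a minimal sketch of greedy candidate-set herding for a uniform target measure on $[0,1]^2$ (the space-filling setting mentioned above). This is an illustration under assumed choices, not the paper's implementation: the Gaussian kernel, the bandwidth `sigma`, the candidate-sample size, and the $1/(n+1)$ step-size are all assumptions made for the example, and the potential $\mathbb{E}_{y\sim\mu} k(x,y)$ is replaced by a Monte-Carlo estimate over the candidate set.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.1):
    """Gaussian (RBF) kernel matrix; the bandwidth sigma is an illustrative choice."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
# Large i.i.d. sample standing in for the target measure mu (uniform on [0,1]^2).
candidates = rng.random((2000, 2))
K = gaussian_kernel(candidates, candidates)
mu_pot = K.mean(axis=1)  # Monte-Carlo estimate of the potential E_{y~mu} k(x, y)

selected = []
herd_pot = np.zeros(len(candidates))  # running sum_i k(x, x_i) over selected points
for n in range(10):
    # Herding criterion: maximise E_{y~mu} k(x, y) - (1/(n+1)) * sum_i k(x, x_i),
    # i.e. favour regions where mu puts mass but no point has been placed yet.
    scores = mu_pot - herd_pot / (n + 1)
    j = int(np.argmax(scores))
    selected.append(j)
    herd_pot += K[:, j]

def mmd2(idx):
    """Squared MMD between the empirical candidate measure and the selected points."""
    sub = K[np.ix_(idx, idx)]
    return K.mean() - 2.0 * mu_pot[idx].mean() + sub.mean()
```

Each iteration costs only one pass over the candidate set, which reflects the linear per-point cost noted above; `mmd2(selected)` can be used to check that the (estimated) MMD shrinks as points are added.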