We analyse the performance of several iterative algorithms for the quantisation of a probability measure $\mu$, based on the minimisation of a Maximum Mean Discrepancy (MMD). Our analysis includes kernel herding, greedy MMD minimisation and Sequential Bayesian Quadrature (SBQ). We show that the finite-sample-size approximation error, measured by the MMD, decreases as $1/n$ for SBQ and also for kernel herding and greedy MMD minimisation when using a suitable step-size sequence. The upper bound on the approximation error is slightly better for SBQ, but the other methods are significantly faster, with a computational cost that increases only linearly with the number of points selected. This is illustrated by two numerical examples, with the target measure $\mu$ being uniform (a space-filling design application) and with $\mu$ a Gaussian mixture. They suggest that the bounds derived in the paper are overly pessimistic, in particular for SBQ. The sources of this pessimism are identified but seem difficult to counter.
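As a concrete illustration of the greedy MMD minimisation discussed above, here is a minimal sketch in Python. It approximates the target measure $\mu$ by a finite sample, uses a Gaussian kernel, and at each step selects the candidate point that minimises the squared MMD of the (equally weighted) empirical measure. The helper names `gaussian_kernel` and `greedy_mmd`, the kernel bandwidth, and the candidate-grid construction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.1):
    # x: (n, d), y: (m, d) -> (n, m) Gram matrix of exp(-||x-y||^2 / (2 sigma^2))
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def greedy_mmd(candidates, mu_sample, n_points, sigma=0.1):
    """Greedy MMD minimisation over a finite candidate set (illustrative sketch).

    mu_sample is a large i.i.d. sample approximating the target measure mu;
    the potential P(x) = int k(x, y) dmu(y) is replaced by a sample mean.
    Repeated selection of a candidate is allowed (as in kernel herding).
    """
    K = gaussian_kernel(candidates, candidates, sigma)
    P = gaussian_kernel(candidates, mu_sample, sigma).mean(axis=1)
    selected = []
    for n in range(1, n_points + 1):
        if selected:
            cross = K[selected, :].sum(axis=0)          # sum_i k(x_i, x) for each candidate x
            sum_p = P[selected].sum()
            k_ss = K[np.ix_(selected, selected)].sum()  # sum_{i,j} k(x_i, x_j)
        else:
            cross, sum_p, k_ss = np.zeros(len(candidates)), 0.0, 0.0
        # squared MMD of the current set plus candidate x, up to the constant
        # term int int k dmu dmu which does not affect the argmin
        crit = (k_ss + 2.0 * cross + np.diag(K)) / n ** 2 - 2.0 * (sum_p + P) / n
        selected.append(int(np.argmin(crit)))
    return candidates[selected]

# Space-filling example: target measure uniform on [0, 1]
rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 201)[:, None]
mu_sample = rng.uniform(0.0, 1.0, size=(2000, 1))
points = greedy_mmd(candidates, mu_sample, 20)
```

On a uniform target the selected points spread out over $[0,1]$, which is the space-filling behaviour mentioned in the first numerical example; each iteration costs $O(N)$ kernel evaluations over the candidate set, reflecting the linear growth of cost with the number of selected points.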