We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances. This is achieved by observing that empirical convex divergences are (partially ordered) reverse submartingales with respect to the exchangeable filtration, coupled with maximal inequalities for such processes. These techniques appear to be complementary and powerful additions to the existing literature on both confidence sequences and convex divergences. We construct an offline-to-sequential device that converts a wide array of existing offline concentration inequalities into time-uniform confidence sequences that can be continuously monitored, providing valid tests or confidence intervals at arbitrary stopping times. The resulting sequential bounds pay only an iterated logarithmic price over the corresponding fixed-time bounds, retaining the same dependence on problem parameters (like dimension or alphabet size, if applicable). These results are also applicable to more general convex functionals -- like the negative differential entropy, suprema of empirical processes, and V-statistics -- and to more general processes satisfying a key leave-one-out property.
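The reverse-submartingale observation admits a short derivation. The following is a sketch under assumed notation ($P_n$ for the empirical measure of i.i.d. draws $X_1, \dots, X_n \sim P$, $\mathcal{E}_n$ for the exchangeable $\sigma$-field, and $D(P, \cdot)$ a divergence convex in its second argument), not the paper's exact statement:

```latex
% Leave-one-out identity: the empirical measure on n+1 points is the
% average of its n+1 leave-one-out empirical measures.
P_{n+1} \;=\; \frac{1}{n+1} \sum_{i=1}^{n+1} P_{n+1}^{(-i)},
\qquad
P_{n+1}^{(-i)} \;:=\; \frac{1}{n} \sum_{j \neq i} \delta_{X_j}.

% Conditionally on \mathcal{E}_{n+1}, the measure P_n is a uniformly
% random leave-one-out of P_{n+1}, so convexity of D(P, \cdot) and
% Jensen's inequality give the reverse-submartingale property:
\mathbb{E}\!\left[ D(P, P_n) \,\middle|\, \mathcal{E}_{n+1} \right]
\;=\; \frac{1}{n+1} \sum_{i=1}^{n+1} D\!\left(P, P_{n+1}^{(-i)}\right)
\;\geq\; D\!\left(P, \frac{1}{n+1} \sum_{i=1}^{n+1} P_{n+1}^{(-i)}\right)
\;=\; D(P, P_{n+1}).
```

Maximal inequalities for such reverse submartingales then convert a fixed-time deviation bound of order $\sqrt{\log(1/\alpha)/n}$ into a time-uniform one of order $\sqrt{(\log\log n + \log(1/\alpha))/n}$, which is the iterated logarithmic price referred to below.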