Given $n$ i.i.d. samples drawn from an unknown distribution $P$, when is it possible to produce a larger set of $n+m$ samples that cannot be distinguished from $n+m$ i.i.d. samples drawn from $P$? Axelrod et al. (2019) formalized this question as the sample amplification problem, and gave optimal amplification procedures for discrete distributions and Gaussian location models. However, these procedures and the associated lower bounds are tailored to the specific distribution classes, and a general statistical understanding of sample amplification is still largely missing. In this work, we place the sample amplification problem on a firm statistical foundation by deriving generally applicable amplification procedures, lower-bound techniques, and connections to existing statistical notions. Our techniques apply to a large class of distributions including the exponential family, and establish a rigorous connection between sample amplification and distribution learning.
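To make the problem concrete, the following is a minimal, naive sketch of amplification in a Gaussian location model $N(\mu, \sigma^2)$ with known $\sigma$: append $m$ fresh draws centered at the empirical mean. This is an illustration of the setup only, not the optimal procedure of Axelrod et al. (2019); the function name `amplify_gaussian` and the fixed $\sigma = 1$ are assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def amplify_gaussian(samples, m, sigma=1.0):
    """Naively amplify n samples from N(mu, sigma^2) to n + m samples.

    Draws m extra points from N(x_bar, sigma^2), where x_bar is the
    empirical mean of the input -- an illustrative sketch of the
    amplification setup, not the paper's optimal procedure.
    """
    x_bar = samples.mean()
    extra = rng.normal(loc=x_bar, scale=sigma, size=m)
    return np.concatenate([samples, extra])

# Hypothetical usage: amplify n = 100 samples by m = 10.
n, m = 100, 10
original = rng.normal(loc=2.0, scale=1.0, size=n)
amplified = amplify_gaussian(original, m)
print(amplified.shape)  # (110,)
```

Whether such an output set is actually indistinguishable from $n+m$ genuine i.i.d. draws — and for how large an $m$ — is precisely what the amplification framework quantifies.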