We study the efficacy and efficiency of deep generative networks for approximating probability distributions. We prove that neural networks can transform a one-dimensional source distribution into a distribution that is arbitrarily close to a high-dimensional target distribution in Wasserstein distances. Upper bounds on the approximation error are obtained in terms of the neural networks' width and depth. We show that the approximation error grows at most linearly in the ambient dimension and that the approximation order depends only on the intrinsic dimension of the target distribution. In contrast, when $f$-divergences are used as metrics of distributions, the approximation property is different. We prove that, in order to approximate the target distribution in $f$-divergences, the dimension of the source distribution cannot be smaller than the intrinsic dimension of the target distribution. Therefore, $f$-divergences are less adequate than Wasserstein distances as metrics of distributions for generating samples.