We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a $1$-dimensional uniform input distribution. What is more, this is possible without incurring a cost, in terms of approximation error measured in Wasserstein distance, relative to generating the $d$-dimensional target distribution from $d$ independent random variables. This is enabled by a vast generalization of the space-filling approach discovered in (Bailey & Telgarsky, 2018). The construction we propose elicits the importance of network depth in driving the Wasserstein distance between the target distribution and its neural network approximation to zero. Finally, we find that, for histogram target distributions, the number of bits needed to encode the corresponding generative network equals the fundamental limit for encoding probability distributions as dictated by quantization theory.
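To make the $d=1$ case of the abstract concrete: the inverse CDF of a histogram distribution is continuous and piecewise linear, and hence exactly representable by a ReLU network; pushing a uniform input through it generates the target distribution. The sketch below is illustrative only and is not the paper's construction — the bin weights are hypothetical, and the piecewise-linear map is implemented directly in NumPy rather than as an explicit network. The empirical Wasserstein distance between the pushed-forward samples and direct samples from the histogram then reflects sampling error alone.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical histogram target on [0, 1]: four bins with these weights.
weights = np.array([0.1, 0.4, 0.3, 0.2])
edges = np.linspace(0.0, 1.0, len(weights) + 1)
cdf = np.concatenate([[0.0], np.cumsum(weights)])

def inverse_cdf(u):
    """Piecewise-linear inverse CDF of the histogram distribution.

    Being continuous and piecewise linear, this map is exactly
    representable by a ReLU network; here it stands in for one.
    """
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(weights) - 1)
    frac = (u - cdf[idx]) / weights[idx]        # position within bin idx
    return edges[idx] + frac * (edges[idx + 1] - edges[idx])

n = 100_000
u = rng.uniform(size=n)            # 1-dimensional uniform input
pushed = inverse_cdf(u)            # transport map applied to the input

# Reference samples drawn directly from the histogram distribution.
bins = rng.choice(len(weights), size=n, p=weights)
direct = edges[bins] + rng.uniform(size=n) * (edges[bins + 1] - edges[bins])

# Empirical 1-Wasserstein distance: small, on the order of sampling error.
print(wasserstein_distance(pushed, direct))
```

A few sentences of context: in higher dimensions the paper's space-filling construction replaces the inverse CDF with a transport map whose output traces out $d$-dimensional space, and network depth is what drives the residual Wasserstein distance to zero; the 1-D case above only shows why piecewise-linear (ReLU-expressible) maps suffice for histogram targets.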