We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous target distribution. The key ingredient of our design is a generalization of the "space-filling" property of sawtooth functions discovered in (Bailey & Telgarsky, 2018). We elucidate the importance of depth in our neural network construction: it is depth that drives the Wasserstein distance between the target distribution and the approximation realized by the network to zero. An extension to output distributions of arbitrary dimension is outlined. Finally, we show that the proposed construction does not incur a cost, in terms of error measured in Wasserstein distance, relative to generating $d$-dimensional target distributions from $d$ independent random variables.
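As a concrete illustration (not the paper's construction itself), the following NumPy sketch demonstrates the space-filling property of sawtooth compositions from (Bailey & Telgarsky, 2018) that the abstract builds on: for $Z \sim \mathrm{Unif}[0,1]$ and $g_k$ the $k$-fold composition of the ReLU sawtooth $g$, the pairs $(Z, g_k(Z))$ approach the uniform distribution on $[0,1]^2$ in Wasserstein distance as the depth $k$ grows. The helper names `sawtooth` and `sawtooth_k` and the grid-occupancy check are illustrative choices, not taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sawtooth(x):
    # One "tooth": g(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1],
    # written as a linear combination of ReLUs, so the k-fold
    # composition below is a ReLU network with k hidden layers.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sawtooth_k(x, k):
    # k-fold composition: g_k has 2^(k-1) teeth on [0, 1].
    for _ in range(k):
        x = sawtooth(x)
    return x

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, size=200_000)  # one-dimensional uniform noise

for k in (2, 6, 10):
    # Candidate 2-D samples: points on the graph of g_k.
    x, y = z, sawtooth_k(z, k)
    # Crude uniformity check: occupancy of a 16x16 grid on [0,1]^2.
    # Ratios near 1 indicate the samples fill the square evenly.
    hist, _, _ = np.histogram2d(x, y, bins=16, range=[[0, 1], [0, 1]])
    print(f"k={k:2d}: min/mean={hist.min()/hist.mean():.2f}, "
          f"max/mean={hist.max()/hist.mean():.2f}")
```

As the depth $k$ increases, the printed occupancy ratios approach 1, mirroring the abstract's claim that depth is what drives the Wasserstein error to zero.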