The regularity of images generated by convolutional neural networks, such as the U-net, generative adversarial networks, or the deep image prior, is analyzed. In a resolution-independent, infinite-dimensional setting, it is shown that such images, represented as functions, are always continuous and, in some circumstances, even continuously differentiable, contradicting the widely accepted modeling of sharp edges in images via jump discontinuities. While such statements require an infinite-dimensional setting, the connection to (discretized) neural networks used in practice is made by considering the limit as the resolution approaches infinity. As a practical consequence, the results of this paper suggest refraining from basic L2 regularization of network weights when images are the network output.
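To make the practical recommendation concrete, the following is a minimal sketch, assuming PyTorch, of a deep-image-prior-style setup in which the basic L2 penalty on network weights (weight decay) is deliberately disabled because the network output is an image. The architecture, input sizes, and training loop are illustrative placeholders, not the paper's construction.

```python
import torch
import torch.nn as nn

# Hypothetical small convolutional generator in the spirit of the
# deep image prior: it maps a fixed random input to an image.
class SmallGenerator(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

g = SmallGenerator()
z = torch.randn(1, 32, 64, 64)       # fixed random input, as in the deep image prior
target = torch.rand(1, 3, 128, 128)  # placeholder noisy observation

# Per the paper's suggestion: weight_decay=0.0 disables the basic
# L2 regularization of the network weights for image outputs.
opt = torch.optim.Adam(g.parameters(), lr=1e-3, weight_decay=0.0)

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(g(z), target)
    loss.backward()
    opt.step()
```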