Deep neural networks have seen tremendous success in recent years. Since their training is performed on digital hardware, in this paper we analyze what can actually be computed on current hardware platforms modeled as Turing machines, which leads to inherent restrictions of deep learning. For this, we focus on the class of inverse problems, which, in particular, encompasses any task of reconstructing data from measurements. We prove that finite-dimensional inverse problems are not Banach-Mazur computable for small relaxation parameters. In fact, our result even holds for Borel-Turing computability, i.e., there does not exist an algorithm which performs the training of a neural network on digital hardware for every given accuracy. This establishes a conceptual barrier on the capabilities of neural networks for finite-dimensional inverse problems, given that the computations are performed on digital hardware.