Deep neural networks have seen tremendous success in recent years. Since training is performed on digital hardware, in this paper we analyze what can actually be computed on current hardware platforms modeled as Turing machines, which leads to inherent restrictions of deep learning. For this, we focus on the class of inverse problems, which, in particular, encompasses any task of reconstructing data from measurements. We prove that finite-dimensional inverse problems are not Banach-Mazur computable for small relaxation parameters. In fact, our result even holds for Borel-Turing computability, i.e., there does not exist an algorithm which performs the training of a neural network on digital hardware for every given accuracy. Moreover, our results establish a lower bound on the accuracy that can be obtained algorithmically. This establishes a conceptual barrier on the capabilities of neural networks for finite-dimensional inverse problems, given that the computations are performed on digital hardware.
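To make the setting concrete, a minimal sketch of a finite-dimensional inverse problem is given below. This is an illustration only, not the construction from the paper: it shows a linear forward operator, measurements, and a Tikhonov-regularized reconstruction whose regularization weight plays the role of a relaxation parameter. All names (`A`, `x_true`, `lam`) are assumptions chosen for the example.

```python
import numpy as np

# Illustrative finite-dimensional linear inverse problem: recover x from
# measurements y = A x. (Not the paper's construction; a generic example.)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # forward (measurement) operator
x_true = rng.standard_normal(10)    # unknown signal to reconstruct
y = A @ x_true                      # observed measurements

# Tikhonov-regularized reconstruction with relaxation parameter lam:
#   x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2
# solved via its normal equations (A^T A + lam I) x = A^T y.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)

# Report the reconstruction error for this noiseless, overdetermined case.
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

In this noiseless, overdetermined toy case the reconstruction error is small; the paper's point is about what such reconstruction maps can achieve when they must be computed by an algorithm on digital hardware for arbitrarily small relaxation parameters.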