In this paper, we study the properties of nonparametric least squares regression using deep neural networks. We derive non-asymptotic upper bounds for the prediction error of the empirical risk minimizer in feedforward deep neural regression. Our error bounds achieve the minimax optimal rate and significantly improve over the existing ones in the sense that they depend polynomially on the dimension of the predictor, rather than exponentially. We show that the neural regression estimator can circumvent the curse of dimensionality under the assumption that the predictor is supported on an approximate low-dimensional manifold or a set with low Minkowski dimension. These assumptions differ from structural conditions imposed on the target regression function and are weaker and more realistic than the assumption of exact low-dimensional manifold support. We investigate how the prediction error of the neural regression estimator depends on the structure of the neural network and propose a notion of network relative efficiency between two types of neural networks, which provides a quantitative measure for evaluating the relative merits of different network structures. To establish these results, we derive a novel approximation error bound for H\"older smooth functions with a positive smoothness index using ReLU-activated neural networks, which may be of independent interest. Our results are derived under weaker assumptions on the data distribution and the neural network structure than those in the existing literature.
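As a point of reference for the rates discussed above (a standard benchmark, not a result quoted from this paper; the notation here is illustrative), for regression functions in the H\"older class $\mathcal{H}^{\beta}([0,1]^d)$ with smoothness index $\beta > 0$, the classical minimax rate for the squared $L^2$ prediction error is
\[
\inf_{\hat{f}_n}\, \sup_{f_0 \in \mathcal{H}^{\beta}([0,1]^d)} \mathbb{E}\, \| \hat{f}_n - f_0 \|_{L^2}^2 \asymp n^{-2\beta/(2\beta+d)},
\]
which deteriorates rapidly as the ambient dimension $d$ grows. Under the low-dimensional support assumptions above, bounds of the form $n^{-2\beta/(2\beta+d^*)}$ (up to logarithmic factors), with $d^*$ the intrinsic dimension of the support, replace $d$ with $d^* \ll d$; this is the sense in which the curse of dimensionality is circumvented.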