Deep neural networks are a state-of-the-art method in modern science and technology. Much statistical literature has been devoted to understanding their performance in nonparametric estimation, but the resulting rates are suboptimal due to a redundant logarithmic factor. In this paper, we show that such logarithmic factors are not necessary. We derive upper bounds for the $L^2$ minimax risk in nonparametric estimation. Sufficient conditions on network architectures are provided under which the upper bounds become optimal (without the logarithmic sacrifice). Our proof relies on an explicitly constructed network estimator based on tensor product B-splines. We also derive asymptotic distributions for the constructed network and a related hypothesis testing procedure. The testing procedure is further proven to be minimax optimal under suitable network architectures.
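To fix ideas, a minimal sketch of the $L^2$ minimax risk referred to above is given below, assuming a standard Hölder smoothness class $\mathcal{H}^\beta([0,1]^d)$ with smoothness $\beta$ and input dimension $d$; this class and notation are illustrative assumptions, not fixed by the abstract itself.

\[
  % Minimax L^2 risk over an assumed Hölder class; the right-hand side is the
  % classical optimal nonparametric rate, which the paper's network estimators
  % are claimed to attain without an extra logarithmic factor.
  \inf_{\hat f_n}\; \sup_{f_0 \in \mathcal{H}^\beta([0,1]^d)}
  \mathbb{E}_{f_0}\bigl\| \hat f_n - f_0 \bigr\|_{L^2}^2
  \;\asymp\; n^{-\frac{2\beta}{2\beta + d}},
\]

whereas many existing deep network bounds take the form $n^{-2\beta/(2\beta+d)} (\log n)^{c}$ for some constant $c > 0$, which is the "logarithmic sacrifice" the paper aims to remove.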