When the unknown regression function of a single variable is known to have derivatives up to the $(\gamma+1)$th order, bounded in absolute value by a common constant everywhere or almost everywhere, the classical minimax optimal rate of the mean integrated squared error (MISE), $\left(\frac{1}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$, suggests that as $\gamma$ grows the rate approaches $\frac{1}{n}$. This paper shows that: (i) if $n\leq\left(\gamma+1\right)^{2\gamma+3}$, the minimax optimal MISE rate is roughly $\frac{\log n}{n}$ and the optimal degree of smoothness to exploit is roughly $\left\lceil \frac{\log n}{2}\right\rceil -2$; (ii) if $n>\left(\gamma+1\right)^{2\gamma+3}$, the minimax optimal MISE rate is $\left(\frac{1}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$ and the optimal degree of smoothness to exploit is $\gamma+1$. The building blocks of our minimax optimality results are a set of metric entropy bounds that we develop in this paper for smooth function classes. Some of our bounds are original, and some improve and/or generalize existing ones in the literature. Our metric entropy bounds allow us to explore the minimax optimal MISE rates associated with several commonly used smoothness classes as well as several non-standard smoothness classes, and may also be of independent interest beyond nonparametric regression.