We study the properties of differentiable neural networks activated by rectified power unit (RePU) functions. We show that the partial derivatives of RePU neural networks can be represented by networks with mixed RePU activations, and we derive upper bounds for the complexity of the function class of derivatives of RePU networks. We establish error bounds for simultaneously approximating $C^s$ smooth functions and their derivatives using RePU-activated deep neural networks. Furthermore, we derive improved approximation error bounds when the data have approximate low-dimensional support, demonstrating the ability of RePU networks to mitigate the curse of dimensionality. To illustrate the usefulness of our results, we consider a deep score matching estimator (DSME) and propose a penalized deep isotonic regression (PDIR) using RePU networks. We establish non-asymptotic excess risk bounds for DSME and PDIR under the assumption that the target functions belong to a class of $C^s$ smooth functions. We also show that PDIR enjoys a robustness property: with penalty parameters tending to zero, it remains consistent even when the monotonicity assumption is not satisfied. Moreover, if the data distribution is supported on an approximate low-dimensional manifold, we show that DSME and PDIR can mitigate the curse of dimensionality.
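For reference, the RePU activation of order $p$ used here is the standard rectified power unit (the notation $\sigma_p$ is ours, not fixed by the abstract):
\[
\sigma_p(x) \;=\; \max\{x, 0\}^p \;=\;
\begin{cases}
x^p, & x \ge 0,\\[2pt]
0, & x < 0,
\end{cases}
\qquad p \in \mathbb{N},
\]
so that $p = 1$ recovers the ReLU, while $p \ge 2$ yields a $(p-1)$-times continuously differentiable activation. It is this differentiability that makes it possible to approximate a smooth target function and its derivatives simultaneously with a single RePU network.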