We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results. In particular, we generalize the existing result on the existence of the deep discontinuous phase in ReLU networks to function classes of arbitrary positive smoothness, and identify the boundary between feasible and infeasible rates. Moreover, we show that all networks with piecewise polynomial activation functions share the same phase diagram. Next, we demonstrate that standard fully-connected architectures with a fixed width, independent of the smoothness, can adapt to the smoothness of the target function and achieve almost optimal rates. Finally, we consider deep networks with periodic activations ("deep Fourier expansion") and prove that they enjoy very fast, nearly exponential approximation rates, thanks to the emergent capability of the network to implement efficient lookup operations.
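For concreteness, the feasible/infeasible boundary can be sketched as follows, under the assumption that the target class has Hölder (or Sobolev) smoothness $\nu > 0$ on $[0,1]^d$ and $W$ denotes the number of network weights; the exponents shown here extrapolate the Lipschitz-case ($\nu = 1$) phase diagram of Yarotsky and Zhevnerchuk and are meant as an illustrative summary rather than a verbatim statement of the theorems:
\[
\|f - \tilde f_W\|_{L^\infty} = O(W^{-p})
\quad \text{is achievable for } p \le \frac{2\nu}{d} \text{ and infeasible for } p > \frac{2\nu}{d},
\]
with rates $p \le \nu/d$ attainable by continuous weight selection, and the intermediate range $\nu/d < p \le 2\nu/d$ corresponding to the deep discontinuous phase.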