Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials. Motivated by this, we propose partition of unity networks (POUnets) which incorporate these elements directly into the architecture. Classification architectures of the type used to learn probability measures are used to build a meshfree partition of space, while polynomial spaces with learnable coefficients are associated to each partition. The resulting hp-element-like approximation allows use of a fast least-squares optimizer, and the resulting architecture size need not scale exponentially with spatial dimension, breaking the curse of dimensionality. An abstract approximation result establishes desirable properties to guide network design. Numerical results for two choices of architecture demonstrate that POUnets yield hp-convergence for smooth functions and consistently outperform MLPs for piecewise polynomial functions with large numbers of discontinuities.
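To make the construction concrete, below is a minimal sketch (not the authors' code) of the POUnet forward model and the fast least-squares coefficient solve the abstract refers to. It assumes 1-D inputs, a softmax over linear logits as the learned partition of unity, and a monomial basis per partition; all function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pou(x, W, b):
    """Meshfree partition of unity: softmax over a linear classifier's logits.
    Returns Phi with Phi[:, i] >= 0 and each row summing to 1."""
    logits = x[:, None] * W[None, :] + b[None, :]      # (N, n_parts)
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def poly_basis(x, degree):
    """Monomial basis 1, x, ..., x^degree evaluated at x."""
    return np.stack([x**j for j in range(degree + 1)], axis=1)  # (N, degree+1)

def design_matrix(x, W, b, degree):
    """Columns Phi_i(x) * x^j for each partition i and monomial j, so the
    POUnet output is linear in the polynomial coefficients."""
    Phi = pou(x, W, b)                    # (N, n_parts)
    P = poly_basis(x, degree)             # (N, degree+1)
    return (Phi[:, :, None] * P[:, None, :]).reshape(len(x), -1)

# With the POU parameters held fixed, fitting the polynomial coefficients is
# an ordinary linear least-squares problem (the "fast least-squares
# optimizer" mentioned in the abstract).
n_parts, degree = 8, 2
W = rng.normal(size=n_parts) * 10.0
b = rng.normal(size=n_parts)

x = np.linspace(-1.0, 1.0, 400)
y = np.sign(x) * x**2                     # piecewise polynomial target

A = design_matrix(x, W, b, degree)
c, *_ = np.linalg.lstsq(A, y, rcond=None) # closed-form coefficient solve
print("residual:", np.linalg.norm(A @ c - y))
```

In the full method the POU network parameters are also trainable; one plausible training loop alternates gradient updates on the partition network with this closed-form least-squares solve for the coefficients, which is what lets the polynomial part of the model avoid slow gradient-based fitting.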