Partition of unity networks (POU-Nets) have been shown capable of realizing algebraic convergence rates for regression and solution of PDEs, but require empirical tuning of training parameters. We enrich POU-Nets with a Gaussian noise model to obtain a probabilistic generalization amenable to gradient-based minimization of a maximum likelihood loss. The resulting architecture provides spatial representations of both noiseless and noisy data as Gaussian mixtures with closed-form expressions for variance, which provide an estimator of local error. The training process yields remarkably sharp partitions of input space based upon correlation of function values. This classification of training points is amenable to a hierarchical refinement strategy that significantly improves the localization of the regression, allowing higher-order polynomial approximation to be used. The framework scales more favorably to large data sets than Gaussian process regression and allows for spatially varying uncertainty, leveraging the expressive power of deep neural networks while bypassing the expensive training associated with other probabilistic deep learning methods. Compared to standard deep neural networks, the framework demonstrates hp-convergence without the use of regularizers to tune the localization of partitions. We provide benchmarks quantifying performance in high and low dimensions, demonstrating that convergence rates depend only on the latent dimension of data within high-dimensional space. Finally, we introduce a new open-source data set of PDE-based simulations of a semiconductor device and perform unsupervised extraction of a physically interpretable reduced-order basis.
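To make the construction concrete, the following is a minimal sketch (not the authors' implementation) of a probabilistic POU-Net regressor in one dimension: a small network outputs softmax partition functions, each partition carries polynomial coefficients and a Gaussian noise variance, and training minimizes the resulting negative log-likelihood. The layer sizes, polynomial degree, optimizer settings, and all identifiers are illustrative assumptions.

```python
# Minimal illustrative sketch of a probabilistic POU-Net (assumptions noted above).
import torch
import torch.nn as nn

class ProbabilisticPOUNet(nn.Module):
    def __init__(self, n_parts=8, degree=1):
        super().__init__()
        # Partition-of-unity network: softmax output gives phi_k(x) >= 0, sum_k phi_k(x) = 1.
        self.pou = nn.Sequential(
            nn.Linear(1, 32), nn.Tanh(),
            nn.Linear(32, n_parts),
        )
        # Per-partition polynomial coefficients c_{k,j}, j = 0..degree.
        self.coeffs = nn.Parameter(0.1 * torch.randn(n_parts, degree + 1))
        # Per-partition log-variance of the Gaussian noise model.
        self.log_var = nn.Parameter(torch.zeros(n_parts))
        self.degree = degree

    def forward(self, x):
        phi = torch.softmax(self.pou(x), dim=-1)                      # (N, K)
        powers = torch.stack([x[:, 0] ** j
                              for j in range(self.degree + 1)], dim=-1)
        poly = powers @ self.coeffs.T                                  # (N, K) local fits
        mean = (phi * poly).sum(-1)                                    # mixture mean
        # Closed-form mixture variance: second moment minus squared mean;
        # this spatially varying variance serves as a local-error estimate.
        var = (phi * (poly ** 2 + self.log_var.exp())).sum(-1) - mean ** 2
        return mean, var.clamp_min(1e-8)

def nll(mean, var, y):
    # Gaussian negative log-likelihood used as the maximum likelihood loss.
    return 0.5 * (torch.log(var) + (y - mean) ** 2 / var).mean()

if __name__ == "__main__":
    x = torch.linspace(0.0, 1.0, 256).unsqueeze(-1)
    y = torch.sin(4 * torch.pi * x[:, 0]) + 0.05 * torch.randn(256)
    model = ProbabilisticPOUNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        mean, var = model(x)
        loss = nll(mean, var, y)
        loss.backward()
        opt.step()
    print("final NLL:", loss.item())
```

In this sketch the partitions localize the polynomial fits, so refining a partition (e.g., splitting it hierarchically) tightens the regression locally without retraining a monolithic network, which is the behavior the hierarchical refinement strategy above exploits.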