We constructively show, via rigorous mathematical arguments, that GNN architectures outperform those of NNs in approximating bandlimited functions on compact $d$-dimensional Euclidean grids. We show that the former need only $\mathcal{M}$ sampled function values to achieve a uniform approximation error of $O_{d}(\exp(-c\mathcal{M}^{1/d}))$, and that this error rate is optimal in the sense that NNs may achieve a worse one. On the theoretical side, our work demonstrates that ideas from sampling theory can be effectively used in analyzing the expressive power of neural networks.
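For intuition, the stated rate can be inverted to read off the sample complexity it implies; here $c$ is the existential constant from the bound and $\varepsilon$ a target uniform accuracy (this rearrangement is an illustration, not an additional claim of the paper):
$$O_{d}\big(\exp(-c\,\mathcal{M}^{1/d})\big) \le \varepsilon \quad\Longleftrightarrow\quad \mathcal{M} \gtrsim_{d} \left(\frac{1}{c}\log\frac{1}{\varepsilon}\right)^{d},$$
i.e., the number of sampled function values needed grows only polylogarithmically in $1/\varepsilon$ for fixed dimension $d$.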