The computation of the Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. Approximating the Wasserstein gradient from finite samples requires solving a variational problem. We study this variational problem over the family of two-layer networks with squared-ReLU activations, for which we derive a semi-definite programming (SDP) relaxation. This SDP can be viewed as an approximation of the Wasserstein gradient over a broader family of functions that includes two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction within this class of functions. Numerical experiments, including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling, demonstrate the effectiveness of the proposed method.
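For concreteness, a minimal sketch of the variational problem referred to above, under the assumption (not fixed in the abstract) that the energy is the KL divergence from the current density \(\rho\) to a target posterior \(\pi\): the Wasserstein gradient direction is \(\nabla \log(\rho/\pi)\), and its projection onto a function class \(\mathcal{F}\) can be written, after integration by parts, as
\[
f^\ast \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim \rho}\!\left[ \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;+\; \Delta f(x) \;+\; \nabla f(x)^\top \nabla \log \pi(x) \right],
\]
whose unconstrained minimizer satisfies \(\nabla f^\ast = \nabla \log(\rho/\pi)\). With finite samples, the expectation is replaced by an empirical average over particles, and restricting \(\mathcal{F}\) to two-layer squared-ReLU networks yields the problem whose SDP relaxation is studied here.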