In this work, we address conditional generation using deep invertible neural networks, a class of problems in which one aims to infer the most probable inputs $X$ given outcomes $Y$. We call our method the \textit{invertible graph neural network} (iGNN), reflecting its primary focus on generating node features on graph data. A notable feature of our proposed method is that, during network training, we revise the loss objective typically used in normalizing flows by adding a Wasserstein-2 regularization term to facilitate training. Algorithmically, we adopt an end-to-end training approach, since our objective is to address prediction (in the forward process) and generation (in the backward process) at once through a single model. Theoretically, we characterize conditions for the identifiability of the true mapping, the existence and invertibility of such a mapping, and the expressiveness of iGNN in learning the mapping. Experimentally, we verify the performance of iGNN on both simulated and real datasets, demonstrating through extensive numerical experiments that iGNN clearly improves over competing conditional generation benchmarks on high-dimensional and/or non-convex data.
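To make the revised objective concrete, a minimal sketch of a Wasserstein-2-regularized normalizing-flow loss is as follows (the notation here is illustrative and not taken from the paper): for an invertible map $F$ with base density $p_Z$ and regularization weight $\lambda > 0$,
\[
\mathcal{L}(F) \;=\; \mathbb{E}_X\!\left[-\log p_Z\big(F(X)\big) \;-\; \log\left|\det \nabla F(X)\right|\right] \;+\; \lambda\, \mathbb{E}_X\!\left[\,\lVert F(X) - X \rVert_2^2\,\right],
\]
where the first expectation is the standard change-of-variables negative log-likelihood and the second term penalizes the squared transport cost of the map, analogous to a Wasserstein-2 distance between the data and its image under $F$.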