We introduce a deep learning model which can generically approximate regular conditional distributions (RCDs). The proposed model operates in three phases: first, it linearizes inputs from a given metric space $\mathcal{X}$ to $\mathbb{R}^d$ via a feature map; next, these linearized features are processed by a deep feedforward neural network; finally, the network's outputs are translated to the $1$-Wasserstein space $\mathcal{P}_1(\mathbb{R}^D)$ via a probabilistic extension of the attention mechanism introduced by Bahdanau et al. (2014). We find that the models built using our framework can approximate any continuous function from $\mathbb{R}^d$ to $\mathcal{P}_1(\mathbb{R}^D)$ uniformly on compact sets, and quantitatively. We identify two ways of avoiding the curse of dimensionality when approximating $\mathcal{P}_1(\mathbb{R}^D)$-valued functions. The first strategy describes functions in $C(\mathbb{R}^d,\mathcal{P}_1(\mathbb{R}^D))$ which can be efficiently approximated on any compact subset of $\mathbb{R}^d$. The second approach describes compact subsets of $\mathbb{R}^d$ on which most functions in $C(\mathbb{R}^d,\mathcal{P}_1(\mathbb{R}^D))$ can be efficiently approximated. The results are verified experimentally.
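For concreteness, a minimal sketch of one such model, with illustrative notation not fixed by the abstract (the feature map $\phi:\mathcal{X}\to\mathbb{R}^d$, the feedforward network $f:\mathbb{R}^d\to\mathbb{R}^N$, and the atoms $y_1,\dots,y_N\in\mathbb{R}^D$ are our labels for the three phases): reading the probabilistic attention layer as a softmax over a finite family of point masses, the model maps an input $x\in\mathcal{X}$ to the measure
\[
\hat{P}(x) \;=\; \sum_{n=1}^{N} \big[\operatorname{Softmax}_N\!\big(f(\phi(x))\big)\big]_n \, \delta_{y_n} \;\in\; \mathcal{P}_1(\mathbb{R}^D),
\]
where $\delta_{y_n}$ denotes the Dirac measure at $y_n$ and $\operatorname{Softmax}_N$ normalizes the network's $N$ outputs into mixture weights. This is a sketch under the stated assumptions, not a full specification of the architecture.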