We introduce conditional push-forward neural networks (CPFN), a generative framework for conditional distribution estimation. Instead of directly modeling the conditional density $f_{Y|X}$, CPFN learns a stochastic map $\varphi=\varphi(x,u)$ such that $\varphi(x,U)$ and $Y\mid X=x$ follow approximately the same law, where $U$ is a latent random vector with a pre-specified distribution. This enables efficient conditional sampling and straightforward estimation of conditional statistics through Monte Carlo methods. The model is trained via an objective function derived from a Kullback-Leibler formulation, without requiring invertibility or adversarial training. We establish a near-asymptotic consistency result and demonstrate experimentally that CPFN can achieve performance competitive with, or even superior to, state-of-the-art methods, including kernel estimators, tree-based algorithms, and popular deep learning techniques, all while remaining lightweight and easy to train.
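To make the push-forward construction concrete, the following is a minimal PyTorch sketch of a stochastic map $\varphi(x,u)$ together with Monte Carlo estimation of conditional statistics from its samples. The architecture, class and parameter names (`CPFN`, `latent_dim`, `hidden`), and the Gaussian choice for $U$ are illustrative assumptions, not the authors' implementation; the KL-derived training objective is omitted.

```python
# Illustrative sketch only: a push-forward network phi(x, u) whose samples
# phi(x, U) approximate the conditional law Y | X = x, plus Monte Carlo
# estimation of conditional statistics. Architecture and names are assumed.
import torch
import torch.nn as nn

class CPFN(nn.Module):
    """Stochastic map phi(x, u); samples of phi(x, U) approximate Y | X = x."""
    def __init__(self, x_dim, y_dim, latent_dim=4, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(x_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, y_dim),
        )

    def forward(self, x, u):
        # Concatenate the conditioning input x with the latent draw u.
        return self.net(torch.cat([x, u], dim=-1))

    @torch.no_grad()
    def sample(self, x, n_samples=1000):
        """Draw n_samples from the model's approximation of Y | X = x."""
        x_rep = x.expand(n_samples, -1)
        u = torch.randn(n_samples, self.latent_dim)  # pre-specified latent law U
        return self.forward(x_rep, u)

# Conditional statistics via Monte Carlo over push-forward samples
# (assumes a trained model; training is not shown here).
model = CPFN(x_dim=2, y_dim=1)
x0 = torch.tensor([[0.5, -1.0]])
ys = model.sample(x0, n_samples=4096)
cond_mean = ys.mean(dim=0)          # estimate of E[Y | X = x0]
cond_q90 = ys.quantile(0.9, dim=0)  # estimate of the 90% conditional quantile
```

Because the model only needs to be sampled, not inverted, any conditional statistic (means, quantiles, exceedance probabilities) reduces to averaging over draws of $U$, as in the last two lines above.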