Quantile regression and conditional density estimation can reveal structure that is missed by mean regression, such as multimodality and skewness. In this paper, we introduce a deep learning generative model for joint quantile estimation called Penalized Generative Quantile Regression (PGQR). Our approach simultaneously generates samples from many random quantile levels, allowing us to infer the conditional distribution of a response variable given a set of covariates. Our method employs a novel variability penalty to avoid the problem of vanishing variability, or memorization, in deep generative models. Further, we introduce a new family of partial monotonic neural networks (PMNN) to circumvent the problem of crossing quantile curves. A major benefit of PGQR is that it can be fit using a single optimization, thus bypassing the need to repeatedly train the model at multiple quantile levels or use computationally expensive cross-validation to tune the penalty parameter. We illustrate the efficacy of PGQR through extensive simulation studies and analysis of real datasets. Code to implement our method is available at https://github.com/shijiew97/PGQR.
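As background for the quantile-estimation objective the abstract describes, here is a minimal sketch (not the PGQR method itself) of the standard pinball (check) loss, whose minimizer over a constant predictor is the tau-th quantile of the data; all names here are illustrative:

```python
import numpy as np

def pinball_loss(residual, tau):
    # Check (pinball) loss: tau * r when r >= 0, (tau - 1) * r when r < 0.
    return np.maximum(tau * residual, (tau - 1) * residual)

# Minimizing expected pinball loss over constants recovers the tau-th quantile.
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)          # toy data; standard normal
tau = 0.9
grid = np.linspace(-3.0, 3.0, 601)   # candidate constant predictors
losses = [pinball_loss(y - q, tau).mean() for q in grid]
q_hat = grid[int(np.argmin(losses))]
# q_hat should lie close to the empirical 0.9 quantile of y.
```

Methods such as PGQR generalize this idea by learning conditional quantiles over many levels tau jointly, rather than fitting one loss per fixed tau.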