Density regression characterizes the conditional density of the response variable given the covariates, and provides much richer information than the commonly used conditional mean or quantile regression. However, it is often computationally prohibitive in applications with massive data sets, especially when there are multiple covariates. In this paper, we develop a new data reduction approach for the density regression problem using conditional support points. After obtaining the representative data, we exploit the penalized likelihood method as the downstream estimation strategy. Based on the connections among the continuous ranked probability score, the energy distance, the $L_2$ discrepancy, and the symmetrized Kullback-Leibler distance, we investigate the distributional convergence of the representative points and establish the rate of convergence of the density regression estimator. The usefulness of the methodology is illustrated by modeling the conditional distribution of power output given multivariate environmental factors using a large-scale wind turbine data set. Supplementary materials for this article are available online.
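The abstract does not spell out how the quality of the representative data is assessed; one of the quantities it names, the energy distance, admits a simple empirical estimator. The sketch below is only an illustration of that estimator applied to an unconditional subsample, not the conditional support points construction of the paper; the array names and subsample sizes are hypothetical.

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between samples x (n x d) and y (m x d):
    2 E||X - Y|| - E||X - X'|| - E||Y - Y'||, estimated over all pairs."""
    def mean_pairwise(a, b):
        # mean Euclidean distance over all pairs (a_i, b_j)
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1)).mean()
    return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

# Illustration: compare a random subsample with the full sample (hypothetical data).
rng = np.random.default_rng(0)
full_data = rng.normal(size=(2000, 3))                        # full data set
subset = full_data[rng.choice(2000, 100, replace=False)]      # candidate representative points
print(energy_distance(subset, full_data))                     # smaller value = closer in distribution
```

In this reading, a smaller energy distance indicates that the reduced point set is closer in distribution to the full sample, which is the sense in which "representative" is used above; the paper's method operates conditionally on the covariates, which this sketch does not attempt to capture.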