Penalized quantile regression (QR) is widely used for studying the relationship between a response variable and a set of predictors under data heterogeneity in high-dimensional settings. Compared with penalized least squares, scalable algorithms for fitting penalized QR have been lacking because the piecewise linear loss function is non-differentiable. To overcome this lack of smoothness, a recently proposed convolution-type smoothing method offers an interesting tradeoff between statistical accuracy and computational efficiency for both standard and penalized quantile regression. In this paper, we propose a unified algorithm for fitting penalized convolution smoothed quantile regression with various commonly used convex penalties, accompanied by an R package, conquer, available from the Comprehensive R Archive Network (CRAN). We perform extensive numerical studies to demonstrate the superior performance of the proposed algorithm over existing methods in both statistical and computational terms. We further illustrate the proposed algorithm by fitting a fused lasso additive QR model to the world happiness data.
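As a point of reference, here is a minimal usage sketch of fitting a lasso-penalized smoothed QR with the CRAN conquer package on simulated data. The conquer.reg() call and its argument and output names (lambda, tau, penalty, coeff) follow our reading of the package documentation and should be treated as assumptions rather than as part of the abstract itself.

```r
## Minimal sketch: lasso-penalized convolution smoothed quantile regression
## at the 0.8 quantile on simulated heavy-tailed data.
library(conquer)

set.seed(1)
n <- 200; p <- 500
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 5), rep(0, p - 5))            # sparse true signal
Y <- as.numeric(X %*% beta) + rt(n, df = 3)    # heavy-tailed noise

## Assumed interface: conquer.reg(X, Y, lambda, tau, penalty)
fit <- conquer.reg(X, Y, lambda = 0.1, tau = 0.8, penalty = "lasso")
head(fit$coeff)                                 # intercept followed by slope estimates
```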