We investigate different methods for regularizing quantile regression when predicting either a subset of quantiles or the full inverse CDF. We show that minimizing an expected pinball loss over a continuous distribution of quantiles is a good regularizer even when only predicting a specific quantile. For predicting multiple quantiles, we propose achieving the classic goal of non-crossing quantiles by using deep lattice networks that treat the quantile as a monotonic input feature, and we discuss why monotonicity on other features is an apt regularizer for quantile regression. We show that lattice models enable regularizing the predicted distribution to a location-scale family. Lastly, we propose applying rate constraints to improve the calibration of the quantile predictions on specific subsets of interest and improve fairness metrics. We demonstrate our contributions on simulations, benchmark datasets, and real quantile regression problems.
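The expected pinball loss mentioned above can be illustrated with a minimal sketch. The pinball loss itself is standard; the Monte Carlo averaging over uniformly sampled quantile levels is an illustrative approximation of "an expected pinball loss over a continuous distribution of quantiles," and all names here are hypothetical, not the paper's API.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Standard pinball (quantile) loss at quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    return np.maximum(tau * diff, (tau - 1.0) * diff)

# A single-quantile model trains on pinball_loss at one fixed tau.
# The regularizer described above instead minimizes the loss in
# expectation over tau; a simple Monte Carlo estimate samples a
# uniform tau per example (illustrative stand-in data below).
rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
y_pred = np.zeros(1000)                      # stand-in model predictions
taus = rng.uniform(0.0, 1.0, size=1000)      # tau ~ Uniform(0, 1)
expected_loss = pinball_loss(y_true, y_pred, taus).mean()
```

Averaging over sampled quantile levels acts as a regularizer because the model is penalized for errors across the whole conditional distribution, not just at the single target quantile.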