Conditional quantile estimation is a key statistical learning challenge motivated by the need to quantify uncertainty in predictions or to model a diverse population without being overly reductive. As such, many models have been developed for this problem. Adopting a meta viewpoint, we propose a general framework (inspired by neural network optimization) for aggregating any number of conditional quantile models in order to boost predictive accuracy. We consider weighted ensembling strategies of increasing flexibility where the weights may vary over individual models, quantile levels, and feature values. An appeal of our approach is its portability: we ensure that estimated quantiles at adjacent levels do not cross by applying simple transformations through which gradients can be backpropagated, and this allows us to leverage the modern deep learning toolkit for building quantile ensembles. Our experiments confirm that ensembling can lead to substantial gains in accuracy, even when the constituent models are themselves powerful and flexible.
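To make the non-crossing idea concrete, here is a minimal sketch of one common differentiable transform consistent with the abstract's description: the lowest quantile estimate is left free, and each higher level adds a strictly positive increment (via softplus), so estimates at adjacent levels cannot cross while gradients still flow through the transform. The function name `noncrossing_quantiles` and the use of NumPy are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softplus(x):
    # Smooth positive-valued map log(1 + e^x); differentiable, so
    # gradients can be backpropagated through it in a deep learning framework.
    return np.log1p(np.exp(x))

def noncrossing_quantiles(raw):
    """Map unconstrained scores to monotone quantile estimates.

    raw: array of shape (K,), one unconstrained score per quantile level.
    The first entry is the lowest-level estimate; every subsequent level
    adds a strictly positive increment, so estimates never cross.
    (Hypothetical illustration; the paper's exact transform may differ.)
    """
    raw = np.asarray(raw, dtype=float)
    increments = softplus(raw[1:])  # positive gaps between adjacent levels
    return np.concatenate([raw[:1], raw[:1] + np.cumsum(increments)])

# Example: three raw scores for quantile levels (0.1, 0.5, 0.9)
q = noncrossing_quantiles([0.5, -1.0, 2.0])
assert np.all(np.diff(q) > 0)  # monotone across quantile levels
```

Because the transform is smooth, it can sit on top of any ensemble of base quantile models and be trained end to end with standard gradient-based optimizers.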