Given an imperfect predictor, we exploit additional features at test time to improve predictions, without retraining and without knowledge of the prediction function. This scenario arises if training labels or data are proprietary, restricted, or no longer available, or if training itself is prohibitively expensive. We assume that the additional features are useful if they exhibit strong statistical dependence on the underlying perfect predictor. Then, we empirically estimate and strengthen the statistical dependence between the initial noisy predictor and the additional features via manifold denoising. As an example, we show that this approach leads to improvement in real-world visual attribute ranking. Project webpage: http://www.jamestompkin.com/tupi
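To make the idea concrete, the following is a minimal sketch of manifold denoising of noisy predictions over a neighborhood graph built from the additional test-time features. The function name, graph construction, and iterative smoothing scheme here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def laplacian_denoise(scores, feats, k=5, alpha=0.5, iters=50):
    """Smooth noisy prediction scores over a kNN graph built from
    additional test-time features. Illustrative sketch only: the
    paper's manifold-denoising formulation may differ."""
    n = len(scores)
    # Pairwise squared distances in the additional-feature space.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    # Symmetric k-nearest-neighbor adjacency matrix.
    W = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    W[rows, nn.ravel()] = 1.0
    W = np.maximum(W, W.T)
    # Row-normalize to obtain a diffusion operator.
    P = W / W.sum(axis=1, keepdims=True)
    # Repeatedly blend each score with its neighbors' scores,
    # anchored to the original noisy predictions.
    s = scores.astype(float).copy()
    for _ in range(iters):
        s = alpha * (P @ s) + (1 - alpha) * scores
    return s
```

Intuitively, if the additional features lie on a manifold along which the perfect predictor varies smoothly, averaging each prediction with those of its feature-space neighbors suppresses noise while the anchoring term keeps the result close to the original predictor.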