To address functional-output regression, we introduce projection learning (PL), a novel dictionary-based approach that learns to predict a function expanded on a dictionary while minimizing an empirical risk based on a functional loss. PL makes it possible to use non-orthogonal dictionaries and can thus be combined with dictionary learning; it is therefore much more flexible than expansion-based approaches relying on vectorial losses. This general method is instantiated with reproducing kernel Hilbert spaces of vector-valued functions as kernel-based projection learning (KPL). For the functional square loss, two closed-form estimators are proposed, one for fully observed output functions and the other for partially observed ones. Both are backed theoretically by an excess risk analysis. Then, in the more general setting of integral losses based on differentiable ground losses, KPL is implemented using first-order optimization for both fully and partially observed output functions. Finally, several robustness aspects of the proposed algorithms are highlighted on a toy dataset, and a study on two real datasets shows that they are competitive with other nonlinear approaches. Notably, using the square loss and a learnt dictionary, KPL enjoys a particularly attractive trade-off between computational cost and performance.
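To make the abstract's pipeline concrete, here is a minimal sketch of the square-loss case: observed output functions are projected onto a (fixed, non-orthogonal) dictionary, and a closed-form kernel ridge estimator maps inputs to dictionary coefficients. The Fourier dictionary, Gaussian kernel, and two-step projection below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def fourier_dictionary(t, n_atoms):
    """Evaluate a fixed Fourier dictionary on grid t -> (len(t), n_atoms)."""
    atoms = [np.ones_like(t)]
    for k in range(1, (n_atoms + 1) // 2 + 1):
        atoms.append(np.sin(2 * np.pi * k * t))
        atoms.append(np.cos(2 * np.pi * k * t))
    return np.stack(atoms[:n_atoms], axis=1)

def gaussian_gram(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpl_fit(X, Y, Phi, lam=1e-3, gamma=1.0):
    """Closed-form fit: kernel ridge from inputs to dictionary coefficients."""
    # Project each observed output function on the dictionary via the
    # normal equations -- this handles non-orthogonal dictionaries.
    C = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y.T).T      # (n, p) coefficients
    K = gaussian_gram(X, X, gamma)
    return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), C)

def kpl_predict(X_new, X, alpha, Phi, gamma=1.0):
    C_new = gaussian_gram(X_new, X, gamma) @ alpha       # predicted coefficients
    return C_new @ Phi.T                                  # functions on the grid

# Toy fully-observed data: y_i(t) = sin(2*pi*t + x_i) sampled on a common grid.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
X = rng.uniform(0.0, 1.0, size=(50, 1))
Y = np.sin(2 * np.pi * t[None, :] + X)
Phi = fourier_dictionary(t, 7)
alpha = kpl_fit(X, Y, Phi)
Y_hat = kpl_predict(X, X, alpha, Phi)
```

Replacing the fixed Fourier atoms with learnt atoms (dictionary learning) or swapping the square loss for a differentiable ground loss solved by first-order optimization would follow the extensions the abstract describes.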