This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) that is (i) theoretically optimal for Gaussian mixtures and (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no loss of performance.
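The SPCA building block mentioned above can be illustrated with a minimal sketch. This is an assumed cross-covariance variant of supervised PCA (projecting features onto directions most correlated with the labels), not necessarily the paper's exact algorithm; the function name `supervised_pca` and the toy data are illustrative only.

```python
import numpy as np

def supervised_pca(X, y, k):
    """Project features onto the top-k directions most aligned with the labels.

    Hypothetical sketch: uses the leading left singular vectors of the
    centered feature-label cross-covariance X^T Y as the projection basis.
    """
    Xc = X - X.mean(axis=0)                 # center features
    Y = np.atleast_2d(y).T.astype(float)
    Yc = Y - Y.mean(axis=0)                 # center labels
    C = Xc.T @ Yc                           # cross-covariance, shape (d, c)
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    W = U[:, :k]                            # top-k supervised directions
    return Xc @ W                           # reduced representation, (n, k)

# Toy usage: labels depend on two of five feature dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=100)
Z = supervised_pca(X, y, k=1)
print(Z.shape)  # (100, 1)
```

Because each node only needs a small cross-covariance matrix and its SVD, a reduction of this kind is cheap to compute and communicate, which is consistent with the scalability claim in the abstract.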