We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms---including those that alleviate this problem through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.