We consider online model selection with decentralized data over $M$ clients, and study a fundamental problem: the necessity of collaboration. Previous work gave a negative answer from the perspective of worst-case regret minimization, while we give a different answer from the perspective of the regret-computational cost trade-off. We propose two federated algorithms, one with and one without a communication constraint, and prove regret bounds showing that (i) collaboration is unnecessary if we do not limit the computational cost on each client; (ii) collaboration is necessary if we limit the computational cost on each client to $o(K)$, where $K$ is the number of candidate hypothesis spaces. As a by-product, we improve the regret bounds of algorithms for distributed online multi-kernel learning at a smaller computational and communication cost. Our algorithms rely on three new techniques, namely an improved Bernstein's inequality for martingales, a federated algorithmic framework named FOMD-No-LU, and a decoupling of model selection and prediction, which may be of independent interest.
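For intuition on the $o(K)$ computational constraint, the sketch below shows an Exp3-style client that samples a single hypothesis space per round, so its per-round evaluation cost is $O(1)$ rather than $O(K)$. This is a minimal, hypothetical illustration of sampling-based online model selection under a per-client computation budget, not the paper's FOMD-No-LU algorithm; the function names and update rule are assumptions for exposition.

```python
# Hypothetical sketch: a client that selects among K hypothesis spaces
# by sampling ONE per round (Exp3-style), so per-round evaluation cost
# is O(1) instead of O(K). Not the paper's FOMD-No-LU algorithm.
import numpy as np

def select_and_predict(K, loss_fn, T, eta=0.1, seed=0):
    """K: number of candidate hypothesis spaces;
    loss_fn(k, t) -> loss in [0, 1] of model k at round t (assumed given)."""
    rng = np.random.default_rng(seed)
    weights = np.ones(K)                  # multiplicative weights over the K spaces
    cumulative_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()
        k = rng.choice(K, p=probs)        # sample one hypothesis space this round
        loss = loss_fn(k, t)              # evaluate only the sampled model
        cumulative_loss += loss
        estimate = loss / probs[k]        # importance-weighted unbiased loss estimate
        weights[k] *= np.exp(-eta * estimate)
    return cumulative_loss
```

Under such a budget a single client sees only bandit feedback over the $K$ spaces, which is the regime where, per the abstract's claim (ii), collaboration across the $M$ clients becomes necessary.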