We study the problem of multi-task non-smooth optimization, which arises ubiquitously in statistical learning, decision-making, and risk management. We develop a data fusion approach that adaptively leverages commonalities among a large number of objectives to improve sample efficiency while addressing their unknown heterogeneity. We provide sharp statistical guarantees for our approach. Numerical experiments on both synthetic and real data demonstrate significant advantages of our approach over benchmarks.