We study the multi-task learning problem that aims to simultaneously analyze multiple datasets collected from different sources and learn one model for each of them. We propose a family of adaptive methods that automatically utilize possible similarities among those tasks while carefully handling their differences. We derive sharp statistical guarantees for the methods and prove their robustness against outlier tasks. Numerical experiments on synthetic and real datasets demonstrate the efficacy of our new methods.