Multi-task learning leverages the structural similarities among multiple tasks to learn even when very few samples are available per task. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee only weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
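As a concrete illustration (not taken from the paper), a trace norm (nuclear norm) regularized estimator of this kind can be computed by proximal gradient descent with singular value soft-thresholding. The sketch below uses hypothetical problem sizes and an illustrative choice of regularization level, and fits a d×T matrix of task parameters on synthetic data drawn from a shared low-rank representation; it is a minimal sketch under these assumptions, not the estimator or experimental setup used in the paper.

```python
import numpy as np

def svd_soft_threshold(W, tau):
    """Proximal operator of the trace (nuclear) norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def trace_norm_mtl(Xs, ys, lam, step=None, n_iter=500):
    """Trace norm regularized multi-task linear regression (illustrative sketch).

    Xs: list of (n_t, d) design matrices; ys: list of (n_t,) targets.
    Minimizes 0.5 * sum_t ||X_t w_t - y_t||^2 + lam * ||W||_*
    over W = [w_1, ..., w_T] in R^{d x T} via proximal gradient descent.
    """
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    if step is None:
        # Conservative step size: inverse of the largest per-task Lipschitz constant.
        L = max(np.linalg.norm(X, 2) ** 2 for X in Xs)
        step = 1.0 / L
    for _ in range(n_iter):
        # Gradient of the separable squared loss, one column per task.
        grad = np.column_stack(
            [X.T @ (X @ W[:, t] - y) for t, (X, y) in enumerate(zip(Xs, ys))]
        )
        W = svd_soft_threshold(W - step * grad, step * lam)
    return W

# Synthetic example: T tasks sharing a rank-k representation, few samples per task.
rng = np.random.default_rng(0)
d, T, k, n_per_task = 50, 100, 3, 10            # hypothetical sizes
B = rng.normal(size=(d, k)) / np.sqrt(d)        # shared low-dimensional representation
A = rng.normal(size=(k, T))                     # task-specific coefficients
Xs = [rng.normal(size=(n_per_task, d)) for _ in range(T)]
ys = [X @ (B @ A[:, t]) + 0.1 * rng.normal(size=n_per_task) for t, X in enumerate(Xs)]
W_hat = trace_norm_mtl(Xs, ys, lam=np.sqrt(d) + np.sqrt(T))   # lam chosen for illustration only
print("relative estimation error:",
      np.linalg.norm(W_hat - B @ A, "fro") / np.linalg.norm(B @ A, "fro"))
```

The trace norm penalty encourages the estimated parameter matrix to be approximately low rank, which is how the shared low-dimensional representation is recovered even though each individual task contributes only a handful of samples.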