Multi-task learning is a powerful method for solving several tasks jointly by learning a robust shared representation. Optimizing a multi-task learning model is more complex than the single-task case because of conflicts between tasks. Theoretical results guarantee convergence to the optimal point when the step size is chosen via line search. In practice, however, line search for the step size is rarely the best choice because of its large computational overhead. We propose a novel idea for line search algorithms in multi-task learning: search for the step size in the latent representation space instead of the parameter space. We examine this idea with backtracking line search. We compare the resulting fast backtracking algorithm with classical backtracking and with gradient methods using a constant learning rate on MNIST, CIFAR-10, and Cityscapes tasks. A systematic empirical study shows that the proposed method finds more accurate solutions faster than the traditional backtracking approach, while remaining competitive with the constant learning rate method in both computational time and performance.
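For reference, below is a minimal sketch of the classical Armijo backtracking line search used as the baseline: the step is repeatedly shrunk until a sufficient-decrease condition holds. The proposed fast variant would evaluate the analogous condition on latent representations rather than in parameter space; since the abstract does not spell out that criterion, the sketch only notes this in a comment. Function names, constants, and the toy objective are illustrative assumptions.

```python
import numpy as np

def backtracking_line_search(f, grad, x, direction,
                             alpha0=1.0, rho=0.5, c=1e-4, max_iter=50):
    """Classical Armijo backtracking: shrink the step size until the
    sufficient-decrease condition holds.

    The proposed fast variant would replace the objective/gradient
    evaluations in parameter space with quantities computed in the
    latent representation space (hypothetical; not specified here).
    """
    alpha = alpha0
    fx = f(x)
    slope = grad(x) @ direction  # directional derivative; negative for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * direction) <= fx + c * alpha * slope:
            return alpha  # sufficient decrease achieved
        alpha *= rho      # otherwise shrink the step
    return alpha

# Toy usage: quadratic objective, steepest-descent direction.
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x
x = np.array([3.0, -2.0])
step = backtracking_line_search(f, grad, x, direction=-grad(x))
print(step)
```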