The multi-task learning (MTL) paradigm can be traced back to an early paper by Caruana (1997), which argued that data from multiple tasks can be used to obtain better performance than learning each task independently. Solving MTL problems with conflicting objectives requires modelling the trade-off among them, which generally lies beyond what a simple linear combination can achieve. A theoretically principled and computationally effective strategy is to find solutions that are not dominated by any other, as addressed in Pareto analysis. Multi-objective optimization problems arising in the multi-task learning context have specific features and require ad hoc methods. The analysis of these features and the proposal of a new computational approach are the focus of this work. Multi-objective evolutionary algorithms (MOEAs) can easily incorporate the concept of dominance and therefore Pareto analysis. Their major drawback is a low sample efficiency with respect to function evaluations; the key reason is that most evolutionary approaches do not use models to approximate the objective functions. Bayesian Optimization takes a radically different approach, based on a surrogate model such as a Gaussian Process. In this thesis the solutions in the Input Space are represented as probability distributions encapsulating the knowledge contained in the function evaluations. In this space of probability distributions, endowed with the metric given by the Wasserstein distance, a new algorithm, MOEA/WST, can be designed in which the model is built not directly on the objective function but in an intermediate Information Space, where the objects from the Input Space are mapped into histograms. Computational results show that the sample efficiency and the quality of the Pareto set provided by MOEA/WST are significantly better than those of standard MOEAs.
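To make the role of the Wasserstein metric concrete, the following is a minimal sketch in Python of how two candidate solutions, once mapped into histograms in the Information Space, can be compared. The mapping `input_to_histogram` is a hypothetical placeholder, not the mapping actually used by MOEA/WST; only the distance computation, via `scipy.stats.wasserstein_distance`, reflects the metric described above.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def input_to_histogram(evaluations, bins=20, value_range=(0.0, 1.0)):
    """Hypothetical mapping into the Information Space: summarize the function
    evaluations associated with a candidate solution as a normalized histogram.
    The exact mapping used by MOEA/WST is not reproduced here; this is an
    illustrative placeholder."""
    counts, edges = np.histogram(evaluations, bins=bins, range=value_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = counts.sum()
    weights = counts / total if total > 0 else np.full(bins, 1.0 / bins)
    return centers, weights


def wst_distance(h1, h2):
    """1-d Wasserstein (earth mover's) distance between two histograms,
    each given as (support points, weights)."""
    (c1, w1), (c2, w2) = h1, h2
    return wasserstein_distance(c1, c2, u_weights=w1, v_weights=w2)


# Toy usage: two candidates whose evaluations are summarized as histograms
# and then compared in the metric space induced by the Wasserstein distance.
rng = np.random.default_rng(0)
h_a = input_to_histogram(rng.beta(2, 5, size=200))
h_b = input_to_histogram(rng.beta(5, 2, size=200))
print(f"WST distance between candidates: {wst_distance(h_a, h_b):.4f}")
```

In this sketch the distance could then feed, for instance, a distance-based surrogate or a diversity-preserving selection step; how MOEA/WST actually exploits it inside its operators is detailed in the body of the thesis, not here.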