In this paper, we introduce a discrete variant of the meta-learning framework. Meta-learning aims to exploit prior experience and data to improve performance on future tasks. Numerous formulations for meta-learning now exist in the continuous domain. Notably, the Model-Agnostic Meta-Learning (MAML) formulation views each task as a continuous optimization problem and, based on prior data, learns a suitable initialization that can be adapted to new, unseen tasks after a few simple gradient updates. Motivated by this formulation, we propose a novel meta-learning framework in the discrete domain, where each task is equivalent to maximizing a set function under a cardinality constraint. Our approach uses prior data, i.e., previously visited tasks, to train a proper initial solution set that can be quickly adapted to a new task at relatively low computational cost. This approach leads to (i) a personalized solution for each individual task, and (ii) significantly reduced computational cost at test time compared to the case where the solution is fully optimized once the new task is revealed. The training procedure is performed by solving a challenging discrete optimization problem, for which we present deterministic and randomized algorithms. In the case where the tasks are monotone and submodular, we show strong theoretical guarantees for our proposed methods even though the training objective may not be submodular. We also demonstrate the effectiveness of our framework on two real-world problem instances, where we observe that our methods significantly reduce the computational complexity of solving new tasks while incurring only a small performance loss compared to when the tasks are fully optimized.
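The test-time adaptation described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual algorithm: it assumes a monotone submodular task (weighted coverage is used here as a stand-in), and compares running greedy from scratch against greedily completing a hypothetical meta-learned initial set, which requires fewer marginal-gain evaluations at test time.

```python
def coverage(covered_by, S):
    """Monotone submodular objective: number of universe items covered by S."""
    out = set()
    for s in S:
        out |= covered_by[s]
    return len(out)

def greedy_complete(covered_by, ground, k, init=frozenset()):
    """Greedily grow `init` to size k, adding the element of largest marginal gain.

    With init = {} this is the standard greedy baseline (k steps);
    with a meta-learned init of size k0, only k - k0 steps remain.
    """
    S = set(init)
    while len(S) < k:
        best = max((e for e in ground if e not in S),
                   key=lambda e: coverage(covered_by, S | {e}))
        S.add(best)
    return S

# Toy task: each candidate element covers a subset of a small universe.
covered_by = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5, 6},
    "d": {1, 5},
}
ground = set(covered_by)

# Full optimization once the task is revealed (k greedy steps)...
full = greedy_complete(covered_by, ground, k=3)

# ...versus adapting a (hypothetical) meta-learned initial set {"a"},
# which needs only k - 1 greedy steps on this task.
warm = greedy_complete(covered_by, ground, k=3, init={"a"})

print(coverage(covered_by, full), coverage(covered_by, warm))
```

On this toy instance both routes reach the same objective value, but the warm-started run performs fewer greedy iterations; the paper's contribution is how to train such an initialization from previously visited tasks.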