We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms. Starting from recent regret bounds for the exponential forecaster on losses with dispersed discontinuities, we generalize them to be initialization-dependent and then use this result to propose a practical meta-learning procedure that learns both the initialization and the step-size of the algorithm from multiple online learning tasks. Asymptotically, we guarantee that the average regret across tasks scales with a natural notion of task-similarity that measures the amount of overlap between near-optimal regions of different tasks. Finally, we instantiate the method and its guarantee in two important settings: robust meta-learning and multi-task data-driven algorithm design.
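To make the setting concrete, the following is a minimal illustrative sketch (not the paper's implementation) of an exponentially weighted forecaster over a discretized domain, where the two quantities the abstract says are meta-learned appear explicitly: the initialization as a prior weight vector `w0` and the step-size as a temperature `lam`. The grid discretization and the names `w0`, `lam`, and `exponential_forecaster` are assumptions made for illustration only.

```python
# Illustrative sketch: exponentially weighted forecaster on a grid of K actions.
# The meta-learned quantities are the initialization w0 (a prior over actions)
# and the step-size lam; both are hypothetical names, not the paper's API.
import numpy as np

def exponential_forecaster(losses, w0, lam, rng=None):
    """Run the exponential forecaster under full information.

    losses : array of shape (T, K), loss of each of the K grid points per round
    w0     : array of shape (K,), positive unnormalized prior weights (initialization)
    lam    : float, step-size / learning rate
    Returns the list of sampled action indices, one per round.
    """
    rng = rng or np.random.default_rng(0)
    cum_loss = np.zeros_like(w0, dtype=float)
    played = []
    for loss_t in losses:
        # Play from p_t proportional to w0 * exp(-lam * cumulative loss so far).
        logits = np.log(w0) - lam * cum_loss
        p = np.exp(logits - logits.max())
        p /= p.sum()
        played.append(rng.choice(len(w0), p=p))
        # Observe this round's losses and accumulate them.
        cum_loss += loss_t
    return played
```

In this sketch, meta-learning across tasks would amount to choosing `w0` and `lam` from the outcomes of previous tasks, for instance by placing more prior mass on grid regions that were near-optimal before, which is the role the task-similarity notion plays in the guarantee above.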