Many meta-learning algorithms can be formulated as an interleaved process: task-specific predictors are learned during inner-task adaptation, and meta-parameters are updated during the meta-update. The standard meta-training strategy differentiates through the inner-task adaptation procedure to optimize the meta-parameters, which constrains the inner-task algorithm to be analytically solvable. Under this constraint, only simple algorithms with analytical solutions can serve as inner-task algorithms, limiting model expressiveness. To lift this limitation, we propose an adaptation-agnostic meta-training strategy. With the proposed strategy, stronger algorithms (e.g., an ensemble of different types of algorithms) can be applied as the inner-task algorithm, achieving superior performance compared with popular baselines. The source code is available at https://github.com/jiaxinchen666/AdaptationAgnosticMetaLearning.
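The interleaved structure above can be illustrated with a minimal sketch. This is not the paper's exact strategy: it uses a Reptile-style first-order meta-update as one adaptation-agnostic instantiation, so the inner-task solver can be an arbitrary black box (here an analytic least-squares fit, but any algorithm would do, since no gradients flow through it). The task distribution and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Illustrative task family: 1-D linear regression y = a * x
    # with a slope drawn per task.
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=(20, 1))
    return x, a * x[:, 0]

def adapt(w0, x, y):
    # Inner-task adaptation treated as a black box: any solver works
    # because the meta-update never differentiates through this call.
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

w0 = np.zeros(1)                  # meta-parameter: shared initialization
for step in range(100):
    x, y = sample_task()
    w = adapt(w0, x, y)           # task-specific predictor
    w0 += 0.1 * (w - w0)          # Reptile-style meta-update; adaptation-agnostic
```

Because the meta-update only consumes the adapted parameters `w`, the call to `adapt` could be swapped for gradient descent, a kernel method, or an ensemble of solvers without changing the meta-training loop.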