Model-Agnostic Meta-Learning (MAML) is one of the most successful meta-learning techniques for few-shot learning. It uses gradient descent to learn commonalities across tasks, enabling the model to learn a meta-initialization of its own parameters from which it can quickly adapt to new tasks using a small amount of labeled training data. A key challenge in few-shot learning is task uncertainty: although a strong prior can be obtained by meta-learning over a large number of tasks, a precise model of a new task cannot be guaranteed because the new task's training set is typically too small. In this study, we first propose a new method for choosing initialization parameters, in which a task-specific learner adaptively learns to select the initialization that minimizes the loss on new tasks. We then propose two improvements to the meta-loss: Method 1 generates weights by comparing differences between meta-losses, improving accuracy when the number of classes is small, and Method 2 introduces the homoscedastic uncertainty of each task to weight the multiple losses on top of the original gradient-descent procedure, enhancing generalization to novel classes while preserving the accuracy gains. Compared with previous gradient-based meta-learning methods, our model achieves better performance on regression tasks and few-shot classification, and it is more robust to the learning rate and to the query sets in the meta-test set.
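For context, the standard MAML bi-level update that this work builds on (Finn et al., 2017) can be summarized as follows; this is the usual formulation from that paper, not necessarily the authors' exact variant. For each sampled task $\mathcal{T}_i$, the inner loop adapts the shared initialization $\theta$ on the task's support set, and the outer loop updates $\theta$ from the query-set losses of the adapted parameters:

$$\theta_i' = \theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_\theta), \qquad \theta \leftarrow \theta - \beta \nabla_\theta \sum_{\mathcal{T}_i \sim p(\mathcal{T})} \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'}),$$

where $\alpha$ and $\beta$ are the inner-loop and outer-loop learning rates, respectively.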
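Method 2's uncertainty-based weighting is most naturally read as the homoscedastic-uncertainty multi-task loss of Kendall et al. (2018); as a sketch under that assumption, each task $\mathcal{T}_i$ is given a learned noise parameter $\sigma_i$, and the weighted meta-objective takes the form

$$\mathcal{L}_{\text{meta}} = \sum_{i} \frac{1}{2\sigma_i^2}\, \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'}) + \log \sigma_i,$$

so tasks with higher estimated uncertainty contribute smaller gradients to the meta-update, while the $\log \sigma_i$ term prevents the degenerate solution $\sigma_i \to \infty$.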