This paper proposes a meta-learning approach to evolving a parametrized loss function, called the Meta-Loss Network (MLN), for training image classifiers on small datasets. In our approach, the MLN is embedded in the classification learning framework as a differentiable objective function. The MLN is evolved with an Evolution Strategy (ES) algorithm into an optimized loss function, such that a classifier optimized to minimize this loss achieves good generalization. A classifier is trained on a small training set with Stochastic Gradient Descent (SGD) to minimize the MLN output, and the MLN is then evolved according to the accuracy that this small-dataset-updated classifier attains on a large validation set. To evaluate our approach, the MLN is trained on a large number of small-sample learning tasks sampled from FashionMNIST and tested on validation tasks sampled from FashionMNIST and CIFAR10. Experimental results demonstrate that the MLN effectively improves generalization compared with the classical cross-entropy loss and mean squared error.
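The following is a minimal sketch, in PyTorch, of the two-level training loop the abstract describes: an inner SGD loop that trains a classifier against the MLN, and an outer ES loop (here an OpenAI-style ES, one common variant) that evolves the MLN weights from validation accuracy. The MLN architecture, the ES hyper-parameters, and the `sample_task` helper are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of MLN meta-training: inner SGD against the learned loss,
# outer ES on validation accuracy. Details are assumptions, not the
# paper's exact setup.
import copy
import torch
import torch.nn as nn

class MetaLossNetwork(nn.Module):
    """Parametrized loss: maps (logits, one-hot targets) to a scalar."""
    def __init__(self, num_classes=10, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())  # keep the loss non-negative

    def forward(self, logits, targets):
        onehot = nn.functional.one_hot(targets, logits.size(1)).float()
        return self.net(torch.cat([logits, onehot], dim=1)).mean()

def inner_train(classifier, mln, x, y, steps=5, lr=0.01):
    """SGD on the small training set, using the MLN as the objective."""
    clf = copy.deepcopy(classifier)  # each evaluation starts fresh
    opt = torch.optim.SGD(clf.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mln(clf(x), y).backward()  # gradients flow through the MLN to clf
        opt.step()
    return clf

def fitness(mln_params, mln, classifier, task):
    """Validation accuracy of the classifier after MLN-guided training."""
    torch.nn.utils.vector_to_parameters(mln_params, mln.parameters())
    x_tr, y_tr, x_val, y_val = task
    clf = inner_train(classifier, mln, x_tr, y_tr)
    with torch.no_grad():
        return (clf(x_val).argmax(1) == y_val).float().mean().item()

def evolve(mln, classifier, sample_task, generations=100,
           pop=20, sigma=0.1, lr=0.05):
    """Outer ES loop: perturb MLN weights, rank by validation accuracy."""
    for p in mln.parameters():
        p.requires_grad_(False)  # ES treats MLN weights as a black box
    theta = torch.nn.utils.parameters_to_vector(mln.parameters())
    for _ in range(generations):
        task = sample_task()  # assumed to yield (x_tr, y_tr, x_val, y_val)
        eps = torch.randn(pop, theta.numel())
        scores = torch.tensor([fitness(theta + sigma * e, mln, classifier, task)
                               for e in eps])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        theta = theta + lr / (pop * sigma) * (eps.T @ scores)
    torch.nn.utils.vector_to_parameters(theta, mln.parameters())
```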