In this paper, we build upon the emerging topic of loss function learning, which aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for learning model-agnostic loss functions via a hybrid neuro-symbolic search approach. The framework first uses evolution-based methods to search the space of primitive mathematical operations to find a set of symbolic loss functions. Second, the learned loss functions are parameterized and optimized via an end-to-end gradient-based training procedure. The versatility of the proposed framework is empirically validated on a diverse set of supervised learning tasks. Results show that the meta-learned loss functions discovered by the newly proposed method outperform both the cross-entropy loss and state-of-the-art loss function learning methods on a diverse range of neural network architectures and datasets.
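The two-stage pipeline described above can be illustrated with a deliberately simplified sketch. This is not the paper's implementation: the primitive set, the toy linear-regression task, the exhaustive search standing in for the evolutionary stage, and the single scale parameter `theta` standing in for the full parameterization are all illustrative assumptions. Stage 1 scores each symbolic candidate loss by the test error of a model trained under it; stage 2 wraps the winner with a learnable parameter and tunes it end-to-end (here via finite differences) against the same meta-objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: y = 2x + noise, fit a scalar weight w.
X = rng.normal(size=(64, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=64)

# Stage 1: symbolic candidates built from primitive operations.
# Each maps a residual e = prediction - target to a per-example loss.
PRIMITIVES = {
    "square":   lambda e: e ** 2,
    "abs":      lambda e: np.abs(e),
    "log_cosh": lambda e: np.log(np.cosh(e)),
    "quartic":  lambda e: e ** 4,
}

def fitness(loss_fn, steps=200, lr=0.01):
    """Train w under loss_fn; return final MSE (the meta-objective)."""
    w, h = 0.0, 1e-5
    for _ in range(steps):
        # numeric gradient of the mean candidate loss w.r.t. w
        g = (np.mean(loss_fn((w + h) * X[:, 0] - y))
             - np.mean(loss_fn((w - h) * X[:, 0] - y))) / (2 * h)
        w -= lr * g
    return float(np.mean((w * X[:, 0] - y) ** 2))

# With this tiny primitive set, "evolutionary search" degenerates
# to scoring every candidate and keeping the fittest.
scores = {name: fitness(fn) for name, fn in PRIMITIVES.items()}
best_name = min(scores, key=scores.get)
best = PRIMITIVES[best_name]

# Stage 2: parameterize the winner as loss(e) = best(theta * e) and
# optimize theta against the same meta-objective by gradient descent.
def meta_objective(theta):
    return fitness(lambda e: best(theta * e))

theta = 1.0
for _ in range(10):
    h = 1e-3
    mg = (meta_objective(theta + h) - meta_objective(theta - h)) / (2 * h)
    theta -= 0.5 * mg

print(best_name, scores[best_name], theta)
```

In the paper's actual framework the search space is a space of expression trees explored by genetic programming, and the gradient stage differentiates through the model's training procedure rather than using finite differences; the sketch only conveys the division of labor between the symbolic and gradient-based stages.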