In this paper, we build upon the emerging topic of loss function learning, which aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for learning model-agnostic loss functions via a hybrid neuro-symbolic search approach. The framework first uses evolution-based methods to search the space of primitive mathematical operations for a set of symbolic loss functions. The learned loss functions are then parameterized and optimized via an end-to-end gradient-based training procedure. The versatility of the proposed framework is empirically validated on a diverse set of supervised learning tasks. Results show that the meta-learned loss functions discovered by the newly proposed method outperform both the cross-entropy loss and state-of-the-art loss function learning methods across a range of neural network architectures and datasets.