Designing an effective loss function plays a crucial role in training deep recommender systems. Most existing works leverage a predefined, fixed loss function, which can lead to suboptimal recommendation quality and training efficiency. Some recent efforts rely on exhaustively or manually searched weights to fuse a group of candidate loss functions, which is exceptionally costly in computation and time; they also neglect the varied convergence behaviors of different data examples. In this work, we propose AutoLoss, a framework that can automatically and adaptively search for the appropriate loss function from a set of candidates. Specifically, we develop a novel controller network that dynamically adjusts the loss probabilities in a differentiable manner. Unlike existing algorithms, the proposed controller adaptively generates loss probabilities for different data examples according to their varied convergence behaviors. This design improves the model's generalizability and transferability across deep recommender systems and datasets. We evaluate the proposed framework on two benchmark datasets, and the results show that AutoLoss outperforms representative baselines. Further experiments deepen our understanding of AutoLoss, including its transferability, components, and training efficiency.
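The core idea of fusing candidate losses with per-example probabilities can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it assumes the controller emits per-example logits over K candidate losses, which are normalized with a softmax and used as weights in a differentiable weighted sum. The function names (`fuse_losses`, `softmax`) and the use of plain NumPy are illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row-wise max before exponentiating.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_losses(candidate_losses, controller_logits):
    """Fuse K candidate losses per data example (hypothetical sketch).

    candidate_losses: (batch, K) value of each candidate loss per example.
    controller_logits: (batch, K) scores from the controller network;
        in the actual framework these would depend on each example's
        convergence behavior.
    Returns the fused per-example loss and the loss probabilities.
    """
    probs = softmax(controller_logits, axis=1)      # per-example loss probabilities
    fused = (probs * candidate_losses).sum(axis=1)  # differentiable weighted sum
    return fused, probs

# Toy usage: two examples, two candidate losses.
losses = np.array([[0.5, 1.0],
                   [2.0, 0.1]])
logits = np.array([[0.0, 0.0],   # uniform weighting for example 1
                   [3.0, -3.0]]) # controller strongly prefers loss 1 for example 2
fused, probs = fuse_losses(losses, logits)
```

Because every step is differentiable, gradients can flow back into the controller during training, which is what allows the loss probabilities to be learned rather than hand-tuned.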