Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, exploiting the commonalities and differences across tasks. Compared with training separate models, this can improve both the learning efficiency and the prediction accuracy of the task-specific models. MTL is an approach to inductive transfer: it improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel with a shared representation, so that what is learned for each task can help the other tasks be learned better.
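A common way to realize this shared-representation learning is hard parameter sharing: one shared encoder feeds several task-specific heads, and the per-task losses are summed. Below is a minimal PyTorch sketch; the layer sizes, the choice of tasks, and the equal loss weights are illustrative assumptions rather than a prescription.

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    # One shared encoder (the shared representation) and two task-specific heads.
    def __init__(self, in_dim=16, hidden=64, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.reg_head = nn.Linear(hidden, 1)          # regression task
        self.cls_head = nn.Linear(hidden, n_classes)  # classification task

    def forward(self, x):
        z = self.encoder(x)  # representation shared across tasks
        return self.reg_head(z), self.cls_head(z)

model = HardSharingMTL()
x = torch.randn(8, 16)
y_reg, y_cls = torch.randn(8, 1), torch.randint(0, 3, (8,))
reg_out, cls_out = model(x)
# Joint objective: summing the per-task losses trains the shared encoder
# on the training signals of both tasks (equal weights are an assumption).
loss = nn.functional.mse_loss(reg_out, y_reg) + nn.functional.cross_entropy(cls_out, y_cls)
loss.backward()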

VIP Content

The evidential regression network (ENet) estimates a continuous target and its predictive uncertainty without costly Bayesian model averaging. However, the ENet may produce inaccurate target predictions because of the gradient shrinkage problem of its original loss function, the negative log marginal likelihood (NLL) loss. The goal of this paper is to improve the prediction accuracy of the ENet while preserving its efficient uncertainty estimation by resolving the gradient shrinkage problem. A multi-task learning (MTL) framework, called MT-ENet, is proposed to achieve this goal. In the MTL framework, a Lipschitz-modified mean squared error (MSE) loss is defined as an additional loss and added to the existing NLL loss. The Lipschitz-modified MSE loss is designed to mitigate gradient conflicts with the NLL loss by dynamically adjusting its Lipschitz constant, so that it does not disturb the uncertainty estimation of the NLL loss. MT-ENet improves the prediction accuracy of the ENet without losing uncertainty estimation capability on synthetic datasets and real-world benchmarks, including drug-target affinity (DTA) regression. Moreover, MT-ENet shows remarkable calibration and out-of-distribution detection ability on the DTA benchmarks.

https://www.zhuanzhi.ai/paper/c91e28221315b8539ea96695b53146dc
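To make the loss construction concrete, here is a minimal sketch pairing the negative log marginal likelihood of deep evidential regression (Amini et al., 2020), which the ENet minimizes, with a gradient-capped MSE term in the spirit of the Lipschitz-modified loss. The fixed threshold U below is a hypothetical simplification: in MT-ENet the Lipschitz constant is adjusted dynamically from the evidential parameters so the MSE gradient cannot conflict with the NLL gradient.

import math
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    # NLL of the Normal-Inverse-Gamma evidential head (Amini et al., 2020).
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

def capped_mse(y, gamma, U=1.0):
    # Huber-style MSE whose gradient magnitude never exceeds 2*sqrt(U),
    # i.e. an MSE with a bounded Lipschitz constant (U is fixed here for brevity).
    err = y - gamma
    return torch.where(err ** 2 <= U, err ** 2, 2.0 * math.sqrt(U) * err.abs() - U)

# Toy evidential outputs (a real head enforces nu, beta > 0 and alpha > 1, e.g. via softplus).
y = torch.randn(8)
gamma = torch.randn(8, requires_grad=True)
nu, alpha, beta = torch.ones(8), torch.full((8,), 2.0), torch.ones(8)
loss = (nig_nll(y, gamma, nu, alpha, beta) + capped_mse(y, gamma)).mean()
loss.backward()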


Latest Papers

Controlling a model to generate texts of different categories is a challenging task that is receiving increasing attention. Recently, generative adversarial networks (GANs) have shown promising results for category text generation. However, the texts generated by GANs usually suffer from mode collapse and training instability. To avoid these problems, in this study, inspired by multi-task learning, a novel model called the category-aware variational recurrent neural network (CatVRNN) is proposed. In this model, generation and classification tasks are trained simultaneously to generate texts of different categories. Multi-task learning can improve the quality of the generated texts when the classification task is appropriate. In addition, a function is proposed to initialize the hidden state of the CatVRNN, forcing the model to generate texts of a specific category. Experimental results on three datasets demonstrate that the model outperforms state-of-the-art GAN-based text generation methods in terms of the diversity of the generated texts.
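As a rough illustration of the multi-task idea, the sketch below trains next-token generation and category classification on the same recurrent states, and initializes the hidden state from a category embedding, which is the role played by CatVRNN's hidden-state initialization function. A plain GRU stands in for the variational RNN, and all module names and sizes are illustrative assumptions.

import torch
import torch.nn as nn

class CategoryAwareRNN(nn.Module):
    def __init__(self, vocab=1000, n_cat=3, emb=64, hidden=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, emb)
        self.cat_emb = nn.Embedding(n_cat, hidden)  # seeds the initial hidden state
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab)   # generation task
        self.cls_head = nn.Linear(hidden, n_cat)  # classification task

    def forward(self, tokens, category):
        # A hidden state initialized from the target category steers generation.
        h0 = torch.tanh(self.cat_emb(category)).unsqueeze(0)
        out, _ = self.rnn(self.tok_emb(tokens), h0)
        return self.lm_head(out), self.cls_head(out[:, -1])

model = CategoryAwareRNN()
tokens = torch.randint(0, 1000, (4, 20))
category = torch.randint(0, 3, (4,))
lm_logits, cat_logits = model(tokens, category)
# The two tasks are trained simultaneously on shared recurrent states.
lm_loss = nn.functional.cross_entropy(
    lm_logits[:, :-1].reshape(-1, 1000), tokens[:, 1:].reshape(-1))
cls_loss = nn.functional.cross_entropy(cat_logits, category)
(lm_loss + cls_loss).backward()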
