Overfitting (also called over-learning) in machine learning refers to a model that has become too complex: it performs well on the training set but poorly on the test set, i.e., it generalizes badly. It arises during parameter fitting because the training data contain sampling noise, and a sufficiently complex model fits that noise along with the underlying pattern.
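A minimal sketch of this effect (illustrative only; the data and polynomial degrees are arbitrary choices of ours): fitting polynomials of increasing degree to a small noisy sample with scikit-learn, the high-degree model drives training error toward zero while test error grows.

```python
# Overfitting demo: a degree-15 polynomial fits the sampling noise of a
# 20-point training set, so training MSE falls while test MSE rises.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 20))[:, None]
x_test = np.linspace(0, 1, 200)[:, None]
y_train = np.sin(2 * np.pi * x_train).ravel() + rng.normal(0, 0.3, 20)
y_test = np.sin(2 * np.pi * x_test).ravel()  # noise-free ground truth

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(x_train))
    test_mse = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```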


Title: Time Series Data Augmentation for Deep Learning: A Survey

Abstract:

In recent years, deep learning has performed remarkably well on many time series analysis tasks. The strong performance of deep neural networks relies heavily on large amounts of training data to avoid overfitting. However, labeled data in many real-world time series applications may be limited, e.g., classification of medical time series and anomaly detection in AIOps. Data augmentation is an effective way to enhance the size and quality of the training data and is key to the successful application of deep learning models to time series data. This paper systematically surveys data augmentation methods for time series. We propose a taxonomy for these methods and then provide a structured review by highlighting their strengths and limitations. We also empirically compare data augmentation methods across different tasks, including time series anomaly detection, classification, and forecasting. Finally, we discuss and highlight future research directions, including data augmentation in the time-frequency domain, augmentation combination, and data augmentation and weighting for imbalanced classes.
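As a hedged illustration of the simplest time-domain transform family that such surveys cover (the function names and parameter values below are our choices, not the paper's), jittering and scaling take only a few lines of NumPy:

```python
# Two basic time series augmentations (illustrative sketch): jittering adds
# Gaussian noise per time step; scaling multiplies the series by one random
# factor. Both create new labeled samples from an existing one.
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add i.i.d. Gaussian noise to each time step."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Rescale the whole series by a single random factor."""
    return x * np.random.normal(1.0, sigma)

series = np.sin(np.linspace(0, 4 * np.pi, 128))  # toy univariate series
augmented = [jitter(series), scale(series)]      # two extra training samples
```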

Latest Papers

How to effectively adapt neural machine translation (NMT) models to emerging cases without retraining? Despite the great success of neural machine translation, updating deployed models online remains a challenge. Existing non-parametric approaches that retrieve similar examples from a database to guide the translation process are promising but prone to overfitting the retrieved examples. In this work, we propose to learn Kernel-Smoothed Translation with Example Retrieval (KSTER), an effective approach to adapt neural machine translation models online. Experiments on domain adaptation and multi-domain machine translation datasets show that, even without expensive retraining, KSTER achieves improvements of 1.1 to 1.5 BLEU over the best existing online adaptation methods. The code and trained models are released at https://github.com/jiangqn/KSTER.
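The abstract does not spell out the mechanism, but kernel-smoothed retrieval is commonly realized as an interpolation between the NMT model's next-token distribution and a distribution induced by retrieved examples. The sketch below shows only that generic pattern; all names, the Gaussian kernel, and the fixed mixing weight are our assumptions, not the released KSTER implementation (see the repository above for the actual method).

```python
# Generic kernel-smoothed retrieval sketch (our assumption of the pattern,
# not the KSTER code): retrieved examples vote for target tokens with
# Gaussian-kernel weights, and the result is mixed with the model's own
# next-token distribution.
import numpy as np

def retrieval_distribution(query, keys, values, vocab_size, bandwidth=1.0):
    """Kernel-weighted vote over retrieved examples' target tokens.

    query: (d,) decoder state; keys: (k, d) retrieved decoder states;
    values: (k,) integer target-token ids of the retrieved examples.
    """
    sq_dist = np.sum((keys - query) ** 2, axis=1)   # squared L2 distances
    w = np.exp(-sq_dist / bandwidth)                # Gaussian kernel weights
    w /= w.sum()
    p = np.zeros(vocab_size)
    np.add.at(p, values, w)                         # scatter-add the votes
    return p

def kernel_smoothed_predict(p_model, query, keys, values, lam=0.5):
    """Interpolate model and retrieval distributions with weight lam."""
    p_retrieval = retrieval_distribution(query, keys, values, len(p_model))
    return lam * p_retrieval + (1.0 - lam) * p_model
```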
