In the last decade, motivated by the success of Deep Learning, the scientific community has proposed several approaches to make the learning procedure of Neural Networks more effective. When focusing on the way in which the training data are provided to the learning machine, we can distinguish between the classic random selection of stochastic gradient-based optimization and more involved techniques that devise curricula to organize data and progressively increase the complexity of the training set. In this paper, we propose a novel training procedure named Friendly Training that, unlike the aforementioned approaches, alters the training examples in order to help the model better fulfil its learning criterion. The model is allowed to simplify those examples that are too hard to classify at a certain stage of the training procedure. The data transformation is controlled by a developmental plan that progressively reduces its impact during training, until it completely vanishes. In a sense, this is the opposite of what is commonly done to increase robustness against adversarial examples, i.e., Adversarial Training. Experiments on multiple datasets are provided, showing that Friendly Training yields improvements over informed data sub-selection routines and random selection, especially in deep convolutional architectures. Results suggest that adapting the input data is a feasible way to stabilize learning and improve the generalization skills of the network.
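To make the idea concrete, the following is a minimal sketch of how such a procedure could be set up in a PyTorch-style training loop. It is an illustration under stated assumptions, not the paper's actual implementation: the names (`friendly_perturb`, `budget`, `step_size`) and the linear decay used as the "developmental plan" are hypothetical choices made for readability.

```python
# Hypothetical sketch of Friendly Training: examples are perturbed to *reduce*
# the loss (the opposite of an adversarial attack), and the allowed
# perturbation shrinks to zero as training proceeds.
import torch
import torch.nn.functional as F

def friendly_perturb(model, x, y, steps=3, step_size=0.1, budget=1.0):
    """Move each input toward a simplified version that the model classifies more easily."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step_size * grad          # descend the loss w.r.t. the input
            delta.clamp_(-budget, budget)      # keep the simplification bounded
    return (x + delta).detach()

def train(model, loader, optimizer, epochs=10):
    for epoch in range(epochs):
        # "developmental plan" (assumed linear here): the budget decays to zero,
        # so by the end of training the model sees the unaltered data.
        budget = max(0.0, 1.0 - epoch / max(1, epochs - 1))
        for x, y in loader:
            x_easy = friendly_perturb(model, x, y, budget=budget) if budget > 0 else x
            optimizer.zero_grad()
            F.cross_entropy(model(x_easy), y).backward()
            optimizer.step()
```

The sign of the inner update is what distinguishes this sketch from Adversarial Training: an adversarial attack would ascend the loss with respect to the input, whereas here the input is nudged in the descent direction, making hard examples temporarily easier.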