The paper analyses and positions various error measures applied in neural network training and concludes that no single best measure exists; rather, there is a set of measures whose relative superiority changes across learning situations. A remarkable measure, $E_{Exp}$, published by Silva and his research partners, represents a research direction that successfully combines several measures with fixed importance weighting during learning. The main idea of this paper is to go further and integrate this relative importance into the neural network training algorithm itself, realized through a novel error measure called $E_{ExpAbs}$. This measure is incorporated into the Levenberg-Marquardt training algorithm, so a novel version of that algorithm is also introduced, resulting in a self-adaptive, dynamic learning algorithm. This dynamism has positive effects not only on the accuracy of the resulting model but also on the training process itself. Comprehensive algorithm tests showed that the proposed novel algorithm dynamically integrates the two big worlds of statistics and information theory, which is the key novelty of the paper.
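The core idea of blending error measures with an importance weight that adapts during training can be illustrated with a minimal sketch. The function names, the specific pair of measures (squared error and absolute error), and the adaptation rule below are illustrative assumptions; the exact form of $E_{ExpAbs}$ and its weighting scheme are defined in the paper itself.

```python
import numpy as np

def combined_error(residuals, alpha):
    """Hedged sketch of a combined error measure: blend a squared-error
    term with an absolute-error term using relative weight alpha in [0, 1].
    This is NOT the paper's E_ExpAbs formula, only the weighting idea."""
    e = np.asarray(residuals, dtype=float)
    return alpha * np.mean(e ** 2) + (1.0 - alpha) * np.mean(np.abs(e))

def adapt_alpha(residuals, k=1.0):
    """Illustrative self-adaptive weighting (an assumption, not the
    paper's rule): shift importance toward the squared-error term as the
    residual spread shrinks, so the blend changes dynamically per epoch."""
    spread = np.std(np.asarray(residuals, dtype=float))
    return 1.0 / (1.0 + k * spread)  # always in (0, 1]

# Per-epoch usage inside a (hypothetical) training loop:
residuals = np.array([0.5, -1.0, 0.2])
alpha = adapt_alpha(residuals)          # weight recomputed each epoch
loss = combined_error(residuals, alpha) # dynamic blended error measure
```

In a Levenberg-Marquardt setting, such a blended measure would replace the plain sum-of-squares objective, with the weight recomputed as training progresses rather than fixed in advance.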