Artificial Neural Networks (NNs) are widely used to solve complex problems, from medical diagnostics to face recognition. Despite notable successes, their main disadvantages are also well known: the risk of overfitting, the lack of explainability (the inability to extract algorithms from a trained NN), and high consumption of computing resources. Determining an appropriate NN structure for each problem can help overcome these difficulties: a network that is too small cannot be trained successfully, while one that is too rich yields unexplainable results and has a high chance of overfitting. Reducing the precision of NN parameters simplifies the implementation of these networks, saves computing resources, and makes the network's skills more transparent. This paper lists the basic NN simplification problems and the controlled pruning procedures that solve them. All the described pruning procedures can be implemented in a single framework. In particular, the developed procedures find the optimal NN structure for each task, measure the influence of each input signal and each NN parameter, and provide a detailed verbal description of the algorithms and skills of the NN. The described methods are illustrated by a simple example: the generation of explicit algorithms for predicting the results of US presidential elections.
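To make the notion of pruning concrete, the following is a minimal sketch of one common pruning heuristic, magnitude-based pruning, in which connections with small absolute weights are removed. This is an illustrative assumption, not the controlled pruning procedures developed in the paper; the function name and threshold are hypothetical.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Return a copy of `weights` with small-magnitude entries set to zero.

    Zeroing a weight corresponds to deleting that connection from the
    network, which simplifies the structure and can make the remaining
    skills of the network easier to interpret.
    """
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

# A toy 2x3 weight matrix between two layers (values chosen for illustration).
W = np.array([[0.90, -0.02, 0.40],
              [0.01, -0.70, 0.03]])

W_pruned = prune_by_magnitude(W, threshold=0.1)
sparsity = np.mean(W_pruned == 0.0)  # fraction of removed connections
```

In practice, pruned networks are usually retrained or fine-tuned so that the remaining weights compensate for the deleted connections; the sketch above shows only the structural simplification step.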