As machine learning models rapidly grow more accurate, so does their demand for energy and compute resources. At a low level, a large share of these resources is consumed by data movement between different memory units. Modern hardware architectures contain a form of fast memory (e.g., caches, registers), which is small, and a slow memory (e.g., DRAM), which is larger but expensive to access. We can only process data that is stored in fast memory, which incurs data movement (input/output operations, or I/Os) between the two units. In this paper, we provide a rigorous theoretical analysis of the I/Os needed in sparse feedforward neural network (FFNN) inference. We establish bounds that determine the optimal number of I/Os up to a factor of 2 and present a method that uses a number of I/Os within that range. Much of the I/O complexity is determined by a few high-level properties of the FFNN (the number of inputs, outputs, neurons, and connections), but to get closer to the exact lower bound, the instance-specific sparsity patterns must be taken into account. Starting from the 2-optimal computation strategy, we show how to reduce the number of I/Os further with simulated annealing. Complementing this result, we provide an algorithm that constructively generates networks with maximum I/O-efficiency for inference. We test the algorithms and empirically verify our theoretical and algorithmic contributions. In our experiments on real hardware, we observe speedups of up to 45$\times$ relative to the standard way of performing inference.
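To make the two-level memory model concrete, the following is a minimal sketch, not the paper's algorithm: it counts the I/Os incurred by a straightforward neuron-by-neuron evaluation of a sparse FFNN, given a fast memory that holds at most M values. The function name `count_ios`, the per-neuron adjacency format, and the LRU replacement policy are all illustrative assumptions.

```python
# A coarse I/O cost model: reading a value from slow memory costs one I/O,
# and writing a computed value back to slow memory costs one I/O. Values
# resident in fast memory (capacity M) can be used for free.
from collections import OrderedDict

def count_ios(layers, M):
    """layers: list of dicts mapping neuron id -> list of predecessor ids
    (the sparsity pattern). Returns the I/O count of a plain layer-by-layer
    evaluation with LRU replacement in fast memory."""
    cache, dirty, ios = OrderedDict(), set(), 0

    def make_room():
        nonlocal ios
        if len(cache) >= M:
            victim, _ = cache.popitem(last=False)  # evict least recently used
            if victim in dirty:                    # computed values must be
                dirty.discard(victim)              # written back: one I/O
                ios += 1

    def load(v):
        nonlocal ios
        if v in cache:
            cache.move_to_end(v)                   # hit: no I/O
            return
        make_room()
        cache[v] = True
        ios += 1                                   # miss: one read I/O

    for layer in layers:
        for neuron, preds in layer.items():
            for p in preds:
                load(p)                            # fetch every input activation
            make_room()
            cache[neuron] = True                   # computed in fast memory
            dirty.add(neuron)                      # must eventually be written
    return ios + len(dirty)                        # flush remaining values
```

For example, two hidden neurons sharing both inputs followed by one output neuron, with M = 3, incur 6 I/Os under this model (the shared inputs are read once, but the small fast memory forces one write-back and re-read of a hidden activation):

```python
layers = [{"h0": ["x0", "x1"], "h1": ["x0", "x1"]},
          {"y":  ["h0", "h1"]}]
print(count_ios(layers, M=3))  # -> 6
```

Such a simulation only evaluates a fixed computation order; finding an order that minimizes the count is where the instance-specific sparsity pattern and the annealing-based optimization mentioned above come into play.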