The massive use of artificial neural networks (ANNs), increasingly popular in many areas of scientific computing, is rapidly increasing the energy consumption of modern high-performance computing systems. An appealing and possibly more sustainable alternative is provided by novel neuromorphic paradigms, which implement ANNs directly in hardware. However, little is known about the actual benefits of running ANNs on neuromorphic hardware for use cases in scientific computing. Here we present a methodology for measuring the energy cost and compute time of inference tasks with ANNs on conventional hardware. In addition, we have designed an architecture for these tasks and estimate the same metrics based on a state-of-the-art analog in-memory computing (AIMC) platform, one of the key paradigms in neuromorphic computing. Both methodologies are compared for a use case in quantum many-body physics in two-dimensional condensed matter systems and for anomaly detection at 40 MHz rates at the Large Hadron Collider in particle physics. We find that AIMC can achieve up to one order of magnitude shorter computation times than conventional hardware, at an energy cost that is up to three orders of magnitude smaller. This suggests great potential for faster and more sustainable scientific computing with neuromorphic hardware.