The impressive performance of artificial neural networks has come at the cost of high energy usage and CO$_2$ emissions. Unconventional computing architectures, with magnetic systems as one candidate, have potential as alternative energy-efficient hardware, but still face implementation challenges such as stochastic behaviour. Here, we present a methodology for exploiting the traditionally detrimental stochastic effects in magnetic domain-wall motion in nanowires. We demonstrate functional binary stochastic synapses alongside a gradient learning rule that allows them to be trained and that is applicable to a range of stochastic systems. The rule, utilising the mean and variance of the neuronal output distribution, finds a trade-off between synaptic stochasticity and energy efficiency depending on the number of measurements of each synapse. For single measurements, the rule yields binary synapses with minimal stochasticity, sacrificing potential performance for robustness. For multiple measurements, synaptic distributions are broad, approximating better-performing continuous synapses. This observation allows design principles to be chosen according to the desired performance and the device's operational speed and energy cost. We verify performance on physical hardware, showing it to be comparable to that of a standard neural network.
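To make the idea of training binary stochastic synapses via the mean and variance of the neuronal output concrete, the following is a minimal illustrative sketch, not the paper's exact rule. The latent per-synapse probability `p`, the functions `sample_binary_weights`, `stochastic_forward` and `surrogate_grad_update`, and the variance-based damping term `var_penalty` are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_binary_weights(p, n_meas):
    """Draw n_meas binary (+1/-1) weight realisations; p is the
    per-synapse probability of the +1 state (shape: [out, in])."""
    u = rng.random((n_meas, *p.shape))
    return np.where(u < p, 1.0, -1.0)


def stochastic_forward(x, p, n_meas=1):
    """Propagate input x through binary stochastic synapses and return
    the mean and variance of the neuronal pre-activations over
    n_meas independent measurements."""
    w = sample_binary_weights(p, n_meas)        # (n_meas, out, in)
    z = np.einsum('moi,i->mo', w, x)            # (n_meas, out)
    return z.mean(axis=0), z.var(axis=0)


def surrogate_grad_update(p, x, grad_out, z_var, lr=0.01, var_penalty=0.1):
    """Illustrative update (assumption): the usual outer-product gradient
    through the mean output, damped where the output variance is large,
    pushing noisy synapses towards near-deterministic p values."""
    damping = 1.0 / (1.0 + var_penalty * z_var)  # (out,)
    grad_p = np.outer(grad_out * damping, x)     # (out, in)
    # Expected weight is 2p - 1, hence the factor of 2 in dE[w]/dp.
    return np.clip(p - lr * 2.0 * grad_p, 0.0, 1.0)


# Toy usage: one layer, squared-error loss on the mean output.
p = rng.random((4, 8))               # latent +1 probabilities, 4x8 synapses
x = rng.standard_normal(8)
target = np.zeros(4)
z_mean, z_var = stochastic_forward(x, p, n_meas=16)
grad_out = 2.0 * (z_mean - target)   # dLoss/dz_mean for squared error
p = surrogate_grad_update(p, x, grad_out, z_var)
```

In this sketch, increasing `n_meas` narrows the estimate of the mean output at the cost of more read operations per synapse, which mirrors the trade-off between stochasticity and energy efficiency described above.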