Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximation algorithms for MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approaches can shift the dominant computational cost to stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that exploit randomness as natural brains do, we hypothesize that the intrinsic randomness of microelectronic devices could become a valuable component of a neuromorphic architecture, enabling more efficient computation. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits compare favorably with software solvers and argue that this neuromorphic hardware implementation provides a path toward scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computing architectures.
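To make concrete why stochastic sampling can dominate the cost of approximate MAXCUT, the following is a minimal sketch of a generic randomized heuristic, not the neuromorphic circuit described in this work: each sample assigns every vertex to a random side of the cut, and the best sampled partition is kept. The function and parameter names (sampled_maxcut, random_cut_value, num_samples) are illustrative choices for this sketch only. In this software baseline, the work per sample is essentially the generation of one random bit per vertex, which is the sampling cost that the hardware approach aims to absorb into the devices themselves.

import random

def random_cut_value(edges, n, rng=random):
    """Sample one random partition (each vertex assigned a side with
    probability 1/2) and return its cut value and the partition."""
    side = [rng.random() < 0.5 for _ in range(n)]  # the stochastic sampling step
    value = sum(1 for u, v in edges if side[u] != side[v])
    return value, side

def sampled_maxcut(edges, n, num_samples=1000, rng=random):
    """Keep the best of num_samples random partitions.  A single random
    partition cuts half the edges in expectation, so the dominant cost of
    this baseline is generating the random bits, i.e. the sampling itself."""
    best_value, best_side = -1, None
    for _ in range(num_samples):
        value, side = random_cut_value(edges, n, rng)
        if value > best_value:
            best_value, best_side = value, side
    return best_value, best_side

# Example: a 4-cycle, whose maximum cut has value 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(sampled_maxcut(edges, n=4, num_samples=100)[0])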