Hardware-based neuromorphic computing remains an elusive goal with the potential to profoundly impact future technologies and deepen our understanding of emergent intelligence. The learning-from-mistakes algorithm is one of the few training algorithms inspired by the brain's simple learning rules, using inhibition and pruning to demonstrate self-organized learning. Here we implement this algorithm in purely neuromorphic memristive hardware through a co-design process that requires evaluating hardware trade-offs and constraints. Learning-from-mistakes has been shown to successfully train small networks to function as binary classifiers and perceptrons; however, without tailoring the hardware to the algorithm, performance decreases exponentially as the network size increases. In implementing neuromorphic algorithms on neuromorphic hardware, we investigate the trade-offs between depth, controllability, and capacity, where capacity is the number of learnable patterns. We emphasize the significance of topology and of governing equations, demonstrating theoretical tools that aid the co-design of neuromorphic hardware and algorithms. We provide quantitative techniques for evaluating the computational capacity of a neuromorphic device based on the measurements performed and the underlying circuit structure. This approach shows that breaking the symmetry of a neural network can increase both the controllability and the average network capacity. Through pruning, neuromorphic algorithms in all-memristive device circuits leverage stochastic resources to drive local contrast in network weights. Our combined experimental and simulation efforts explore the parameters that make a network suited for displaying emergent intelligence from simple rules.