The surge in interest in Artificial Intelligence (AI) over the past decade has been driven almost exclusively by advances in Artificial Neural Networks (ANNs). While ANNs set state-of-the-art performance on many previously intractable problems, they require large amounts of data and computational resources for training, and because they employ supervised learning they typically need to know the correctly labelled response for every training example, limiting their scalability to real-world domains. Spiking Neural Networks (SNNs) are an alternative to ANNs that use more brain-like artificial neurons and can use unsupervised learning to discover recognizable features in the input data without knowing correct responses. SNNs, however, struggle with dynamical stability and cannot match the accuracy of ANNs. Here we show how an SNN can overcome many of the shortcomings that have been identified in the literature, including offering a principled solution to the vanishing spike problem, to outperform all existing shallow SNNs and equal the performance of an ANN. It accomplishes this while using unsupervised learning with unlabelled data and only 1/50th of the training epochs (labelled data is used only for a final simple linear readout layer). This result makes SNNs a viable new method for fast, accurate, efficient, explainable, and re-deployable machine learning with unlabelled datasets.
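To make the training pipeline described above concrete, the sketch below illustrates the general pattern of unsupervised spiking feature learning followed by a supervised linear readout. It is a minimal toy example, not the authors' architecture: the LIF dynamics, the simplified STDP-style Hebbian rule, the Poisson-like rate encoding, the synthetic data, and all layer sizes and learning rates (`n_hidden`, `tau`, `eta`, etc.) are illustrative assumptions. Note that labels enter only when fitting the final linear readout.

```python
# Minimal sketch of the pipeline: unsupervised spiking feature layer + linear readout.
# All parameters and the learning rule are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic data: noisy variations of two fixed input prototypes -----------------
n_inputs, n_hidden, n_classes = 64, 32, 2
prototypes = rng.random((n_classes, n_inputs))

def sample(label):
    return np.clip(prototypes[label] + 0.1 * rng.standard_normal(n_inputs), 0.0, 1.0)

# --- Unsupervised spiking layer: LIF neurons with a simplified STDP-like rule -------
W = rng.random((n_hidden, n_inputs)) * 0.3      # input -> hidden weights
tau, v_thresh, eta = 20.0, 0.5, 0.01            # membrane time constant, threshold, learning rate

def run_layer(x, T=50, learn=False):
    """Present one input for T timesteps; return hidden spike counts as features."""
    global W
    v = np.zeros(n_hidden)
    counts = np.zeros(n_hidden)
    for _ in range(T):
        pre = (rng.random(n_inputs) < x).astype(float)   # Poisson-like rate encoding
        v += (-v + W @ pre) / tau                        # leaky integration
        post = v > v_thresh                              # which neurons spike
        v[post] = 0.0                                    # reset spiking neurons
        counts += post
        if learn and post.any():
            # Hebbian/STDP-flavoured update: potentiate weights from active inputs
            # to spiking neurons, with a decay term to keep weights bounded.
            W[post] += eta * (pre - 0.5 * W[post])
            W = np.clip(W, 0.0, 1.0)
    return counts

# Unsupervised phase: labels are never used here.
for _ in range(500):
    run_layer(sample(rng.integers(n_classes)), learn=True)

# --- Supervised phase: fit a simple linear readout on spike-count features ----------
train_labels = rng.integers(n_classes, size=200)
train_feats = np.stack([run_layer(sample(y)) for y in train_labels])
readout, *_ = np.linalg.lstsq(train_feats, np.eye(n_classes)[train_labels], rcond=None)

test_labels = rng.integers(n_classes, size=100)
test_feats = np.stack([run_layer(sample(y)) for y in test_labels])
acc = np.mean(np.argmax(test_feats @ readout, axis=1) == test_labels)
print(f"linear-readout accuracy on synthetic data: {acc:.2f}")
```

The design point the sketch mirrors is that the spiking layer's weights are learned entirely from unlabelled inputs, so only the small least-squares readout ever touches labels; any labelled-data cost is confined to that final linear fit.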