While current deep learning algorithms have been successful across a wide variety of artificial intelligence (AI) tasks, including those involving structured image data, they raise deep neurophysiological conceptual issues due to their reliance on gradients computed by backpropagation of errors (backprop) to obtain synaptic weight adjustments, and are hence biologically implausible. We present a more biologically plausible approach, the error-kernel driven activation alignment (EKDAA) algorithm, for training convolutional neural networks (CNNs) using locally derived error transmission kernels and error maps. We demonstrate the efficacy of EKDAA on visual recognition tasks using the Fashion MNIST, CIFAR-10, and SVHN benchmarks, and conduct black-box robustness tests on adversarial examples derived from these datasets. Furthermore, we present results for a CNN trained with a non-differentiable activation function. All recognition results nearly match those of backprop, and EKDAA-trained networks exhibit greater adversarial robustness than their backprop-trained counterparts.