`Biologically inspired' activation functions, such as the logistic sigmoid, have been instrumental in the historical advancement of machine learning. However, in the field of deep learning they have been largely displaced by rectified linear units (ReLU) or similar functions, such as the exponential linear unit (ELU) variant, to mitigate the effects of vanishing gradients associated with error back-propagation. The logistic sigmoid, however, does not represent the true input-output relation in neuronal cells under physiological conditions. Here, bionodal root unit (BRU) activation functions are introduced, exhibiting input-output non-linearities that are substantially more biologically plausible because their functional form is based on known biophysical properties of neuronal cells. To evaluate the learning performance of BRU activations, deep networks are constructed with identical architectures that differ only in their transfer functions (ReLU, ELU, or BRU). Multilayer perceptrons, stacked auto-encoders, and convolutional networks are used to test supervised and unsupervised learning on the MNIST and CIFAR-10/100 datasets. Comparisons of learning performance, quantified using loss and error measurements, demonstrate that bionodal networks both train faster than their ReLU and ELU counterparts and yield the best-generalised models, even in the absence of formal regularisation. These results suggest that revisiting the detailed properties of biological neurones and their circuitry might prove invaluable to the future of deep learning.
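The abstract does not specify the BRU functional form, which is given in the paper body. Purely as an illustrative sketch, assuming PyTorch, the comparison protocol of identical architectures differing only in their transfer functions might look as follows; the `RootUnit` class here is a hypothetical root-type non-linearity standing in for BRU, not the paper's definition.

```python
import torch
import torch.nn as nn


class RootUnit(nn.Module):
    """Hypothetical root-type activation used only as a stand-in for BRU:
    a sub-linear root response (1 + r*z)**(1/r) - 1 for positive inputs
    and an ELU-like exponential branch for negative inputs. The actual
    BRU definitions appear in the paper body."""

    def __init__(self, r: float = 2.0):
        super().__init__()
        self.r = r

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Clamp each branch so neither produces NaN/overflow outside its range.
        pos = (1.0 + self.r * z.clamp(min=0.0)) ** (1.0 / self.r) - 1.0
        neg = torch.expm1(z.clamp(max=0.0))
        return torch.where(z > 0.0, pos, neg)


def make_mlp(activation: nn.Module) -> nn.Sequential:
    """Build MLPs identical in every respect except the transfer function,
    mirroring the comparison described in the abstract (MNIST-sized:
    784 inputs, 10 classes)."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 256), activation,
        nn.Linear(256, 256), activation,
        nn.Linear(256, 10),
    )


# Three networks sharing one architecture, differing only in activation.
models = {
    "ReLU": make_mlp(nn.ReLU()),
    "ELU": make_mlp(nn.ELU()),
    "BRU (stand-in)": make_mlp(RootUnit(r=2.0)),
}
```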