Existing convolution techniques in artificial neural networks suffer from high computational complexity, whereas biological neural networks operate in a far more powerful yet efficient way. Inspired by the biological plasticity of dendritic topology and synaptic strength, our method, Learnable Heterogeneous Convolution, realizes joint learning of kernel shape and weights, unifying existing handcrafted convolution techniques in a data-driven way. A model based on our method converges to structurally sparse weights and can then be accelerated on highly parallel devices. In our experiments, the method either reduces the computation of VGG16/19 and ResNet34/50 by nearly 5x on CIFAR10 and 2x on ImageNet without harming performance, while compressing the weights by 10x and 4x respectively, or improves accuracy by up to 1.0% on CIFAR10 and 0.5% on ImageNet with slightly higher efficiency. The code will be available at www.github.com/Genera1Z/LearnableHeterogeneousConvolution.
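To illustrate the core idea of jointly learning kernel shape and weights, below is a minimal PyTorch-style sketch, not the authors' exact method: a per-position shape mask (here a soft sigmoid gate, a hypothetical simplification) is optimized together with the convolution weights by the same loss, so the effective kernel shape emerges from training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedShapeConv2d(nn.Module):
    """Illustrative sketch: a conv layer whose kernel shape (per-position mask)
    is learned jointly with its weights. This approximates the idea of
    Learnable Heterogeneous Convolution; it is not the paper's implementation."""

    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        # One learnable logit per output channel and kernel position;
        # sigmoid turns it into a soft shape mask in [0, 1].
        self.shape_logits = nn.Parameter(torch.zeros(out_ch, 1, k, k))
        self.padding = padding

    def forward(self, x):
        mask = torch.sigmoid(self.shape_logits)      # soft kernel shape
        return F.conv2d(x, self.weight * mask, padding=self.padding)

# Usage: mask and weights receive gradients from the same loss,
# so kernel shape and weight values are optimized jointly.
layer = MaskedShapeConv2d(3, 16)
y = layer(torch.randn(1, 3, 32, 32))
y.mean().backward()
```

After training, near-zero mask entries indicate kernel positions that can be pruned, yielding the structurally sparse weights described above.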