Hypercomplex neural networks have been shown to reduce the overall number of parameters while maintaining strong performance by leveraging the properties of Clifford algebras. Recently, hypercomplex linear layers have been further improved by involving efficient parameterized Kronecker products. In this paper, we define the parameterization of hypercomplex convolutional layers and introduce the family of parameterized hypercomplex neural networks (PHNNs), which are lightweight and efficient large-scale models. Our method learns the convolution rules and the filter organization directly from the data, without requiring a rigidly predefined domain structure to follow. PHNNs can flexibly operate in any user-defined or tuned domain, from 1D to $n$D, regardless of whether the algebra rules are preset. This malleability allows multidimensional inputs to be processed in their natural domain without appending further dimensions, as is instead done in quaternion neural networks for 3D inputs such as color images. As a result, the proposed family of PHNNs operates with $1/n$ of the free parameters of its analog in the real domain. We demonstrate the versatility of this approach across multiple application domains by performing experiments on various image and audio datasets, in which our method outperforms real- and quaternion-valued counterparts. Full code is available at: https://github.com/eleGAN23/HyperNets.
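To make the parameterized Kronecker-product idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: a parameterized hypercomplex layer builds its weight matrix as $W = \sum_{i=1}^{n} A_i \otimes F_i$, where the $n \times n$ matrices $A_i$ encode the (learned) algebra multiplication rules and the $F_i$ are the learnable filter blocks. All names (`phm_weight`, `A`, `F`) and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def phm_weight(A, F):
    # W = sum_i kron(A_i, F_i)
    # A: (n, n, n) learned "algebra" matrices; F: (n, k//n, d//n) filter blocks
    return sum(np.kron(A[i], F[i]) for i in range(A.shape[0]))

n, d, k = 4, 8, 12                            # hypercomplex dim, input size, output size
A = rng.standard_normal((n, n, n))            # would be learned from data in a PHNN
F = rng.standard_normal((n, k // n, d // n))  # would be learned filter blocks

W = phm_weight(A, F)                          # full (k, d) weight, never stored as parameters
x = rng.standard_normal(d)
y = W @ x                                     # forward pass of the hypercomplex layer
```

The free parameters are only `A` and `F`, i.e. $n^3 + kd/n$ values instead of the $kd$ of a real-valued layer; for large layers the $kd/n$ term dominates, which is the source of the roughly $1/n$ parameter saving claimed above.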