In this work, we propose activation functions for neural networks that are refinable and sum to the identity. This new class of activation functions allows the insertion of new layers between existing ones and/or an increase in the number of neurons in a layer, both without altering the network outputs. Our approach is grounded in subdivision theory: the proposed activation functions are constructed from the basic limit functions of convergent subdivision schemes. As a showcase of our results, we introduce a family of spline activation functions and provide comprehensive details for their practical implementation.
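As an illustration of the two defining properties (not the paper's own construction), consider the linear B-spline, or hat function, a classical basic limit function of a convergent binary subdivision scheme with mask (1/2, 1, 1/2). The sketch below numerically checks that it is refinable and that its integer shifts, weighted by the shift indices, sum to the identity; the function name `hat` and the evaluation grid are assumptions made for this example.

```python
import numpy as np

def hat(x):
    # Linear B-spline (hat function): supported on [-1, 1],
    # a basic limit function of a convergent binary subdivision scheme.
    return np.maximum(0.0, 1.0 - np.abs(x))

x = np.linspace(-3.0, 3.0, 601)

# Property 1 (refinability):
#   hat(x) = 1/2*hat(2x+1) + hat(2x) + 1/2*hat(2x-1)
refined = 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)
assert np.allclose(refined, hat(x))

# Property 2 (summing to the identity):
#   sum_k k * hat(x - k) = x for all x,
# so the shifts reproduce the identity map exactly.
ks = np.arange(-5, 6)  # shifts covering the support of hat on [-3, 3]
identity = sum(k * hat(x - k) for k in ks)
assert np.allclose(identity, x)
```

These two identities are what make lossless network growth possible: refinability lets a layer be rewritten on a finer grid (a new intermediate layer), while identity reproduction lets added neurons cancel out, leaving the network outputs unchanged.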