Designing effective neural networks requires tuning many architectural elements. This study integrates fractional calculus into neural networks by introducing the fractional derivative order (FDO) as a tunable parameter in activation functions, so that a family of diverse activation functions can be generated by adjusting the FDO. We evaluate these fractional activation functions on various datasets and network architectures, comparing their performance with traditional and more recent activation functions. Our experiments assess their impact on accuracy, time complexity, computational overhead, and memory usage. Results suggest that fractional activation functions, particularly the fractional Sigmoid, offer benefits in some scenarios, although challenges related to consistency and efficiency remain. Practical implications and limitations are discussed.
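As a minimal illustration of the idea (not the exact formulation used in this study), a fractional Sigmoid with a learnable FDO could be sketched as below, assuming a truncated Grünwald-Letnikov approximation of the fractional derivative and PyTorch-style modules; the class name, truncation depth, and step size are hypothetical choices for exposition only.

```python
import torch
import torch.nn as nn


class FractionalSigmoid(nn.Module):
    """Sigmoid activation with a learnable fractional derivative order (FDO).

    Hypothetical sketch: approximates the Grünwald-Letnikov fractional
    derivative of the sigmoid with a truncated series, so alpha = 0 recovers
    the plain sigmoid and alpha = 1 approaches its first derivative.
    """

    def __init__(self, alpha: float = 0.5, num_terms: int = 16, h: float = 0.1):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))  # tunable FDO
        self.num_terms = num_terms                      # series truncation (assumed)
        self.h = h                                      # finite-difference step (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = self.alpha
        out = torch.sigmoid(x)  # k = 0 term, weight w_0 = 1
        w = torch.ones((), device=x.device, dtype=x.dtype)
        for k in range(1, self.num_terms):
            # Grünwald-Letnikov weights: w_k = w_{k-1} * (1 - (alpha + 1) / k)
            w = w * (1.0 - (alpha + 1.0) / k)
            out = out + w * torch.sigmoid(x - k * self.h)
        return out / (self.h ** alpha)


# Usage: drop-in replacement for a fixed activation in a small MLP.
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(4, 8), FractionalSigmoid(), nn.Linear(8, 2))
    y = model(torch.randn(3, 4))
    print(y.shape)  # torch.Size([3, 2])
```

Because the FDO `alpha` is an `nn.Parameter`, it is updated by backpropagation along with the network weights, which is one way to realize the "tunable parameter" described above.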