Modularity is essential to many well-performing structured systems, as it is a useful means of managing complexity [8]. Analysing the modularity of neural networks produced by machine learning algorithms can offer valuable insight into the workings of such algorithms and into how modularity can be leveraged to improve performance. However, this property is often overlooked in the neuroevolutionary literature, so the modular nature of many learning algorithms remains unknown. Here, this property is assessed on the popular algorithm "NeuroEvolution of Augmenting Topologies" (NEAT), chosen for its ability to optimise network topology, using standard simulated benchmark control problems. This paper shows that the modularity of NEAT networks increases rapidly over time, with the rate and point of convergence depending on the problem. Interestingly, NEAT continues to tend towards increasingly modular networks even after network fitness has converged. It is also shown that the ideal level of network modularity in the explored parameter space is highly dependent on other network variables, dispelling theories that modularity has a straightforward relationship to network performance. This is further supported by demonstrating that directly rewarding modularity did not improve fitness.
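Since the analysis hinges on quantifying network modularity, the following is a minimal sketch of one common way to score it: Newman's modularity Q over a community partition of the network's connectivity graph, found here by greedy modularity maximisation via networkx. This is an illustrative assumption; the paper's exact modularity measure and tooling are not specified in this abstract and may differ.

```python
# Hypothetical sketch: scoring the modularity of an evolved network's
# connectivity graph. Assumes modularity is measured as Newman's Q over
# communities found by greedy modularity maximisation (CNM algorithm).
import networkx as nx
from networkx.algorithms import community

def modularity_score(edges):
    """Build an undirected graph from (src, dst) connection genes and
    return Newman's modularity Q for the best greedy partition."""
    G = nx.Graph()
    G.add_edges_from(edges)
    communities = community.greedy_modularity_communities(G)
    return community.modularity(G, communities)

# Example: two densely connected clusters joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2),   # cluster A (triangle)
         (3, 4), (4, 5), (3, 5),   # cluster B (triangle)
         (2, 3)]                   # bridge between the clusters
print(modularity_score(edges))     # Q ~ 0.36: clearly modular structure
```

A fully connected graph over the same nodes would score close to zero, so tracking this quantity across generations gives the kind of modularity-over-time curve the paper describes.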