This paper presents a novel method, called Modular Grammatical Evolution (MGE), to validate the hypothesis that restricting the solution space of NeuroEvolution to modular and simple neural networks enables the efficient generation of smaller and more structured neural networks while providing acceptable (and in some cases superior) accuracy on large data sets. MGE also advances state-of-the-art Grammatical Evolution (GE) methods in two ways. First, MGE's representation is modular in that each individual consists of a set of genes, and each gene is mapped to a neuron by grammatical rules. Second, the proposed representation mitigates two important drawbacks of GE, namely low scalability and weak locality of representation, enabling the generation of modular, multi-layer networks with a large number of neurons. We define and evaluate five different forms of structures, with and without modularity, using MGE, and find single-layer modules with no coupling to be the most effective. Our experiments demonstrate that modularity helps in finding better neural networks faster. We validate the proposed method on ten well-known classification benchmarks with different sizes, feature counts, and output class counts. Our experimental results indicate that MGE provides superior accuracy compared with existing NeuroEvolution methods and returns classifiers that are significantly simpler than those generated by other machine learning methods. Finally, we empirically demonstrate that MGE outperforms other GE methods in terms of locality and scalability.