Neuroevolutionary algorithms, which automatically search for neural network structures by means of evolutionary techniques, are computationally costly procedures. Despite this, they are widely applied because of the strong performance of the architectures they discover. The final outcome of a neuroevolutionary process is the best structure found during the search, and the rest of the procedure is commonly discarded in the literature. However, these searches also produce a considerable amount of residual information containing valuable knowledge that can be extracted. In this paper, we propose an approach that extracts this information from neuroevolutionary runs and uses it to build a metamodel that can positively impact future neural architecture searches. More specifically, by inspecting the best structures found during neuroevolutionary searches of generative adversarial networks with varying characteristics (e.g., based on dense or convolutional layers), we propose a Bayesian network-based model that can be used to find strong neural structures right away, to conveniently initialize structural searches for different problems, or to help future optimization of structures of any type keep finding increasingly better structures where uninformed methods get stuck in local optima.
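To make the idea concrete, the following is a minimal, hypothetical sketch rather than the paper's actual implementation: it assumes each elite architecture harvested from past neuroevolutionary runs is encoded as a sequence of layer descriptors, fits a simple chain-structured Bayesian network (conditional frequency tables P(layer_i | layer_{i-1})) from those sequences, and samples fresh candidate structures from it to seed a new search. The function names `fit_chain_model` and `sample_architecture`, the layer encoding, and the toy data are illustrative assumptions.

```python
import random
from collections import defaultdict, Counter

# Assumed representation: an architecture is a sequence of layer descriptors,
# e.g. ("dense", 256) or ("conv", 64).  We model it with a chain-structured
# Bayesian network, i.e. P(layer_i | layer_{i-1}), learned from the best
# structures of previous neuroevolutionary runs.

START, END = "<start>", "<end>"


def fit_chain_model(architectures):
    """Estimate conditional frequency tables P(next layer | current layer)."""
    counts = defaultdict(Counter)
    for arch in architectures:
        prev = START
        for layer in arch:
            counts[prev][layer] += 1
            prev = layer
        counts[prev][END] += 1  # mark where elite architectures tend to stop
    return counts


def sample_architecture(counts, max_layers=10, rng=random):
    """Draw one candidate architecture from the fitted chain model."""
    arch, prev = [], START
    for _ in range(max_layers):
        layers, weights = zip(*counts[prev].items())
        nxt = rng.choices(layers, weights=weights, k=1)[0]
        if nxt == END:
            break
        arch.append(nxt)
        prev = nxt
    return arch


if __name__ == "__main__":
    # Hypothetical "best structures" collected from earlier searches.
    elite = [
        [("dense", 128), ("dense", 256), ("dense", 784)],
        [("conv", 64), ("conv", 128), ("dense", 784)],
        [("dense", 128), ("conv", 128), ("dense", 784)],
    ]
    model = fit_chain_model(elite)
    print(sample_architecture(model))  # e.g. a new candidate to seed a search
```

Sampled candidates like these could be used either directly as promising structures or as the initial population of a new structural search, which is the role the metamodel plays in the approach described above.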