With neural architecture search methods gaining ground on manually designed deep neural networks (all the more rapidly as model sophistication escalates), the research trend is shifting towards designing different, and often increasingly complex, neural architecture search spaces. In this context, algorithms that can efficiently explore these search spaces offer a significant improvement over currently used methods, which generally select the structural variation operator at random and hope for a performance gain. In this paper, we investigate the effect of different variation operators in a complex domain: that of multi-network heterogeneous neural models. These models have an extensive and complex search space of structures, as they require multiple sub-networks within the overall model in order to handle different output types. From that investigation, we extract a set of general guidelines whose application is not limited to this particular type of model, and which help determine the direction in which an architecture optimization method is most likely to find the largest improvement. To derive these guidelines, we characterize both the variation operators, according to their effect on the complexity and performance of the model, and the models themselves, relying on diverse metrics that estimate the quality of their constituent parts.