There is a growing interest in automated neural architecture search (NAS) methods, which are employed to routinely deliver high-quality neural network architectures for various challenging data sets while reducing the designer's effort. NAS methods utilizing multi-objective evolutionary algorithms are especially useful when the objective is not only to minimize the network error but also to minimize the number of parameters (weights) or the power consumption of the inference phase. We propose a multi-objective NAS method based on Cartesian genetic programming for evolving convolutional neural networks (CNNs). The method allows approximate operations to be used in CNNs to reduce the power consumption of a target hardware implementation. During the NAS process, a suitable CNN architecture is evolved together with approximate multipliers to deliver the best trade-offs among accuracy, network size, and power consumption. The most suitable approximate multipliers are automatically selected from a library of approximate multipliers. Evolved CNNs are compared with common human-created CNNs of similar complexity on the CIFAR-10 benchmark problem.