Neural network designers have achieved progressively higher accuracy by increasing model depth, introducing new layer types, and discovering new combinations of layers. A common element across many architectures is the distribution of the number of filters per layer. Neural network models follow a design pattern of increasing the number of filters in deeper layers, as seen in LeNet, VGG, ResNet, MobileNet, and even in automatically discovered architectures such as NASNet. It remains unknown whether this pyramidal distribution of filters is the best choice for different tasks and constraints. In this work we present a series of modifications to the distribution of filters in four popular neural network models and measure their effects on accuracy and resource consumption. Results show that, by applying this approach, some models improve in accuracy by up to 8.9% while reducing parameters by up to 54%.
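To make the idea of a filter distribution concrete, the sketch below builds small convolutional stacks with pyramidal, reversed, and roughly parameter-matched uniform filter distributions and compares their parameter counts. This is a minimal illustration assuming PyTorch; the four-layer stacks and the specific filter counts are hypothetical and are not the paper's models or configurations.

    import torch.nn as nn

    def build_conv_stack(filters, in_channels=3):
        """Stack of 3x3 conv layers with the given per-layer filter counts
        (illustrative only, not the paper's architectures)."""
        layers = []
        for out_channels in filters:
            layers += [
                nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ]
            in_channels = out_channels
        return nn.Sequential(*layers)

    def count_params(model):
        return sum(p.numel() for p in model.parameters())

    # Hypothetical filter distributions over four conv layers:
    pyramidal = [32, 64, 128, 256]    # the common "more filters deeper" pattern
    reversed_ = [256, 128, 64, 32]    # one possible alternative distribution
    uniform   = [120, 120, 120, 120]  # roughly parameter-matched constant width

    for name, dist in [("pyramidal", pyramidal),
                       ("reversed", reversed_),
                       ("uniform", uniform)]:
        model = build_conv_stack(dist)
        print(f"{name:10s} filters={dist} params={count_params(model):,}")

Changing only the list passed to build_conv_stack alters where the capacity sits in the network while keeping depth and layer types fixed, which is the kind of controlled modification the experiments vary.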