The automated machine learning (AutoML) field has become increasingly relevant in recent years. AutoML algorithms can build models without the need for expert knowledge, facilitating the adoption of machine learning techniques in industry. Neural Architecture Search (NAS) exploits deep learning techniques to autonomously produce neural network architectures whose performance rivals that of state-of-the-art models hand-crafted by AI experts. However, this approach requires significant computational resources and hardware investments, making it less appealing for real-world applications. This article presents the third version of Pareto-Optimal Progressive Neural Architecture Search (POPNASv3), a new sequential model-based optimization NAS algorithm targeting different hardware environments and multiple classification tasks. Our method is able to find competitive architectures within large search spaces, while keeping a flexible structure and data processing pipeline to adapt to different tasks. The algorithm employs Pareto optimality to reduce the number of architectures sampled during the search, drastically improving time efficiency without loss in accuracy. The experiments performed on image and time series classification datasets provide evidence that POPNASv3 can explore a large set of assorted operators and converge to optimal architectures suited to the type of data provided, under different scenarios.
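To make the Pareto-optimality step concrete: the idea is that, given candidate architectures scored on two objectives (e.g., predicted accuracy to maximize and predicted training time to minimize), only non-dominated candidates are kept for further training. The following is a minimal illustrative sketch of such a filter, not the authors' implementation; the `Candidate` class and field names are hypothetical.

```python
# Sketch of Pareto-front filtering over two objectives (hypothetical names,
# not the POPNASv3 codebase): keep candidates that no other candidate beats
# on both predicted accuracy (higher is better) and predicted time (lower
# is better).
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    accuracy: float  # predicted accuracy, to maximize
    time: float      # predicted training time (seconds), to minimize


def pareto_front(candidates):
    """Return the non-dominated candidates.

    A candidate is dominated if another candidate is at least as good on
    both objectives and strictly better on at least one.
    """
    front = []
    for c in candidates:
        dominated = any(
            (o.accuracy >= c.accuracy and o.time <= c.time)
            and (o.accuracy > c.accuracy or o.time < c.time)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front


if __name__ == "__main__":
    pool = [
        Candidate("A", 0.91, 120.0),
        Candidate("B", 0.93, 300.0),
        Candidate("C", 0.90, 400.0),  # dominated by both A and B
    ]
    for c in pareto_front(pool):
        print(c)  # only A and B survive the filter
```

Filtering the candidate pool this way is what lets a search train far fewer architectures per step while still retaining the best accuracy/time trade-offs.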