Hyper-parameter optimization is one of the most tedious yet crucial steps in training machine learning models. Numerous methods exist for this vital model-building stage, ranging from domain-specific manual tuning guidelines suggested by experts to general-purpose black-box optimization techniques. This paper proposes an agent-based collaborative technique for finding near-optimal values for an arbitrary set of hyper-parameters (or decision variables) in a machine learning model (or, more generally, a function optimization problem). The proposed method forms a hierarchical agent-based architecture that distributes the search operations across dimensions and employs a cooperative search procedure based on an adaptive width-based random sampling technique to locate the optima. The behavior of the presented model, particularly its sensitivity to changes in its design parameters, is investigated in both machine learning and global function optimization applications, and its performance is compared with that of two randomized tuning strategies commonly used in practice. According to the empirical results, the proposed model outperformed the compared methods on the classification, regression, and multi-dimensional function optimization tasks examined, notably in higher-dimensional settings and under limited on-device computational resources.