Hyper-parameter tuning is among the most critical stages in building machine learning solutions. This paper demonstrates how multi-agent systems can be used to develop a distributed technique for determining near-optimal values for an arbitrary set of hyper-parameters in a machine learning model. The proposed method employs a hierarchical agent-based architecture, formed in a distributed manner, for the cooperative search procedure of tuning hyper-parameter values. The presented generic model is used to develop a guided randomized agent-based tuning technique, and its behavior is investigated in both machine learning and global function optimization applications. According to the empirical results, the proposed model outperformed both of its underlying randomized tuning strategies in terms of classification error and number of function evaluations, notably in higher-dimensional problems.
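To make the idea of guided randomized tuning concrete, the following is a minimal sketch of one plausible interpretation: random candidates are sampled within the current search bounds, and the bounds are then contracted around the best point found so far. The function name, the contraction rule, and all parameters here are illustrative assumptions standing in for the paper's agent-based guidance mechanism, not the authors' actual algorithm.

```python
import random

def guided_random_search(objective, bounds, n_rounds=20,
                         samples_per_round=10, shrink=0.7):
    """Hypothetical sketch of guided randomized search.

    Each round draws uniform random candidates within the current bounds,
    then shrinks the bounds around the incumbent best point -- a simple
    stand-in for cooperative, agent-driven guidance.
    """
    best_x, best_val = None, float("inf")
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    for _ in range(n_rounds):
        for _ in range(samples_per_round):
            x = [random.uniform(l, h) for l, h in zip(lo, hi)]
            val = objective(x)
            if val < best_val:
                best_x, best_val = x, val
        # contract the search region around the best point found so far
        for i in range(len(bounds)):
            width = (hi[i] - lo[i]) * shrink
            lo[i] = max(bounds[i][0], best_x[i] - width / 2)
            hi[i] = min(bounds[i][1], best_x[i] + width / 2)
    return best_x, best_val

# toy usage: minimize the sphere function (a common global-optimization
# benchmark) in 3 dimensions
random.seed(0)
x, v = guided_random_search(lambda x: sum(xi ** 2 for xi in x),
                            [(-5.0, 5.0)] * 3)
```

The same skeleton applies to hyper-parameter tuning by letting `objective` train a model and return its validation error; the distributed, hierarchical coordination of multiple such searchers is what the paper's architecture adds on top.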