Machine Learning (ML) algorithms have been increasingly applied to problems from several different areas. Despite their growing popularity, their predictive performance is usually affected by the values assigned to their hyperparameters (HPs). As a consequence, researchers and practitioners face the challenge of how to set these values. Many users have limited knowledge about ML algorithms and the effect of their HP values and, therefore, do not take advantage of suitable settings. They usually define the HP values by trial and error, which is very subjective, not guaranteed to find good values, and dependent on the user's experience. Tuning techniques search for HP values able to maximize the predictive performance of induced models for a given dataset, but they have the drawback of a high computational cost. Thus, many practitioners instead use default values suggested by the algorithm's developers or by the tools implementing the algorithm. Although default values usually result in models with acceptable predictive performance, different implementations of the same algorithm can suggest distinct default values. To maintain a balance between tuning and using default values, we propose a strategy to generate new optimized default values. Our approach is grounded in a small set of optimized values able to obtain predictive performance better than the default settings provided by popular tools. After performing a large experiment and a careful analysis of the results, we conclude that our approach delivers better default values. Moreover, it leads to solutions competitive with tuned values while being easier to use and having a lower computational cost. We also extracted simple rules to guide practitioners in deciding whether to use our new methodology or an HP tuning approach.