Predictions combination, a combination-model approach that adjusts the output space, has flourished in recent years in both research and competitions. The simple average is intuitive and robust and is often used as a benchmark in predictions combination. However, poorly performing sub-models can reduce overall accuracy because the sub-models are not selected in advance. Although some studies select the top sub-models for combination after ranking them by mean square error, the covariance among these sub-models prevents this approach from yielding much benefit. In this paper, we propose accounting for the diversity of sub-models in predictions combination, using negative correlation learning to select the most diverse model subset from the model pool. Three publicly available datasets are used to evaluate the approach. The experimental results not only demonstrate the diversity of the sub-models in the predictions combination incorporating negative correlation learning, but also yield predictions whose accuracy far exceeds that of the simple-average benchmark and several weighted-average methods. Furthermore, by adjusting the penalty strength for negative correlation, the predictions combination also outperforms the best sub-model. The value of this paper lies in its ease of use and effectiveness, allowing the predictions combination to embrace both diversity and accuracy.
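The following is a minimal sketch, not the paper's exact procedure, of how a negative-correlation-learning penalty could be used as a selection criterion before simple-average combination: each sub-model in the pool is scored by its mean square error plus a penalty measuring its correlation with the rest of the ensemble, and the lowest-scoring subset is averaged. The subset size `k`, the penalty strength `lam`, and all function names are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def ncl_penalty(preds, i):
    """NCL-style penalty of sub-model i: mean of (f_i - f_bar) * sum_{j != i}(f_j - f_bar).
    More negative values indicate predictions that deviate more from the ensemble mean."""
    f_bar = preds.mean(axis=0)
    others = preds.sum(axis=0) - preds[i] - (preds.shape[0] - 1) * f_bar
    return np.mean((preds[i] - f_bar) * others)

def select_and_combine(preds, y_true, k=3, lam=0.5):
    """Illustrative selection rule: rank sub-models by MSE plus lam times the NCL
    penalty, keep the k best, and combine the survivors with a simple average."""
    n = preds.shape[0]
    mse = np.array([np.mean((preds[i] - y_true) ** 2) for i in range(n)])
    pen = np.array([ncl_penalty(preds, i) for i in range(n)])
    score = mse + lam * pen  # lower is better: accurate and negatively correlated
    chosen = np.argsort(score)[:k]
    return preds[chosen].mean(axis=0), chosen

if __name__ == "__main__":
    # Toy model pool: noisy copies of the target with different noise levels.
    rng = np.random.default_rng(0)
    y = rng.normal(size=200)
    pool = np.stack([y + rng.normal(scale=s, size=200) for s in (0.2, 0.3, 0.4, 0.8, 1.0)])
    combined, kept = select_and_combine(pool, y)
    print("kept sub-models:", kept, "combined MSE:", np.mean((combined - y) ** 2))
```

In this sketch, increasing `lam` places more weight on diversity relative to individual accuracy, mirroring the role of the penalty strength discussed in the abstract.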