Consider an ensemble of $k$ individual classifiers whose accuracies are known. Upon receiving a test point, each of the classifiers outputs a predicted label and a confidence in its prediction for this particular test point. In this paper, we address the question of whether we can determine the accuracy of the ensemble. Surprisingly, even when classifiers are combined in the statistically optimal way in this setting, the accuracy of the resulting ensemble classifier cannot be computed from the accuracies of the individual classifiers, as it can be in the standard setting of confidence-weighted majority voting. We prove tight upper and lower bounds on the ensemble accuracy. We explicitly construct the individual classifiers that attain the upper and lower bounds: specialists and generalists. Our theoretical results have very practical consequences: (1) If we use ensemble methods and have the choice to construct our individual (independent) classifiers from scratch, then we should aim for specialist classifiers rather than generalists. (2) Our bounds can be used to determine the minimum number of classifiers required to achieve a desired ensemble accuracy. Finally, we improve our bounds by considering the mutual information between the true label and the individual classifier's output.
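As a point of reference, the standard confidence-weighted majority voting mentioned above can be sketched as follows. This is a minimal illustration, not the paper's construction: it assumes independent classifiers, each reporting a label together with a confidence in $(0,1)$, and weights each vote by the log-odds of that confidence (the classical optimal weighting under independence); the function name and data layout are my own.

```python
import math

def confidence_weighted_vote(predictions):
    """Combine (label, confidence) pairs by confidence-weighted majority voting.

    predictions: list of (label, confidence) tuples, one per classifier.
    Each vote is weighted by log(c / (1 - c)), the log-odds of the reported
    confidence c; the label with the largest total weight wins.
    """
    scores = {}
    for label, conf in predictions:
        # Clip confidences away from 0 and 1 to avoid infinite weights.
        conf = min(max(conf, 1e-9), 1.0 - 1e-9)
        scores[label] = scores.get(label, 0.0) + math.log(conf / (1.0 - conf))
    return max(scores, key=scores.get)
```

For example, a single highly confident classifier can outvote two mildly confident ones: `confidence_weighted_vote([("a", 0.55), ("a", 0.55), ("b", 0.95)])` returns `"b"`, since $\log(19) > 2\log(0.55/0.45)$.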