Adversarial training based on the maximum classifier discrepancy between two classifiers has achieved great success in unsupervised domain adaptation for image classification. The approach adopts a two-classifier structure; although this design is simple and intuitive, the learned classification boundary may not represent the data properties of the new domain well. In this paper, we propose extending the structure to multiple classifiers to further boost performance. To this end, we introduce a straightforward way to add more classifiers: following the principle that the classifiers should differ from one another, we construct a discrepancy loss function over multiple classifiers. This loss construction makes it possible to add any number of classifiers to the original framework. The proposed approach is validated through extensive experimental evaluation. We show that, on average, a three-classifier structure yields the best trade-off between accuracy and efficiency, and that, with minimal extra computational cost, the proposed approach significantly improves on the original algorithm.
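The abstract does not spell out the discrepancy loss itself. A minimal sketch, assuming the L1 discrepancy between softmax outputs used in the original two-classifier framework is summed over all classifier pairs (the function name and NumPy formulation here are illustrative, not the paper's code):

```python
import numpy as np

def multi_classifier_discrepancy(probs):
    """Hypothetical multi-classifier discrepancy loss.

    probs: list of (batch, num_classes) softmax outputs, one array
    per classifier. Returns the sum of mean absolute differences
    over all unordered classifier pairs, so the two-classifier case
    reduces to the original single-pair discrepancy.
    """
    loss = 0.0
    n = len(probs)
    for i in range(n):
        for j in range(i + 1, n):
            loss += np.mean(np.abs(probs[i] - probs[j]))
    return loss
```

With this construction, adding a classifier only adds more pairwise terms, which is one way the framework could accommodate an arbitrary number of classifiers.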