Recent research on unsupervised domain adaptation (UDA) has demonstrated that end-to-end ensemble learning frameworks are a compelling option for UDA tasks. Nevertheless, these end-to-end ensemble learning methods often lack flexibility, as any modification to the ensemble requires retraining the entire framework. To address this problem, we propose a flexible ensemble-distillation framework for semantic-segmentation-based UDA that allows an arbitrary composition of ensemble members while maintaining superior performance. To achieve such flexibility, our framework is designed to be robust against output inconsistency and performance variation among the members of the ensemble. To examine the effectiveness and robustness of our method, we perform an extensive set of experiments on both the GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes benchmarks to quantitatively measure the improvements achievable by our method. We further provide detailed analyses to validate that our design choices are practical and beneficial. The experimental evidence shows that the proposed method indeed offers superior performance, robustness, and flexibility in semantic-segmentation-based UDA tasks compared with contemporary baseline methods.
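To make the ensemble-distillation idea concrete, the following is a minimal sketch of one possible fusion step: per-pixel softmax maps from an arbitrary set of ensemble members are averaged into pseudo-labels for distilling a student network. This is an illustrative assumption, not the paper's exact fusion rule; the function name, the simple averaging, and the confidence threshold are all hypothetical.

```python
import numpy as np

def ensemble_pseudo_labels(member_probs, conf_threshold=0.9):
    """Fuse per-pixel class probabilities from an arbitrary set of
    ensemble members into pseudo-labels for distilling a student.

    member_probs: list of (H, W, C) softmax outputs, one per member.
    Returns (labels, mask): argmax labels of shape (H, W) and a boolean
    mask marking pixels whose fused confidence exceeds the threshold.
    """
    # Simple average over members; members can be added or removed
    # freely, so the fusion step itself never needs retraining.
    fused = np.mean(np.stack(member_probs, axis=0), axis=0)
    labels = fused.argmax(axis=-1)
    confidence = fused.max(axis=-1)
    mask = confidence >= conf_threshold
    return labels, mask

# Toy example: two members, a 2x2 "image", 3 semantic classes.
a = np.array([[[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]],
              [[0.3, 0.3, 0.4], [0.1, 0.1, 0.8]]])
b = np.array([[[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
              [[0.4, 0.3, 0.3], [0.2, 0.1, 0.7]]])
labels, mask = ensemble_pseudo_labels([a, b], conf_threshold=0.6)
```

Masking out low-confidence pixels is one common way to tolerate output inconsistency among members: pixels where the fused prediction is uncertain simply contribute no distillation signal.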