Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, evaluate only on a single task, use proprietary datasets, or compare against weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shift against recent neural approaches, and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state of the art for sentiment analysis, it does not consistently perform best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.
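For readers unfamiliar with the classic baseline the abstract refers to, the following is a minimal sketch of tri-training (Zhou & Li, 2005): three classifiers are trained on bootstrap samples of the labeled data, and each is retrained on unlabeled examples that the other two agree on. This is a simplified illustration, not the paper's implementation: it omits the error-rate bookkeeping of the full algorithm, and it assumes NumPy arrays and scikit-learn-style classifiers with fit/predict; the function names tri_train and predict_majority are hypothetical.

import numpy as np
from sklearn.base import clone

def tri_train(base_clf, X_labeled, y_labeled, X_unlabeled, max_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_labeled)
    # Initialize three classifiers on bootstrap samples of the labeled data.
    clfs = []
    for _ in range(3):
        idx = rng.integers(0, n, size=n)
        clfs.append(clone(base_clf).fit(X_labeled[idx], y_labeled[idx]))

    for _ in range(max_iter):
        changed = False
        preds = [clf.predict(X_unlabeled) for clf in clfs]
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            # Pseudo-label the examples on which the other two classifiers agree,
            # and retrain classifier i on labeled + pseudo-labeled data.
            agree = preds[j] == preds[k]
            if not agree.any():
                continue
            X_new = np.concatenate([X_labeled, X_unlabeled[agree]])
            y_new = np.concatenate([y_labeled, preds[j][agree]])
            clfs[i] = clone(base_clf).fit(X_new, y_new)
            changed = True
        if not changed:
            break
    return clfs

def predict_majority(clfs, X):
    # Final prediction: majority vote over the three classifiers.
    v = np.stack([clf.predict(X) for clf in clfs])
    return np.where(v[0] == v[1], v[0], np.where(v[1] == v[2], v[1], v[0]))

The cost that motivates the multi-task variant is visible in the sketch: classic tri-training keeps three full model copies and repeatedly retrains each one on a growing pseudo-labeled set, which is what the proposed method reduces by sharing parameters across the three models.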