Thanks to their ability to learn flexible data-driven losses, Generative Adversarial Networks (GANs) are an integral part of many semi- and weakly-supervised methods for medical image segmentation. GANs jointly optimise a generator and an adversarial discriminator on a set of training data. After training is complete, the discriminator is usually discarded, and only the generator is used for inference. But should we discard discriminators? In this work, we argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and \textit{correct} segmentation mistakes. First, we identify key challenges and suggest possible solutions to make discriminators re-usable at inference. Then, we show that we can combine discriminators with image reconstruction costs (via decoders) to endow a causal perspective to test-time training and further improve the model. Our method is simple and improves the test-time performance of pre-trained GANs. Moreover, we show that it is compatible with standard post-processing techniques and it has the potential to be used for Online Continual Learning. With our work, we open new research avenues for re-using adversarial discriminators at inference. Our code is available at https://vios-s.github.io/adversarial-test-time-training.