Thanks to their ability to learn flexible data-driven losses, Generative Adversarial Networks (GANs) are an integral part of many semi- and weakly-supervised methods for medical image segmentation. GANs jointly optimise a generator and an adversarial discriminator on a set of training data. After training has completed, the discriminator is usually discarded and only the generator is used for inference. But should we discard discriminators? In this work, we argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes. First, we identify key challenges and suggest possible solutions to make discriminators re-usable at inference. Then, we show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model. Our method is simple and improves the test-time performance of pre-trained GANs. Moreover, we show that it is compatible with standard post-processing techniques and has the potential to be used for Online Continual Learning. With our work, we open new research avenues for re-using adversarial discriminators at inference.
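The core idea of re-using a frozen discriminator at inference can be illustrated with a minimal sketch: the discriminator's score flags implausible predictions, and gradient ascent on that score with respect to the predicted mask "corrects" them. The toy logistic discriminator, the template, and all function names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def discriminator_score(mask, w, b=0.0):
    # Toy frozen discriminator: logistic score of how "plausible" a mask looks.
    # (A real discriminator would be a trained CNN; this stands in for it.)
    z = float(np.dot(w, mask)) + b
    return 1.0 / (1.0 + np.exp(-z))

def refine_mask(mask, w, steps=50, lr=0.5):
    # Test-time correction: gradient ascent on the discriminator score
    # with respect to the mask itself (the generator's weights stay frozen).
    m = mask.copy()
    for _ in range(steps):
        s = discriminator_score(m, w)
        grad = s * (1.0 - s) * w          # analytic gradient of sigmoid(w . m)
        m = np.clip(m + lr * grad, 0.0, 1.0)
    return m

# A "plausible shape" template the toy discriminator was trained to favour.
template = np.array([0., 1., 1., 1., 0.])
w = 2.0 * template - 1.0                  # rewards foreground on the template

# A flawed prediction: one foreground pixel missing, one spurious pixel added.
pred = np.array([1., 1., 0., 1., 0.])

score_before = discriminator_score(pred, w)   # low score flags a likely mistake
refined = refine_mask(pred, w)
score_after = discriminator_score(refined, w)
```

Here a low discriminator score acts as a mistake detector, and the refinement loop pushes the prediction toward the learned notion of a plausible segmentation, which is the role the abstract assigns to the re-used discriminator.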