We introduce a joint diffusion model that simultaneously learns meaningful internal representations suited to both generative and predictive tasks. Joint machine learning models that can both synthesize and classify data often deliver uneven performance across these tasks or are unstable to train. In this work, we start from a set of empirical observations indicating that the internal representations built by contemporary deep diffusion-based generative models are useful in both generative and predictive settings. We then introduce an extension of the vanilla diffusion model with a classifier, which enables stable joint training with parametrization shared between the two objectives. The resulting joint diffusion model offers superior performance across a range of tasks, including generative modeling, semi-supervised classification, and domain adaptation.
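To make the shared-parametrization idea concrete, below is a minimal PyTorch sketch of the joint objective: a shared encoder feeds both a denoising decoder (generative loss) and a classifier head (predictive loss), so gradients from both objectives flow into the same parameters. The toy convolutional encoder/decoder (standing in for the usual UNet), the module sizes, the loss weight alpha, and the omission of timestep conditioning are all simplifying assumptions for illustration, not the authors' exact architecture.

# Minimal sketch of joint diffusion training: a standard DDPM noising step
# plus a classifier head on the shared encoder features. All names and
# sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000  # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class JointDiffusion(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared encoder: plays the role of the UNet's downsampling path.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Decoder: predicts the noise added at step t (generative task).
        # Timestep conditioning is omitted here for brevity.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        # Classifier head on pooled encoder features (predictive task).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x_t):
        h = self.encoder(x_t)  # shared representation for both objectives
        return self.decoder(h), self.classifier(h)

def joint_loss(model, x0, labels, alpha=1.0):
    # Standard DDPM forward process: sample a timestep and noise the input.
    t = torch.randint(0, T, (x0.size(0),))
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    eps_pred, logits = model(x_t)
    # Denoising loss + weighted classification loss; the shared encoder
    # receives gradients from both terms.
    return F.mse_loss(eps_pred, noise) + alpha * F.cross_entropy(logits, labels)

model = JointDiffusion()
x0 = torch.randn(8, 1, 28, 28)           # e.g. an MNIST-sized batch
labels = torch.randint(0, 10, (8,))
loss = joint_loss(model, x0, labels)
loss.backward()

In this sketch the single weight alpha balances the two objectives; the key design choice is that the classifier consumes the same encoder features used for denoising, rather than training a separate predictive network.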