Learning data representations that capture task-related features but are invariant to nuisance variations remains a key challenge in machine learning. We introduce an automated Bayesian inference framework, called AutoBayes, that explores different graphical models linking classifier, encoder, decoder, estimator, and adversarial network blocks to optimize nuisance-invariant machine learning pipelines. AutoBayes also enables learning disentangled representations, where the latent variable is split into multiple pieces to impose various relationships with the nuisance variation and task labels. We benchmark the framework on several public datasets and analyze its capability for subject-transfer learning with and without variational modeling and adversarial training. We demonstrate a significant performance improvement with ensemble learning across the explored graphical models.
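To make the adversarial, nuisance-invariant training idea concrete, here is a minimal NumPy sketch, not the paper's implementation: a linear encoder feeds a task classifier and a nuisance adversary, and the encoder's gradient subtracts the adversary's gradient (a gradient-reversal-style update), so the latent representation stays predictive of the task label while becoming less predictive of the nuisance label. The toy data, dimensions, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def ce(p, t):
    # Mean binary cross-entropy with a small constant for numerical safety.
    return -np.mean(t * np.log(p + 1e-9) + (1 - t) * np.log(1 - p + 1e-9))

# Toy data (illustrative): feature 0 carries the task label y,
# feature 1 carries the nuisance label s (e.g. a subject ID).
n, d, k = 256, 3, 2
y = rng.integers(0, 2, n)
s = rng.integers(0, 2, n)
X = rng.normal(0.0, 0.3, (n, d))
X[:, 0] += y          # task-relevant direction
X[:, 1] += s          # nuisance direction

# Parameters: linear encoder W, task head v_y, adversary head v_s.
W = rng.normal(0.0, 0.1, (k, d))
v_y = rng.normal(0.0, 0.1, k)
v_s = rng.normal(0.0, 0.1, k)

lr, lam = 0.1, 0.1    # learning rate and adversarial weight (assumed values)

for step in range(300):
    Z = X @ W.T                    # encode: (n, k)
    p_y = sigmoid(Z @ v_y)         # task prediction
    p_s = sigmoid(Z @ v_s)         # adversary's nuisance prediction

    if step == 0:
        first_task_loss = ce(p_y, y)

    # Per-sample logit gradients; each head minimizes its own cross-entropy.
    g_y = (p_y - y) / n
    g_s = (p_s - s) / n
    grad_vy = Z.T @ g_y
    grad_vs = Z.T @ g_s

    # Encoder update: minimize task loss while MAXIMIZING the adversary's
    # loss (the reversed sign on the lam term implements the adversarial game).
    dZ = np.outer(g_y, v_y) - lam * np.outer(g_s, v_s)
    grad_W = dZ.T @ X

    W -= lr * grad_W
    v_y -= lr * grad_vy
    v_s -= lr * grad_vs

final_task_loss = ce(sigmoid((X @ W.T) @ v_y), y)
print("task loss:", first_task_loss, "->", final_task_loss)
```

In a full AutoBayes-style pipeline this two-player objective would be one building block; the framework additionally searches over graphical models that decide which of the encoder, decoder, estimator, and adversary blocks are wired together, and how the latent variable is split between task-relevant and nuisance-related parts.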