Sampling from an unnormalized target distribution is an essential problem with many applications in probabilistic inference. Stein Variational Gradient Descent (SVGD) has been shown to be a powerful method that iteratively updates a set of particles to approximate the distribution of interest. Moreover, an asymptotic analysis shows that SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of that problem. A natural question then arises: "Can we derive a probabilistic version of multi-objective optimization?". To answer this question, we propose Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), which enables sampling from multiple unnormalized target distributions. Specifically, MT-SGD constructs a flow of intermediate distributions that gradually orients toward the multiple target distributions, allowing the sampled particles to move into the joint high-likelihood region of those targets. Interestingly, the asymptotic analysis shows that our approach reduces exactly to the multiple-gradient descent algorithm for multi-objective optimization, as expected. Finally, we conduct comprehensive experiments to demonstrate the merit of our approach for multi-task learning.
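To make the particle-update mechanism concrete, the following is a minimal sketch of a single SVGD step with an RBF kernel, which MT-SGD generalizes to multiple targets. This is an illustrative implementation, not the paper's code; the function names (`rbf_kernel`, `svgd_step`), the fixed bandwidth `h`, and the toy Gaussian target are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix K[j, i] = exp(-||x_j - x_i||^2 / h)
    and its gradient with respect to the first argument x_j."""
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d): x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)   # (n, n)
    grad_K = -2.0 / h * diff * K[:, :, None]      # grad_{x_j} K[j, i]
    return K, grad_K

def svgd_step(X, score, h=1.0, step=0.1):
    """One SVGD update: phi(x_i) = (1/n) sum_j [ K[j,i] * score(x_j)
    + grad_{x_j} K[j,i] ], i.e. a kernel-smoothed score (attraction)
    plus a kernel-gradient term (repulsion between particles)."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    phi = (K.T @ score(X) + grad_K.sum(axis=0)) / n
    return X + step * phi
```

Iterating `svgd_step` drives the particle set toward the target while the repulsion term keeps it spread out, approximating the target distribution rather than collapsing onto its mode.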