Expectation-maximization (EM) is a popular and well-established method for image reconstruction in positron emission tomography (PET), but it often suffers from slow convergence. Ordered subset EM (OSEM) is an effective reconstruction algorithm that provides significant acceleration during initial iterations, but it has been observed to enter a limit cycle. In this work, we investigate two classes of algorithms for accelerating OSEM based on variance reduction for penalised PET reconstruction. The first is a stochastic variance reduced EM algorithm, termed SVREM, which extends classical EM to the stochastic context by combining OSEM with insights from variance reduction techniques for gradient descent. The second views OSEM as preconditioned stochastic gradient ascent and applies variance reduction techniques, namely SAGA and SVRG, to estimate the update direction. We present several numerical experiments to illustrate the efficiency and accuracy of the approaches. The numerical results show that these approaches significantly outperform existing OSEM-type methods for penalised PET reconstruction and hold great potential.
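To make the "OSEM as preconditioned stochastic gradient ascent" view concrete, the sketch below illustrates an SVRG-style variance-reduced update combined with the usual EM diagonal preconditioner on the unpenalised Poisson log-likelihood. This is an illustration under assumed conventions, not the paper's exact algorithm: the names A_subsets, y_subsets, the uniform subset sampling, the epoch length, and the unit step size are all placeholders.

```python
# Minimal sketch (assumptions noted above): SVRG-style variance reduction
# applied to the preconditioned gradient-ascent reading of OSEM for the
# Poisson log-likelihood L(x) = sum_i [ y_i log (Ax)_i - (Ax)_i ].
import numpy as np

def subset_grad(A_s, y_s, x, eps=1e-12):
    """Gradient of the Poisson log-likelihood restricted to one data subset."""
    Ax = A_s @ x
    return A_s.T @ (y_s / (Ax + eps)) - A_s.T @ np.ones_like(y_s)

def svrg_em(A_subsets, y_subsets, x0, epochs=10, eps=1e-12):
    n_sub = len(A_subsets)
    x = x0.copy()
    # Sensitivity image A^T 1 over all data, used in the EM preconditioner.
    sens = sum(A_s.T @ np.ones(A_s.shape[0]) for A_s in A_subsets)
    for _ in range(epochs):
        # SVRG snapshot: full gradient evaluated once per epoch at the anchor point.
        x_tilde = x.copy()
        full_grad = sum(subset_grad(A_s, y_s, x_tilde, eps)
                        for A_s, y_s in zip(A_subsets, y_subsets))
        for _ in range(n_sub):
            s = np.random.randint(n_sub)
            # Variance-reduced direction: scaled subset gradient corrected by the snapshot.
            g = (n_sub * (subset_grad(A_subsets[s], y_subsets[s], x, eps)
                          - subset_grad(A_subsets[s], y_subsets[s], x_tilde, eps))
                 + full_grad)
            # EM-type diagonal preconditioner D = diag(x / sensitivity), non-negativity kept.
            x = np.maximum(x + (x / (sens + eps)) * g, 0.0)
    return x
```

With a single subset (so the correction term vanishes) and the same preconditioner, one step of this scheme reduces to the classical MLEM update; a penalised variant would add the gradient of the prior to the search direction, which is omitted here for brevity.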