Learning and evaluating energy-based latent variable models (EBLVMs) without any structural assumptions is highly challenging, because the true posterior and the partition function of such models are generally intractable. This paper presents variational estimates of the score function and of its gradient with respect to the model parameters in a general EBLVM, referred to as VaES and VaGES respectively. The variational posterior is trained to minimize a certain divergence from the true model posterior, and the bias of both estimates can theoretically be bounded by this divergence. Under a minimal model assumption, VaES and VaGES can be applied to kernelized Stein discrepancy (KSD)- and score matching (SM)-based methods for learning EBLVMs. In addition, VaES can be used to estimate the exact Fisher divergence between the data distribution and a general EBLVM.
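To make the idea behind VaES concrete, the following is a minimal sketch on a hypothetical one-dimensional toy EBLVM (not the paper's implementation). The key identity it illustrates: since the partition function does not depend on the visible variable v, the marginal score satisfies ∇_v log p(v) = −E_{p(h|v)}[∇_v E(v, h)], and VaES replaces the intractable posterior p(h|v) with a variational posterior q(h|v). All names and the toy energy below are illustrative assumptions.

```python
import numpy as np

# Toy energy (assumption for illustration): E(v, h) = (v - h)^2/2 + h^2/2.
# For this choice the true posterior is Gaussian, p(h|v) = N(h; v/2, 1/2),
# and the exact marginal score works out to d/dv log p(v) = -v/2.

def grad_v_energy(v, h):
    """dE/dv for E(v, h) = (v - h)^2/2 + h^2/2."""
    return v - h

def vaes_score(v, q_sample, n_samples=10000, rng=None):
    """VaES-style estimate of the marginal score:
    draw h ~ q(h|v) and average -dE/dv over the samples."""
    rng = np.random.default_rng(rng)
    h = q_sample(v, n_samples, rng)
    return -grad_v_energy(v, h).mean()

# Here we plug in the exact posterior as q(h|v), so the estimate should
# match the analytic score -v/2 up to Monte Carlo error; with a learned
# variational q, the bias is bounded by its divergence from p(h|v).
def exact_posterior_sample(v, n, rng):
    return rng.normal(loc=v / 2.0, scale=np.sqrt(0.5), size=n)

v = 1.6
est = vaes_score(v, exact_posterior_sample, n_samples=200000, rng=0)
print(est)  # close to the exact score -v/2 = -0.8
```

The same averaged-gradient construction, taken with respect to the model parameters instead of v, is the idea behind VaGES.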