This paper proposes a new type of generative model that is able to quickly learn a latent representation without an encoder. This is achieved using empirical Bayes to calculate the expectation of the posterior, which is implemented by initialising a latent vector with zeros, then using the gradient of the log-likelihood of the data with respect to this zero vector as new latent points. The approach has similar characteristics to autoencoders, but with a simpler architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. This also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages across datasets. The experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters.
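To make the gradient-as-latent mechanism concrete, here is a minimal PyTorch sketch of one training step. It is an illustration under stated assumptions, not the paper's released implementation: the decoder architecture, the `gon_step` helper, and the Bernoulli (binary cross-entropy) likelihood are all choices made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical decoder mapping a latent vector to a flattened image;
# the architecture here is illustrative, not the paper's.
latent_dim, data_dim = 32, 784
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ELU(),
    nn.Linear(256, data_dim), nn.Sigmoid(),
)
optimiser = torch.optim.Adam(decoder.parameters(), lr=2e-4)

def gon_step(x):
    """One training step: infer latents from a gradient at the origin,
    then update the decoder to reconstruct from those latents."""
    # 1. Initialise the latent vector with zeros (the origin).
    z0 = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    # 2. Negative log-likelihood of the data, evaluated at the origin
    #    (binary cross-entropy corresponds to a Bernoulli likelihood).
    inner_loss = F.binary_cross_entropy(decoder(z0), x, reduction='sum')
    # 3. The negative gradient of this loss w.r.t. the zero vector
    #    serves as the latent point.
    z = -torch.autograd.grad(inner_loss, z0, create_graph=True)[0]
    # 4. Reconstruct from z and update the decoder parameters only;
    #    no separate encoder network is needed.
    outer_loss = F.binary_cross_entropy(decoder(z), x, reduction='sum')
    optimiser.zero_grad()
    outer_loss.backward()
    optimiser.step()
    return outer_loss.item()

# Usage: x is a batch of flattened images with values in [0, 1].
x = torch.rand(16, data_dim)
print(gon_step(x))
```

The `create_graph=True` flag is what lets the outer reconstruction loss backpropagate through the inner gradient computation, so a single decoder network plays both roles and the overall model needs roughly half the parameters of an autoencoder.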