Several approximate inference methods have been proposed for deep discrete latent variable models. However, non-parametric methods, which have previously been employed successfully for classical sparse coding models, remain largely unexplored in the context of deep models. We propose a non-parametric iterative algorithm for learning discrete latent representations in such deep models. Additionally, to learn scale-invariant discrete features, we introduce local data scaling variables. Lastly, to encourage sparsity in the representations, we place a Beta-Bernoulli process prior on the latent factors. We evaluate our sparse coding model coupled with different likelihood models, across datasets with varying characteristics, and compare our results to current amortized approximate inference methods.
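As a minimal illustration (not the paper's implementation), a Beta-Bernoulli process prior is commonly approximated with a finite number of factors K: each factor's activation probability is drawn from a Beta(alpha/K, 1) distribution, and binary indicators are then drawn per data point. Small alpha/K pushes most probabilities toward zero, which is what induces sparsity. The function name and parameterization below are illustrative assumptions:

```python
import numpy as np

def sample_beta_bernoulli_mask(n, K, alpha, seed=None):
    """Finite approximation to a Beta-Bernoulli process prior.

    pi_k ~ Beta(alpha/K, 1)   (per-factor activation probability)
    z_nk ~ Bernoulli(pi_k)    (binary indicator: factor k active for item n)

    Returns an (n, K) binary mask; small alpha/K yields a sparse mask.
    """
    rng = np.random.default_rng(seed)
    pi = rng.beta(alpha / K, 1.0, size=K)
    return rng.binomial(1, pi, size=(n, K))
```

For example, with alpha = 5 and K = 50 each factor is active with expected probability (alpha/K) / (alpha/K + 1) ≈ 0.09, so most entries of the mask are zero.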