Discrete latent variables are considered important for real-world data, which has motivated research on Variational Autoencoders (VAEs) with discrete latents. However, standard VAE training is not possible in this case, which has motivated various strategies for manipulating discrete distributions so that discrete VAEs can be trained similarly to conventional ones. Here we ask whether it is also possible to keep the discrete nature of the latents fully intact by applying a direct discrete optimization for the encoding model. The approach consequently diverges strongly from standard VAE training by sidestepping the sampling approximation, the reparameterization trick, and amortization. Discrete optimization is realized in a variational setting using truncated posteriors in conjunction with evolutionary algorithms. For VAEs with binary latents, we show (A) how such a discrete variational method ties into gradient ascent for the network weights, and (B) how the decoder is used to select latent states for training. Conventional amortized training is more efficient and applicable to large neural networks. However, using smaller networks, we find direct discrete optimization to scale efficiently to hundreds of latents. More importantly, we find the effectiveness of direct optimization to be highly competitive in `zero-shot' learning. In contrast to large supervised networks, the VAEs investigated here can, e.g., denoise a single image without prior training on clean data and/or on large image datasets. More generally, the studied approach shows that training of VAEs is indeed possible without sampling-based approximation and reparameterization, which may be of interest for the analysis of VAE training in general. For `zero-shot' settings, a direct optimization furthermore makes VAEs competitive where they have previously been outperformed by non-generative approaches.
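The training loop described above can be illustrated with a minimal sketch: per data point, a small set of binary latent states (a truncated posterior) is varied with evolutionary operators and re-selected via the decoder's joint log-probability, while the decoder weights follow gradient ascent on the truncated free energy. This is only an illustration under simplifying assumptions (Gaussian decoder with fixed noise, factored Bernoulli prior, plain bit-flip mutation); all names and hyperparameters (`evolve`, `log_joint`, `K`, `pi`, etc.) are ours, not the paper's reference implementation.

```python
# Sketch: direct discrete optimization for a binary-latent VAE.
# No sampling approximation, no reparameterization trick, no amortized encoder.
import math
import torch

H, D, K = 16, 32, 8         # latent dim, observed dim, truncation-set size (assumed)
pi = 0.2                    # assumed Bernoulli prior p(s_h = 1)
sigma2 = 0.1                # assumed fixed observation noise variance
dec = torch.nn.Sequential(torch.nn.Linear(H, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, D))          # decoder mean network
opt = torch.optim.Adam(dec.parameters(), lr=1e-3)

def log_joint(s, x):
    """log p(x, s) for binary states s of shape (N, K, H) and data x of shape (N, D)."""
    prior = (s * math.log(pi) + (1 - s) * math.log(1 - pi)).sum(-1)
    mu = dec(s)                                             # (N, K, D)
    ll = -0.5 * ((x[:, None, :] - mu) ** 2).sum(-1) / sigma2
    return prior + ll

def evolve(S, x, n_children=4, p_flip=1.0 / H):
    """One evolutionary step: bit-flip mutations of the parent states,
    then keep the K fittest states per data point (decoder selects states)."""
    parents = S.repeat(1, n_children, 1)                    # (N, K*C, H)
    flips = (torch.rand_like(parents) < p_flip).float()
    children = (parents + flips) % 2                        # XOR via mod 2
    pool = torch.cat([S, children], dim=1)
    with torch.no_grad():
        fitness = log_joint(pool, x)                        # decoder as fitness
    idx = fitness.topk(K, dim=1).indices
    return torch.gather(pool, 1, idx[..., None].expand(-1, -1, H))

x = torch.randn(128, D)                     # toy data stand-in
S = (torch.rand(128, K, H) < pi).float()    # initial truncated sets

for step in range(200):
    S = evolve(S, x)                                        # discrete "E-step"
    # Truncated free energy: sum_n log sum_{s in K_n} p(x_n, s); maximize it.
    loss = -torch.logsumexp(log_joint(S, x), dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()            # gradient "M-step"
```

Note the division of labor the abstract alludes to: the evolutionary selection updates the discrete variational states with the decoder itself acting as the selection criterion, and standard gradient ascent then updates the network weights given those states.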