Deep generative models yield an implicit estimator of the unknown distribution, or density function, of the observations. This paper investigates some statistical properties of the implicit density estimator pursued by VAE-type methods within a nonparametric density estimation framework. More specifically, we obtain convergence rates of the VAE-type density estimator under the assumption that the underlying true density function belongs to a locally H\"{o}lder class. Remarkably, a near minimax optimal rate with respect to the Hellinger metric can be achieved by the simplest network architecture: a shallow generative model with a one-dimensional latent variable. The proof of the main theorem relies on the well-known result from the nonparametric Bayesian literature that a smooth density with a suitably decaying tail can be efficiently approximated by a finite mixture of normal distributions. We also discuss an alternative proof, which offers important insights and suggests a potential extension to structured density estimation.
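As an illustration of the implicit density estimator the abstract refers to, the sketch below (not taken from the paper; the network weights, hidden width, and decoder noise scale `sigma` are arbitrary illustrative choices) shows how a shallow generator with a one-dimensional Gaussian latent variable and Gaussian decoder noise induces a density that is a continuous mixture of normals, which can be evaluated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shallow generator g: one hidden ReLU layer mapping a 1-D latent to a
# 1-D observation.  The weights below are arbitrary illustrative values,
# not parameters fitted by a VAE.
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def g(z):
    h = np.maximum(W1 @ z.reshape(1, -1) + b1[:, None], 0.0)
    return (W2 @ h + b2[:, None]).ravel()

sigma = 0.3                    # assumed Gaussian decoder noise scale
z = rng.normal(size=2000)      # latent samples z_i ~ N(0, 1)
mu = g(z)                      # mixture centres g(z_i)

def implicit_density(x):
    """Monte Carlo estimate of p(x) = E_z[ N(x; g(z), sigma^2) ],
    i.e. the continuous normal mixture induced by the generator."""
    d = (x[:, None] - mu[None, :]) / sigma
    return np.exp(-0.5 * d ** 2).mean(axis=1) / (sigma * np.sqrt(2.0 * np.pi))

# Sanity check: the induced density integrates to (approximately) one.
xs = np.linspace(mu.min() - 5.0, mu.max() + 5.0, 2001)
mass = (implicit_density(xs) * (xs[1] - xs[0])).sum()
print(round(mass, 2))  # ≈ 1.0
```

A finite mixture of normals, as in the approximation result the proof relies on, corresponds to replacing the Monte Carlo average over latent draws with a fixed, finite set of centres and weights.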