Variational autoencoders (VAEs) are deep probabilistic models that are used in scientific applications. However, they are prone to overfitting, which degrades their generalization performance. Many works try to mitigate this problem from the probabilistic-methods perspective through new inference techniques or training procedures. In this paper, we instead approach the problem from the deep learning perspective by investigating the effectiveness of synthetic data and overparameterization for improving generalization performance. Our motivation comes from (1) the recent discussion on whether the increasing amount of publicly accessible synthetic data will improve or hurt currently trained generative models; and (2) the modern deep learning insight that overparameterization improves generalization. Our investigation shows how both training on samples from a pre-trained diffusion model and using more parameters at certain layers effectively mitigate overfitting in VAEs, thereby improving their generalization, amortized inference, and robustness performance. Our study provides timely insights in the current era of synthetic data and scaling laws.
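To make the synthetic-data idea concrete, below is a minimal sketch (not the paper's code) of how a VAE's training set could be augmented with samples drawn from a pre-trained generative model such as a diffusion model. All names here (`build_augmented_loader`, the stand-in sampler, the data shapes) are hypothetical placeholders for illustration only.

```python
# Hypothetical sketch: mix real training data with synthetic samples from a
# pre-trained generator before fitting a VAE. The sampler and data below are
# random stand-ins, not an actual diffusion model or dataset.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def build_augmented_loader(real_x, synth_sampler, n_synth, batch_size=128):
    """Combine real examples with n_synth generated examples into one loader."""
    synth_x = synth_sampler(n_synth)          # e.g. samples from a pre-trained diffusion model
    real_ds = TensorDataset(real_x)
    synth_ds = TensorDataset(synth_x)
    return DataLoader(ConcatDataset([real_ds, synth_ds]),
                      batch_size=batch_size, shuffle=True)

# Usage with placeholder tensors (flattened 28x28 images as an example shape):
real_x = torch.randn(1000, 784)                   # stand-in for the real training set
sampler = lambda n: torch.randn(n, 784)           # stand-in for a diffusion-model sampler
loader = build_augmented_loader(real_x, sampler, n_synth=1000)
# A VAE would then be trained on `loader` in the usual way (ELBO maximization).
```

The design choice illustrated here is simply to enlarge the effective training distribution with generated data; how many synthetic samples to mix in, and from which generator, are the kinds of questions the paper's experiments address.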