The Variational Auto-Encoder (VAE) has become a de facto learning paradigm for achieving both representation learning and generation in natural language. However, existing VAE-based language models either employ elementary RNNs, which are not powerful enough to handle complex situations, or fine-tune two pre-trained language models (PLMs) for each downstream task, which is a huge drain on resources. In this paper, we introduce the first VAE framework empowered with adaptive GPT-2s (AdaVAE). Different from existing systems, we unify both the encoder and the decoder of the VAE model using GPT-2s with adaptive parameter-efficient components. Experiments from multiple dimensions validate that AdaVAE is competent to better organize language in both generation and representation-modeling tasks, even with fewer than $15\%$ of parameters activated during training. Our code is available at \url{https://github.com/ImKeTT/adavae}.
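To make the described setup concrete, below is a minimal PyTorch sketch (using Hugging Face `transformers`) of a VAE that reuses frozen GPT-2s as both encoder and decoder and trains only small added components. The mean-pooled posterior, soft-prefix latent injection, and single residual adapter here are illustrative assumptions for exposition, not AdaVAE's exact parameter-efficient design (see the paper and repository for the actual architecture).

```python
# Illustrative sketch: a GPT-2-based VAE with frozen backbones and small
# trainable modules, in the spirit of AdaVAE. Pooling, latent injection,
# and adapter placement are simplifying assumptions, not the authors' design.
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2LMHeadModel

class GPT2VAESketch(nn.Module):
    def __init__(self, latent_dim=32, bottleneck=64):
        super().__init__()
        self.encoder = GPT2Model.from_pretrained("gpt2")
        self.decoder = GPT2LMHeadModel.from_pretrained("gpt2")
        hidden = self.encoder.config.n_embd
        # Freeze both pre-trained backbones; only the small modules below
        # receive gradients, keeping the trainable-parameter count low.
        for p in self.encoder.parameters():
            p.requires_grad = False
        for p in self.decoder.parameters():
            p.requires_grad = False
        # Trainable parameter-efficient components.
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.latent_to_prefix = nn.Linear(latent_dim, hidden)
        self.adapter = nn.Sequential(  # bottleneck adapter (residual)
            nn.Linear(hidden, bottleneck), nn.GELU(), nn.Linear(bottleneck, hidden)
        )

    def forward(self, input_ids, attention_mask):
        # Posterior q(z|x): mean-pool encoder hidden states, then project.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        pooled = (h * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(1, keepdim=True)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        # Inject z as one soft prefix embedding prepended to the decoder input.
        prefix = self.latent_to_prefix(z).unsqueeze(1)
        tok_emb = self.decoder.transformer.wte(input_ids)
        tok_emb = tok_emb + self.adapter(tok_emb)  # residual adapter on embeddings
        inputs_embeds = torch.cat([prefix, tok_emb], dim=1)
        mask = torch.cat([torch.ones_like(attention_mask[:, :1]), attention_mask], dim=1)
        labels = torch.cat([torch.full_like(input_ids[:, :1], -100), input_ids], dim=1)
        out = self.decoder(inputs_embeds=inputs_embeds, attention_mask=mask, labels=labels)
        # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        return out.loss + kl  # negative ELBO: reconstruction + KL
```

In this sketch only the four small modules train, which is how a low activated-parameter ratio arises; the actual AdaVAE inserts its adaptive components inside the transformer blocks rather than at the embedding layer.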