The Variational Auto-Encoder (VAE) has become the de-facto learning paradigm for achieving both representation learning and generation in natural language. However, existing VAE-based language models either employ elementary RNNs, which are not powerful enough to handle multiple tasks, or fine-tune two pre-trained language models (PLMs) for every downstream task, which requires huge energy consumption. In this paper, we introduce the first VAE framework empowered with adaptive GPT-2s (AdaVAE). Unlike the aforementioned systems, we unify both the encoder and the decoder of the VAE model using GPT-2s with adaptive, parameter-efficient components. Experiments along multiple dimensions validate that AdaVAE better organizes language in both generation and representation modeling, even with less than $15\%$ of parameters additionally activated during training. Our code is available at \url{https://github.com/ImKeTT/adavae}.
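To make the idea concrete, the following is a minimal PyTorch sketch of the general pattern the abstract describes: a VAE whose encoder and decoder are both frozen GPT-2 backbones, with small trainable bottleneck adapters inserted into each transformer block so that only a small fraction of parameters is updated during training. This is not the authors' implementation (see the repository above for that); the names `Adapter`, `AdapterGPT2VAE`, and `latent_to_prefix`, the single soft-prefix latent injection, and the bottleneck size are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Model


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


def insert_adapters(blocks, hidden_dim, bottleneck_dim):
    """Wrap each GPT-2 block's MLP so its output passes through a fresh adapter."""
    for block in blocks:
        block.mlp = nn.Sequential(block.mlp, Adapter(hidden_dim, bottleneck_dim))


class AdapterGPT2VAE(nn.Module):
    def __init__(self, latent_dim: int = 32, bottleneck_dim: int = 64):
        super().__init__()
        self.encoder = GPT2Model.from_pretrained("gpt2")
        self.decoder = GPT2LMHeadModel.from_pretrained("gpt2")
        hidden = self.encoder.config.hidden_size

        # Freeze both pre-trained backbones first; only the adapters and
        # latent heads added afterwards remain trainable.
        for p in self.parameters():
            p.requires_grad = False
        insert_adapters(self.encoder.h, hidden, bottleneck_dim)
        insert_adapters(self.decoder.transformer.h, hidden, bottleneck_dim)

        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        # The latent code is injected into the decoder as one soft prefix token
        # (an assumed injection scheme, chosen here for simplicity).
        self.latent_to_prefix = nn.Linear(latent_dim, hidden)

    def encode(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (h * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean-pooling
        return self.to_mu(pooled), self.to_logvar(pooled)

    def forward(self, input_ids, attention_mask):
        mu, logvar = self.encode(input_ids, attention_mask)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        prefix = self.latent_to_prefix(z).unsqueeze(1)
        tok_emb = self.decoder.transformer.wte(input_ids)
        inputs_embeds = torch.cat([prefix, tok_emb], dim=1)
        prefix_mask = torch.ones_like(attention_mask[:, :1])
        dec_mask = torch.cat([prefix_mask, attention_mask], dim=1)
        # With the prefix shift, logits[:, t] predicts token t of input_ids.
        logits = self.decoder(inputs_embeds=inputs_embeds,
                              attention_mask=dec_mask).logits[:, :-1]
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl
```

Because gradients flow only through the adapters and the latent heads, the trainable parameter count stays a small fraction of the two GPT-2 backbones, which is the kind of parameter efficiency the abstract's $15\%$ figure refers to.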