Sign Language Production (SLP) aims to automatically translate spoken languages into sign sequences. The core step of SLP is to transform sign gloss sequences into their corresponding sign pose sequences (G2P). Most existing G2P models perform this conditional long-range generation in an autoregressive manner, which inevitably leads to error accumulation. To address this issue, we propose a vector quantized diffusion method for conditional pose sequence generation, called PoseVQ-Diffusion, which is an iterative non-autoregressive method. Specifically, we first introduce a vector quantized variational autoencoder (Pose-VQVAE) to represent a pose sequence as a sequence of latent codes. We then model this discrete latent space with an extension of the recently developed diffusion architecture. To better leverage spatial-temporal information, we introduce a novel architecture, CodeUnet, to generate higher-quality pose sequences in the discrete space. Moreover, taking advantage of the learned codes, we develop a novel sequential k-nearest-neighbours method to predict the variable lengths of pose sequences for the corresponding gloss sequences. Consequently, compared with autoregressive G2P models, our model achieves faster sampling and significantly better results. Compared with previous non-autoregressive G2P methods, PoseVQ-Diffusion improves the predicted results through iterative refinement, achieving state-of-the-art results on the SLP evaluation benchmark.
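To make the first stage concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the vector-quantization step that a Pose-VQVAE relies on: each continuously encoded pose frame is mapped to the index of its nearest codebook vector, so a pose sequence becomes a sequence of discrete latent codes over which the diffusion model can operate. Names such as `PoseQuantizer`, `num_codes`, and `latent_dim` are assumptions for the sketch.

```python
# Illustrative sketch of VQ-VAE-style quantization for pose latents (assumed names/shapes).
import torch
import torch.nn as nn

class PoseQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, latent_dim: int = 128):
        super().__init__()
        # Learnable codebook: one latent vector per discrete code.
        self.codebook = nn.Embedding(num_codes, latent_dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, time, latent_dim) continuous encodings of pose frames.
        # Distance from each frame encoding to every codebook entry.
        book = self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1)
        dist = torch.cdist(z, book)                  # (batch, time, num_codes)
        codes = dist.argmin(dim=-1)                  # (batch, time) discrete latent codes
        z_q = self.codebook(codes)                   # quantized latents for the decoder
        # Straight-through estimator so gradients still reach the encoder.
        z_q = z + (z_q - z).detach()
        return codes, z_q

# Usage: quantize encoder outputs, then decode z_q or run discrete diffusion over `codes`.
quantizer = PoseQuantizer()
z = torch.randn(2, 16, 128)                          # placeholder encoder output
codes, z_q = quantizer(z)
```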