Recently proposed generative models for discrete data, such as Masked Diffusion Models (MDMs), exploit conditional independence approximations to reduce the computational cost of popular Auto-Regressive Models (ARMs), at the price of some bias in the sampling distribution. We study the resulting computation-vs-accuracy trade-off, providing general error bounds (in relative entropy) that depend only on the average number of tokens generated per iteration and are independent of the data dimensionality (i.e., sequence length), thus supporting the empirical success of MDMs. We then investigate the gain obtained by using non-constant schedule sizes (i.e., varying the number of tokens unmasked per iteration during generation) and identify the optimal schedule as a function of a so-called information profile of the data distribution, thereby enabling a principled optimization of schedule sizes. We define the methods directly as sampling algorithms, rather than via the classical derivation as time-reversed diffusion processes, which leads to simple and transparent proofs.
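To make the sampling scheme concrete, the following is a minimal Python sketch (not from the paper) of schedule-driven masked sampling. The names mdm_sample and model_probs, the MASK sentinel, and the uniform toy conditionals are all illustrative assumptions, standing in for a trained network that predicts per-position conditionals given a partially masked sequence.

    import numpy as np

    MASK = -1  # sentinel id for a masked position (illustrative choice)

    def mdm_sample(model_probs, seq_len, vocab_size, schedule, rng):
        # Start fully masked; `schedule` lists how many tokens to reveal
        # per iteration and must sum to seq_len. Exact ARM sampling is
        # the special case schedule = [1] * seq_len.
        x = np.full(seq_len, MASK, dtype=int)
        order = list(rng.permutation(seq_len))
        for step_size in schedule:
            reveal, order = order[:step_size], order[step_size:]
            # One model call per iteration: per-position conditionals
            # given the current partial sequence, shape (seq_len, vocab_size).
            probs = model_probs(x)
            for i in reveal:
                # Positions revealed in the same iteration are drawn
                # independently of each other: the conditional-independence
                # approximation that trades sampling accuracy for fewer calls.
                x[i] = rng.choice(vocab_size, p=probs[i])
        return x

    def uniform_probs(x, vocab_size=5):
        # Toy stand-in for a trained network: uniform conditionals.
        return np.full((len(x), vocab_size), 1.0 / vocab_size)

    rng = np.random.default_rng(0)
    # A non-constant schedule: reveal few tokens early, more later.
    print(mdm_sample(uniform_probs, seq_len=8, vocab_size=5,
                     schedule=[1, 1, 2, 4], rng=rng))

In this sketch each iteration costs one model evaluation, so the schedule [1, 1, 2, 4] generates 8 tokens in 4 calls, whereas ARM sampling would need 8; the error bounds described above control, in relative entropy, the bias incurred by the independent draws within an iteration.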