Incremental flow-based denoising models have reshaped generative modelling, but their empirical advantage still lacks a rigorous approximation-theoretic foundation. We show that incremental generation is necessary and sufficient for universal flow-based generation on the largest natural class of self-maps of $[0,1]^d$ compatible with denoising pipelines, namely the orientation-preserving homeomorphisms of $[0,1]^d$. All our guarantees are uniform in the underlying maps and hence imply approximation both samplewise and in distribution. Using a new topological-dynamical argument, we first prove an impossibility theorem: the class of all single-step autonomous flows, regardless of the architecture, width, depth, or Lipschitz activation of the underlying neural network, is meagre and therefore not universal in the space of orientation-preserving homeomorphisms of $[0,1]^d$. Conversely, by exploiting algebraic properties of autonomous flows, we show that every orientation-preserving Lipschitz homeomorphism of $[0,1]^d$ can be approximated at rate $\mathcal{O}(n^{-1/d})$ by a composition of at most $K_d$ such flows, where $K_d$ depends only on the dimension. Under additional smoothness assumptions, the approximation rate can be made dimension-free, and $K_d$ can be chosen uniformly over the class being approximated. Finally, by linearly lifting the domain into one higher dimension, we obtain structured universal approximation results for continuous functions and for probability measures on $[0,1]^d$, the latter realized as pushforwards of empirical measures with vanishing $1$-Wasserstein error.
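The quantitative composition result above can be stated schematically as follows; the constant $C$ and the flows $\Phi_1, \dots, \Phi_{K_d}$ are placeholder symbols introduced here for illustration, while the rate $n^{-1/d}$ and the dimension-dependent bound $K_d$ come from the abstract itself:

```latex
% Hedged sketch of the composition guarantee; C > 0 and the flows
% \Phi_1, ..., \Phi_{K_d} are placeholder symbols, not notation
% fixed by the paper.
\[
\sup_{x \in [0,1]^d}
\bigl\| f(x) - (\Phi_{K_d} \circ \cdots \circ \Phi_1)(x) \bigr\|
\;\le\; C\, n^{-1/d},
\]
where $f \colon [0,1]^d \to [0,1]^d$ is an orientation-preserving
Lipschitz homeomorphism and each $\Phi_i$ is realized by a
single-step autonomous flow.
```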