We study approximation of probability measures supported on $n$-dimensional manifolds embedded in $\mathbb{R}^m$ by injective flows: neural networks composed of invertible flows and injective layers. We show that, in general, injective flows between $\mathbb{R}^n$ and $\mathbb{R}^m$ universally approximate measures supported on images of extendable embeddings, which are a subset of standard embeddings: when the embedding dimension $m$ is small, topological obstructions may preclude certain manifolds as admissible targets. When the embedding dimension is sufficiently large, $m \geq 3n+1$, we use an argument from algebraic topology known as the clean trick to prove that the topological obstructions vanish and injective flows universally approximate any differentiable embedding. Along the way we show that the studied injective flows admit efficient projections onto their range, and that their optimality can be established "in reverse," resolving a conjecture made in Brehmer and Cranmer (2020).
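To make the construction concrete, here is a minimal worked sketch; the zero-padding form of the injective layer and the symbols $\mathrm{pad}$, $g$, $\pi$ below are our illustrative assumptions (zero-padding is the choice used in Brehmer and Cranmer 2020), not notation from the paper:
$$
f = g \circ \mathrm{pad} : \mathbb{R}^n \to \mathbb{R}^m, \qquad \mathrm{pad}(z) = (z, 0, \dots, 0), \qquad g : \mathbb{R}^m \to \mathbb{R}^m \ \text{invertible}.
$$
Because $\mathrm{pad}$ admits the left inverse $\mathrm{pad}^{\dagger}(x_1, \dots, x_m) = (x_1, \dots, x_n)$, the map $\pi = f \circ \mathrm{pad}^{\dagger} \circ g^{-1}$ satisfies $\pi \circ f = f$ and $\pi \circ \pi = \pi$: it fixes the range $f(\mathbb{R}^n)$ pointwise and costs only one inversion of $g$ to evaluate, which illustrates (in an idempotent, not nearest-point, sense) how such flows can admit efficient projections onto their range.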