Monotonic neural networks have recently been proposed as a way to define invertible transformations. These transformations can be combined into powerful autoregressive flows that have been shown to be universal approximators of continuous probability distributions. Architectures that ensure monotonicity typically enforce constraints on weights and activation functions, which enables invertibility but caps the expressiveness of the resulting transformations. In this work, we propose the Unconstrained Monotonic Neural Network (UMNN) architecture, based on the insight that a function is monotonic as long as its derivative is strictly positive. This condition can be enforced with a free-form neural network whose only constraint is the positivity of its output. We evaluate our new invertible building block within a new autoregressive flow (UMNN-MAF) and demonstrate its effectiveness on density estimation experiments. We also illustrate the ability of UMNNs to improve variational inference.
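The core idea above can be sketched numerically: parameterize the derivative with an unconstrained network whose output is made strictly positive (here via a softplus), then recover the monotonic function by integrating it from zero. The tiny MLP and the trapezoidal quadrature below are hypothetical toy stand-ins for illustration, not the paper's implementation (which uses Clenshaw-Curtis quadrature and learned networks).

```python
import numpy as np

def positive_net(t, w1, b1, w2, b2):
    # Free-form toy MLP: weights and activations are unconstrained;
    # only the final softplus forces the output (the derivative) to be > 0.
    h = np.tanh(t * w1 + b1)           # hidden layer, arbitrary weights
    out = h @ w2 + b2                  # unconstrained pre-activation
    return np.log1p(np.exp(out))       # softplus -> strictly positive

def monotonic_f(x, params, n_points=50):
    # f(x) = beta + integral_0^x g(t) dt, with g > 0 everywhere,
    # so f is strictly increasing and hence invertible.
    w1, b1, w2, b2, beta = params
    t = np.linspace(0.0, x, n_points)
    g = np.array([positive_net(ti, w1, b1, w2, b2) for ti in t])
    dt = t[1] - t[0] if n_points > 1 else 0.0
    integral = dt * (g[:-1] + g[1:]).sum() / 2.0  # trapezoidal rule
    return beta + integral

# Random parameters: no monotonicity constraint is placed on them.
rng = np.random.default_rng(0)
params = (rng.normal(size=8), rng.normal(size=8),
          rng.normal(size=8), rng.normal(), rng.normal())
```

Because positivity is imposed only on the integrand's output, the network computing it can use any architecture, which is the source of the extra expressiveness compared with weight-constrained monotonic networks.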