The Gumbel-max trick is a method to draw a sample from a categorical distribution, given by its unnormalized (log-)probabilities. In recent years, the machine learning community has proposed several extensions of this trick to facilitate, for example, drawing multiple samples, sampling from structured domains, or gradient estimation for error backpropagation in neural network optimization. The goal of this survey article is to present background on the Gumbel-max trick and to provide a structured overview of its extensions to ease algorithm selection. Moreover, it presents a comprehensive outline of the (machine learning) literature in which Gumbel-based algorithms have been leveraged, reviews commonly made design choices, and sketches a future perspective.
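Since the abstract only names the trick, the following is a minimal illustrative sketch (in NumPy; the function name gumbel_max_sample and the example logits are ours, not from the survey) of how the Gumbel-max trick draws a categorical sample: perturb each unnormalized log-probability with independent Gumbel(0, 1) noise and return the argmax.

```python
import numpy as np

def gumbel_max_sample(logits, rng=None):
    """Draw one sample from the categorical distribution defined by
    unnormalized log-probabilities via the Gumbel-max trick."""
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: g = -log(-log(u)), with u ~ Uniform(0, 1)
    gumbel_noise = -np.log(-np.log(rng.uniform(size=len(logits))))
    # The index of the maximal perturbed logit is distributed
    # according to softmax(logits).
    return int(np.argmax(logits + gumbel_noise))

# Sanity check: empirical sample frequencies should approach softmax(logits).
logits = np.array([0.5, 1.2, -0.3])
samples = [gumbel_max_sample(logits) for _ in range(10_000)]
print(np.bincount(samples, minlength=len(logits)) / len(samples))
print(np.exp(logits) / np.exp(logits).sum())
```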