The well-known Gumbel-Max Trick for sampling elements from a categorical distribution (or, more generally, a non-negative vector) and its variants have been widely used in areas such as machine learning and information retrieval. To sample a random element $i$ in proportion to its positive weight $v_i$, the Gumbel-Max Trick first computes a Gumbel random variable $g_i$ for each positive-weight element $i$, and then samples the element $i$ with the largest value of $g_i+\ln v_i$. Recently, applications including similarity estimation and weighted cardinality estimation require generating $k$ independent Gumbel-Max variables from high-dimensional vectors. However, the traditional Gumbel-Max Trick is computationally expensive when $k$ is large (e.g., hundreds or even thousands). To solve this problem, we propose a novel algorithm, FastGM, which reduces the time complexity from $O(kn^+)$ to $O(k \ln k + n^+)$, where $n^+$ is the number of positive elements in the vector of interest. FastGM terminates the computation of Gumbel random variables early for many elements, especially those with small weights. We conduct experiments on a variety of real-world datasets, and the results demonstrate that FastGM is orders of magnitude faster than state-of-the-art methods without sacrificing accuracy or incurring additional costs.
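To make the baseline concrete, the following is a minimal Python sketch of the traditional $O(kn^+)$ Gumbel-Max procedure that FastGM accelerates, not FastGM itself; the function name `gumbel_max_sketch` and its parameters are illustrative, not from the paper.

```python
import numpy as np

def gumbel_max_sketch(v, k, seed=0):
    """Traditional O(k * n^+) Gumbel-Max sampling (the baseline FastGM speeds up).

    For each of the k repetitions, draw one standard Gumbel variable per
    positive element and keep the index maximizing g_i + ln(v_i).
    """
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(v > 0)            # indices of the n^+ positive entries
    log_v = np.log(v[pos])
    samples = np.empty(k, dtype=int)
    for j in range(k):
        g = rng.gumbel(size=pos.size)      # one Gumbel variable per positive element
        samples[j] = pos[np.argmax(g + log_v)]
    return samples

# Example: draw 5 independent Gumbel-Max samples in proportion to the weights.
weights = np.array([0.0, 2.0, 1.0, 0.0, 3.0])
print(gumbel_max_sketch(weights, k=5))
```

Each of the $k$ repetitions touches all $n^+$ positive elements, which is exactly the cost that becomes prohibitive for large $k$ and high-dimensional vectors.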