We present a randomized approximation scheme for the permanent of a matrix with nonnegative entries. Our scheme extends a recursive rejection sampling method of Huber and Law (SODA 2008) by replacing the upper bound for the permanent with a linear combination of the subproblem bounds at a moderately large depth of the recursion tree. This method, which we call deep rejection sampling, is empirically shown to outperform the basic, depth-zero variant, as well as a related method by Kuck et al. (NeurIPS 2019). We analyze the expected running time of the scheme on random $(0, 1)$-matrices where each entry is independently $1$ with probability $p$. Our bound is superior to a previous one for $p$ less than $1/5$, matching another bound that was known to hold when every row and column has density exactly $p$.
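To make the setup concrete, below is a minimal Python sketch of the basic, depth-zero rejection sampler that the scheme extends. It uses the simple product-of-row-sums bound as a self-reducible upper bound $U(A) \geq \mathrm{per}(A)$, so a trial completes a permutation with probability $\mathrm{per}(A)/U(A)$ and $U(A)$ times the empirical acceptance rate is an unbiased estimate of the permanent. The choice of bound and the names (`row_sum_bound`, `estimate_permanent`) are illustrative assumptions, not from the paper; Huber and Law's method, and the deep variant described above, use sharper bounds.

```python
import numpy as np

def row_sum_bound(A):
    """Product of row sums: a simple self-reducible upper bound on per(A).
    (Illustrative stand-in for the sharper bound of Huber and Law.)"""
    return float(np.prod(A.sum(axis=1))) if A.size else 1.0

def sample_once(A, rng):
    """One trial of the depth-zero rejection sampler.
    Returns True iff a complete permutation is accepted."""
    n = A.shape[0]
    cols = list(range(n))          # columns still available
    for i in range(n):
        U_here = row_sum_bound(A[i:, cols])
        if U_here == 0.0:
            return False           # subproblem permanent is zero
        below = A[i + 1:, :]
        # Column j is assigned to row i with probability proportional to
        # A[i, j] times the bound of the remaining subproblem; the
        # leftover probability mass corresponds to rejection.
        weights = np.array([
            A[i, j] * row_sum_bound(below[:, [c for c in cols if c != j]])
            for j in cols
        ])
        cum = np.cumsum(weights / U_here)
        k = int(np.searchsorted(cum, rng.random()))
        if k >= len(cols):
            return False           # fell into the rejection mass
        cols.pop(k)
    return True

def estimate_permanent(A, trials=20000, seed=0):
    """Unbiased estimate: per(A) = U(A) * Pr[accept]."""
    A = np.asarray(A, dtype=float)
    rng = np.random.default_rng(seed)
    accepts = sum(sample_once(A, rng) for _ in range(trials))
    return row_sum_bound(A) * accepts / trials

if __name__ == "__main__":
    A = [[1, 1, 0],
         [1, 1, 1],
         [0, 1, 1]]
    print(estimate_permanent(A))   # exact permanent of this matrix is 3
```

In this sketch the deep variant would replace `row_sum_bound` at the root with a linear combination of the subproblem bounds taken at a chosen depth of the recursion tree, tightening $U(A)$ and hence raising the acceptance rate.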