Recently, CKY-based models have shown great potential in unsupervised grammar induction thanks to their human-like encoding paradigm, which runs recursively and hierarchically but requires $O(n^3)$ time complexity. Recursive Transformer based on Differentiable Trees (R2D2) makes it possible to scale to large language model pre-training even with a complex tree encoder by introducing a heuristic pruning method. However, the rule-based pruning approach suffers from local optima and slow inference. In this paper, we address both issues with a unified method. We propose to use a top-down parser as a model-based pruning method, which also enables parallel encoding during inference. Specifically, our parser casts parsing as a split-point scoring task: it first scores all split points for a given sentence and then recursively splits a span into two by picking the split point with the highest score within the current span. The reverse order of the splits is taken as the pruning order in the R2D2 encoder. Besides the bidirectional language model loss, we also optimize the parser by minimizing the KL divergence between the tree probabilities produced by the parser and by R2D2. Our experiments show that our Fast-R2D2 significantly improves grammar induction performance and achieves competitive results in downstream classification tasks.
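To illustrate the top-down split-point procedure described above, here is a minimal sketch (not the paper's implementation): it assumes a hypothetical array of precomputed split-point scores, recursively splits each span at its highest-scoring split point, and reverses the split order to obtain a pruning order for the R2D2 encoder.

```python
# Sketch of top-down split-point parsing and the derived pruning order.
# `scores` and `top_down_parse` are illustrative names, not from the paper.
from typing import List


def top_down_parse(scores: List[float]) -> List[int]:
    """scores[i] is the parser score for splitting between token i and
    token i + 1; returns split points in top-down order."""
    splits: List[int] = []

    def split(left: int, right: int) -> None:
        # Spans covering fewer than two tokens cannot be split further.
        if right - left < 1:
            return
        # Pick the split point with the highest score inside the span.
        best = max(range(left, right), key=lambda i: scores[i])
        splits.append(best)
        split(left, best)        # left child span
        split(best + 1, right)   # right child span

    split(0, len(scores))  # n - 1 candidate split points for n tokens
    return splits


if __name__ == "__main__":
    # Toy scores for a 5-token sentence (4 candidate split points).
    scores = [0.1, 0.9, 0.3, 0.7]
    splits = top_down_parse(scores)
    pruning_order = list(reversed(splits))  # reverse splits -> pruning order
    print("top-down splits:", splits)       # [1, 0, 3, 2]
    print("pruning order:  ", pruning_order)
```

The reversed list visits small spans before the larger spans that contain them, which matches the bottom-up order in which the R2D2 encoder composes and prunes cells.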