Direct loss minimization is a popular approach for learning predictors over structured label spaces. This approach is computationally appealing because it replaces integration with optimization and allows gradients to be propagated through a deep net using loss-perturbed prediction. Recently, this technique was extended to generative models by introducing a randomized predictor that samples a structure from a randomly perturbed score function. In this work, we learn the variance of these randomized structured predictors and show that it better balances the learned score function against the randomized noise in structured prediction. We demonstrate empirically the effectiveness of learning the balance between the signal and the random noise in structured discrete spaces.
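A minimal sketch of the kind of randomized predictor described above: a candidate structure is sampled by perturbing each score with Gumbel noise scaled by a variance parameter. The function and parameter names (`sample_structure`, `epsilon`) are illustrative assumptions, not the paper's implementation; in the paper the variance itself is learned, whereas here it is a fixed input.

```python
import numpy as np

def sample_structure(scores, epsilon, rng):
    """Sketch of a perturb-and-MAP style randomized predictor (illustrative,
    not the paper's code): add Gumbel noise scaled by the variance parameter
    `epsilon` to each candidate's score, then take the argmax. `epsilon`
    balances the learned score function against the random noise:
    epsilon -> 0 recovers the deterministic MAP predictor, while a large
    epsilon lets the noise dominate the scores."""
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))
    return int(np.argmax(scores + epsilon * gumbel))

rng = np.random.default_rng(0)
scores = np.array([2.0, 1.0, 0.5])  # toy scores for three candidate structures
# With epsilon = 0 the sampler is deterministic: it returns the top-scoring index.
map_choice = sample_structure(scores, 0.0, rng)
# With a larger epsilon, lower-scoring structures are also sampled.
draws = [sample_structure(scores, 5.0, rng) for _ in range(1000)]
```

Learning `epsilon` (e.g. by gradient updates through the loss-perturbed prediction) is what the work above studies; this sketch only shows the sampling step that the variance controls.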