Self-attention mechanisms have achieved striking state-of-the-art (SOTA) results across a variety of sequence learning tasks, building on multi-headed dot-product attention that attends to global context at every position. Through a pseudo information highway, we introduce a gated component, the Self-Dependency Unit (SDU), which incorporates LSTM-style gating units to replenish internal semantic importance within the multi-dimensional latent space of individual representations. The subsidiary content-based SDU gates allow modulated latent embeddings to flow through skip connections, yielding a clear margin of improvement in convergence speed under gradient descent algorithms. We aim to unveil the role of the gating mechanism in aiding context-based Transformer modules, hypothesizing that SDU gates, especially in shallow layers, push the model faster towards suboptimal points during optimization.
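To make the idea concrete, below is a minimal sketch of how an LSTM-style self-gating unit could be attached to a Transformer block through a skip connection. The class names (SelfDependencyUnit, GatedTransformerBlock) and the specific gate parameterization, sigmoid(W_g x) * tanh(W_h x), are illustrative assumptions; the exact formulation used in the paper may differ.

```python
import torch
import torch.nn as nn


class SelfDependencyUnit(nn.Module):
    """Hypothetical sketch of a content-based, LSTM-style self-gating unit (SDU).

    Assumes the gate is computed from the token representation itself:
    gate = sigmoid(W_g x), candidate = tanh(W_h x), output = gate * candidate.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_model)
        self.cand_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gate modulates each latent dimension of the representation independently.
        return torch.sigmoid(self.gate_proj(x)) * torch.tanh(self.cand_proj(x))


class GatedTransformerBlock(nn.Module):
    """Transformer encoder block with an SDU added on a skip connection
    alongside the self-attention output (a pseudo information highway)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.sdu = SelfDependencyUnit(d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        # Residual path + global self-attention + gated self-dependency path.
        return self.norm(x + attn_out + self.sdu(x))


if __name__ == "__main__":
    # Toy usage: a batch of 2 sequences, 10 tokens each, model width 512.
    block = GatedTransformerBlock()
    out = block(torch.randn(2, 10, 512))
    print(out.shape)  # torch.Size([2, 10, 512])
```

In this sketch the SDU path depends only on the individual token representation, so it adds per-dimension gating without any extra cross-position interaction beyond the attention module itself.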