We develop a family of techniques to align word embeddings that are derived from different source datasets or created using different mechanisms (e.g., GloVe or word2vec). Our methods are simple and have a closed form: they optimally rotate, translate, and scale one embedding onto another within the same dimensional space, minimizing the root mean squared error or maximizing the average cosine similarity between two embeddings of a shared vocabulary. Our methods extend approaches known as Absolute Orientation, which are popular for aligning objects in three dimensions, and generalize an approach by Smith et al. (ICLR 2017). We prove new results for optimal scaling and for maximizing cosine similarity. We then demonstrate how to evaluate the similarity of embeddings from different sources or mechanisms, and show that certain properties, such as synonyms and analogies, are preserved across the embeddings and can be enhanced by simply aligning and averaging ensembles of embeddings.
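To make the closed-form alignment concrete, below is a minimal sketch in the spirit of Absolute Orientation (an orthogonal Procrustes rotation plus translation and an optimal isotropic scale). It is an illustrative NumPy implementation under stated assumptions, not the paper's reference code; the function name align_embeddings and the scale formula derivation are assumptions made here for clarity.

```python
import numpy as np

def align_embeddings(A, B):
    """Align B onto A (both n x d matrices; row i is the same word in each),
    using a closed-form rotate / translate / scale solution.
    Illustrative sketch; not the paper's reference implementation."""
    # Translate: center both embeddings at the origin.
    a_mean, b_mean = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - a_mean, B - b_mean
    # Rotate: the optimal orthogonal map is the classic orthogonal
    # Procrustes solution, obtained from the SVD of the cross-covariance.
    U, S, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt
    # Scale: the closed-form optimal isotropic scale given the rotation,
    # s = trace(Sigma) / ||B0||_F^2 (assumed derivation for RMSE).
    s = S.sum() / (B0 ** 2).sum()
    # Map B into A's coordinate frame: scale, rotate, restore A's center.
    return s * (B0 @ R) + a_mean
```

Under this sketch, a call such as align_embeddings(glove_vectors, word2vec_vectors) would return the word2vec matrix expressed in the GloVe coordinate frame, after which corresponding rows can be compared directly or averaged into an ensemble embedding.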