Dot product latent space embedding is a common form of representation learning for undirected graphs (e.g. social networks, co-occurrence networks). We show that such models have trouble with 'intransitive' situations where A is linked to B and B is linked to C, but A is not linked to C. Such situations occur in social networks when opposites attract (heterophily) and in co-occurrence networks when there are substitute nodes (e.g. the presence of Pepsi or Coke, but rarely both, in otherwise similar purchase baskets). We present a simple expansion that we call the attract-repel (AR) decomposition: a set of latent attributes on which similar nodes attract and another set of latent attributes on which similar nodes repel. We demonstrate the AR decomposition on real social networks and show that it can be used to measure the amount of latent homophily and heterophily. In addition, it can be applied to co-occurrence networks to discover roles in teams and to find substitutable ingredients in recipes.
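As a rough illustration of the idea (a sketch, not the authors' code), one can write the AR score of a pair as an attract dot product minus a repel dot product. The toy vectors below are purely illustrative; they show how the repel component can accommodate the intransitive pattern (A-B and B-C linked, A-C not) that a plain dot product struggles to represent.

```python
# Minimal sketch, assuming an AR score of the form
#     score(i, j) = a_i . a_j  -  r_i . r_j
# where a_i are "attract" vectors and r_i are "repel" vectors.
import numpy as np

def dot_product_score(z_i, z_j):
    """Standard dot-product latent-space score: high when embeddings align."""
    return float(z_i @ z_j)

def attract_repel_score(a_i, r_i, a_j, r_j):
    """AR score: attraction on one set of attributes minus repulsion on another."""
    return float(a_i @ a_j - r_i @ r_j)

# Intransitive toy example: A-B and B-C are linked, A-C is not.
# With a plain dot product this is hard to fit: if a_A and a_C both align
# strongly with a_B, their own dot product tends to be large as well.
# A shared repel attribute lets A and C cancel their attraction
# (e.g. Pepsi and Coke as substitutes in similar baskets).
a = {"A": np.array([1.0, 0.0]), "B": np.array([1.0, 1.0]), "C": np.array([1.0, 0.0])}
r = {"A": np.array([1.0]),      "B": np.array([0.0]),      "C": np.array([1.0])}

for i, j in [("A", "B"), ("B", "C"), ("A", "C")]:
    print(i, j, attract_repel_score(a[i], r[i], a[j], r[j]))
# A-B and B-C score 1.0, while A-C scores 0.0 despite identical attract vectors.
```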