https://www.cell.com/patterns/pdf/S2666-3899(21)00099-4.pdf

### Related content

Existing Collaborative Filtering (CF) methods are mostly designed based on the idea of matching, i.e., by learning user and item embeddings from data using shallow or deep models, they try to capture the associative relevance patterns in data, so that a user embedding can be matched with relevant item embeddings using designed or learned similarity functions. However, as a cognitive rather than a perceptual task, recommendation requires not only the ability of pattern recognition and matching from data, but also the ability of cognitive reasoning in data. In this paper, we propose to advance CF to Collaborative Reasoning (CR), which means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other. Technically, we propose a Neural Collaborative Reasoning (NCR) framework to bridge learning and reasoning. Specifically, we integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives, and logic facilitates cognitive reasoning for informed decision making. An important challenge, however, is to bridge differentiable neural networks and symbolic reasoning in a shared architecture for optimization and inference. To solve the problem, we propose a modularized reasoning architecture, which learns logical operations such as AND ($\wedge$), OR ($\vee$) and NOT ($\neg$) as neural modules for implication reasoning ($\rightarrow$). In this way, logical expressions can be equivalently organized as neural networks, so that logical reasoning and prediction can be conducted in a continuous space. Experiments on real-world datasets verified the advantages of our framework over shallow, deep, and reasoning models.
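The core idea of the modularized reasoning architecture can be sketched in a few lines: each logical operation is its own small neural module over event embeddings, and an implication $a \rightarrow b$ is evaluated by rewriting it as $\neg a \vee b$ and composing the modules. The sketch below is illustrative only, with untrained random weights; the names `NOT`, `OR`, `truth_score`, and the anchor vector `TRUE` are assumptions for the sketch, and the real NCR framework uses trained multi-layer modules with logical regularizers, not one random linear layer.

```python
import math
import random

random.seed(0)
DIM = 4  # embedding size (illustrative)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def linear_tanh(W):
    """A one-layer neural module x -> tanh(W x), standing in for a trained MLP."""
    def f(x):
        return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]
    return f

# Each logical operation is its own neural module over event embeddings.
NOT = linear_tanh(rand_matrix(DIM, DIM))      # NOT(a) represents ¬a
OR = linear_tanh(rand_matrix(DIM, 2 * DIM))   # OR([a; b]) represents a ∨ b

def AND(a, b):
    # By De Morgan, a ∧ b = ¬(¬a ∨ ¬b), so AND can reuse the NOT/OR modules.
    return NOT(OR(NOT(a) + NOT(b)))

def implies(a, b):
    """Implication reasoning: a → b is rewritten as ¬a ∨ b and run as a network."""
    return OR(NOT(a) + b)

# Truth scoring: cosine similarity of an expression embedding to a (here
# random) anchor vector TRUE, so prediction happens in continuous space.
TRUE = [random.gauss(0, 1) for _ in range(DIM)]

def truth_score(e):
    dot = sum(x, y := 0) if False else sum(x * y for x, y in zip(e, TRUE))
    ne = math.sqrt(sum(x * x for x in e))
    nt = math.sqrt(sum(y * y for y in TRUE))
    return dot / (ne * nt + 1e-9)

a = [random.gauss(0, 1) for _ in range(DIM)]
b = [random.gauss(0, 1) for _ in range(DIM)]
score = truth_score(implies(a, b))
```

Because every operator is a differentiable module, an entire logical expression over a user's interaction history compiles into one network whose output score can be trained end to end against observed preferences.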

Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLNs. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.
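The variational-inference idea can be made concrete on a toy MLN. Below is a minimal sketch, under stated assumptions: a hand-built rule Smokes(x) ∧ Friends(x, y) → Smokes(y) with weight `w`, and a mean-field posterior that assigns each latent fact an independent Bernoulli probability. In ExpressGNN those probabilities come from GNN embeddings of the entities; here they are fixed stand-in scores, and the ELBO is computed by exact enumeration since there are only two latent variables.

```python
import itertools
import math

# Hypothetical tiny MLN: persons A, B, C; Friends(A,B) and Friends(B,C)
# observed true; Smokes(A) observed true; Smokes(B), Smokes(C) latent.
# Single weighted rule: Smokes(x) ∧ Friends(x, y) → Smokes(y).
w = 1.5
friends = [("A", "B"), ("B", "C")]
observed = {"A": 1}
latent = ["B", "C"]

def rule_truth(smokes, x, y):
    # Clause ¬Smokes(x) ∨ Smokes(y); the Friends atom is observed true.
    return 1.0 if (not smokes[x]) or smokes[y] else 0.0

def log_unnorm(smokes):
    # Unnormalized log-probability: sum of weights of satisfied groundings.
    return w * sum(rule_truth(smokes, x, y) for x, y in friends)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Mean-field posterior q(Smokes(p)=1) = sigmoid(score(p)); in ExpressGNN the
# scores would be produced by a GNN over the friendship graph.
q = {"B": sigmoid(0.8), "C": sigmoid(0.3)}

def elbo():
    """ELBO = E_q[log p̃(latent, observed)] + H(q), by exact enumeration."""
    val = 0.0
    for bits in itertools.product([0, 1], repeat=len(latent)):
        assign = dict(observed)
        prob = 1.0
        for p, v in zip(latent, bits):
            assign[p] = v
            prob *= q[p] if v else 1.0 - q[p]
        val += prob * log_unnorm(assign)
    for p in latent:  # entropy of the factorized Bernoulli posterior
        for pr in (q[p], 1.0 - q[p]):
            if pr > 0.0:
                val -= pr * math.log(pr)
    return val
```

Maximizing this ELBO with respect to the parameters that produce `q` is exactly the learning signal ExpressGNN backpropagates through its GNN; enumeration is replaced by sampling when the number of latent facts is large.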
