Large bibliographic networks are sparse -- the average node degree is small. This is not necessarily true for their products: in some cases a product ``explodes'', that is, it is no longer sparse and its computation becomes too expensive in both time and space. An approach in such cases is to reduce the complexity of the problem by limiting our attention to a selected subset of important nodes and computing with the corresponding truncated networks. The nodes can be selected by different criteria. One option is to take the most important nodes in the derived network, the nodes with the largest weighted degree. It turns out that the weighted degrees in the derived network can be computed efficiently without computing the derived network itself.
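To illustrate the last claim, here is a minimal sketch, assuming the derived network is given as a matrix product $C = A \cdot B$ of sparse matrices. The weighted (out-)degree of node $i$ in $C$ is $\sum_j c_{ij} = \sum_k a_{ik} \bigl(\sum_j b_{kj}\bigr)$, so it suffices to multiply $A$ by the vector of row sums of $B$ -- two sparse matrix--vector products -- and the product $C$ itself is never formed. The sketch uses Python with \texttt{scipy.sparse}; the function name and the example matrix \texttt{WA} are hypothetical, not part of the original text.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def weighted_degrees_of_product(A, B):
    """Weighted (out-)degrees of the derived network C = A @ B,
    computed without materializing C.

    wdeg(i) = sum_j C[i, j]
            = sum_k A[i, k] * (sum_j B[k, j])
            = (A @ rowsums(B))[i]
    Only two sparse matrix-vector products are needed.
    """
    ones = np.ones(B.shape[1])
    B_rowsums = B @ ones        # row sums of B
    return A @ B_rowsums        # weighted degrees in C = A @ B

# Hypothetical usage: WA is a sparse works x authors authorship
# matrix; the co-authorship network is Co = WA.T @ WA, and its
# weighted degrees are obtained without computing Co.
WA = csr_matrix(np.array([[1, 1, 0],
                          [0, 1, 1],
                          [1, 0, 1]]))
print(weighted_degrees_of_product(WA.T, WA))
\end{verbatim}
Selecting the nodes with the largest values of this vector then gives the subset used to build the truncated networks.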