The multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications. In MuLaNE, there are multiple network layers, each node has an importance weight, and each layer is explored by a random walk. The MuLaNE task is to allocate the total random walk budget $B$ among the network layers so that the total weight of the unique nodes visited by the random walks is maximized. We systematically study this problem from offline optimization to online learning. For the offline optimization setting, where the network structure and node weights are known, we provide greedy-based constant-ratio approximation algorithms for overlapping networks, and greedy- or dynamic-programming-based optimal solutions for non-overlapping networks. For the online learning setting, neither the network structure nor the node weights are known initially. We adapt the combinatorial multi-armed bandit framework and design algorithms that learn the random-walk-related parameters and node weights while optimizing the budget allocation over multiple rounds, and we prove that they achieve logarithmic regret bounds. Finally, we conduct experiments on a real-world social network dataset to validate our theoretical results.
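To make the budget-allocation task concrete, the following is a minimal Python sketch of a greedy allocation loop in the spirit of the greedy-based algorithms mentioned above. It assumes access to a `marginal_gain` oracle that estimates the increase in expected total weight of unique visited nodes when one more budget unit is given to a layer; this oracle, the function names, and the toy diminishing-returns example are illustrative assumptions, not the paper's actual method or parameters.

```python
def greedy_allocate(marginal_gain, num_layers, budget):
    """Greedily assign one budget unit at a time to the layer with the
    largest estimated marginal gain in total weighted node coverage.

    marginal_gain(layer, current_budget) is an assumed oracle returning the
    estimated increase in the expected total weight of unique visited nodes
    when the random walk on `layer` receives one more budget unit.
    """
    allocation = [0] * num_layers
    for _ in range(budget):
        # Pick the layer whose next budget unit yields the largest estimated gain.
        best = max(range(num_layers),
                   key=lambda i: marginal_gain(i, allocation[i]))
        allocation[best] += 1
    return allocation


if __name__ == "__main__":
    # Toy example: layer i has weight mass weights[i] and diminishing returns,
    # so one more unit on a layer with b units already gains weights[i] / (b + 1).
    weights = [3.0, 2.0, 1.0]
    gain = lambda i, b: weights[i] / (b + 1)
    print(greedy_allocate(gain, num_layers=3, budget=6))  # e.g. [3, 2, 1]
```

When the per-layer coverage gains are diminishing in the allocated budget, this kind of unit-by-unit greedy rule is the natural candidate behind constant-ratio guarantees; the paper's actual algorithms and analysis for overlapping and non-overlapping networks are developed in the later sections.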