Computing maximum weight independent sets in graphs is an important NP-hard optimization problem. The problem is particularly difficult to solve in large graphs for which data reduction techniques do not work well. To be more precise, state-of-the-art branch-and-reduce algorithms can solve many large-scale graphs if reductions are applicable. However, if this is not the case, their performance quickly degrades because branching requires exponential time. In this paper, we develop an advanced memetic algorithm to tackle the problem, which incorporates recent data reduction techniques to compute near-optimal weighted independent sets in huge sparse networks. More precisely, we use a memetic approach to recursively choose vertices that are likely to be in a large-weight independent set. We include these vertices in the solution and then further reduce the graph. We show that identifying and removing vertices likely to be in large-weight independent sets opens up the reduction space and remarkably speeds up the computation of large-weight independent sets. Our experimental evaluation indicates that we are able to outperform state-of-the-art algorithms. For example, our algorithm computes the best results among all competing algorithms for 33 out of 35 instances. It can thus be seen as the dominating tool when large-weight independent sets need to be computed in practice.
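The reduce-and-peel idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the greedy weight-to-degree score below is a simple stand-in for the memetic vertex-selection component, and real implementations additionally apply weighted data reductions before each peeling step.

```python
def peel_heavy_independent_set(adj, weight):
    """Greedy reduce-and-peel sketch (illustrative stand-in, not the
    paper's algorithm).

    adj:    dict mapping each vertex to a set of its neighbors
    weight: dict mapping each vertex to its (positive) weight
    Returns an independent set and its total weight.
    """
    # work on a copy so the caller's graph is untouched
    adj = {v: set(ns) for v, ns in adj.items()}
    solution, total = set(), 0
    while adj:
        # pick the vertex that looks most likely to belong to a
        # heavy independent set: high weight, few conflicts
        v = max(adj, key=lambda u: weight[u] / (len(adj[u]) + 1))
        solution.add(v)
        total += weight[v]
        # remove the closed neighborhood of v and further reduce the graph
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for ns in adj.values():
            ns -= removed
    return solution, total
```

On a weighted path 1-2-3 with weights 2, 1, 2, the sketch peels the two endpoints and returns an independent set of total weight 4; the quality of the vertex-selection heuristic is exactly what the memetic component is meant to improve.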