We study the balanced $k$-way hypergraph partitioning problem, with a special focus on its practical applications to manycore scheduling. Given a hypergraph on $n$ nodes, our goal is to partition the node set into $k$ parts of size at most $(1+\epsilon)\cdot \frac{n}{k}$ each, while minimizing the cost of the partitioning, defined as the number of cut hyperedges, possibly also weighted by the number of parts they intersect. We show that this problem cannot be approximated to within an $n^{1/\text{poly} \log\log n}$ factor of the optimal solution in polynomial time if the Exponential Time Hypothesis holds, even for hypergraphs of maximum degree 2. We also study the hardness of the partitioning problem from a parameterized complexity perspective, as well as in the more general case where we have multiple balance constraints. Furthermore, we consider two extensions of the partitioning problem that are motivated by practical considerations. First, we introduce the concept of hyperDAGs to model precedence-constrained computations as hypergraphs, and we analyze the adaptation of the balanced partitioning problem to this case. Second, we study the hierarchical partitioning problem to model hierarchical NUMA (non-uniform memory access) effects in modern computer architectures, and we show that ignoring this hierarchical aspect of the communication cost can yield significantly weaker solutions.
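To make the objective concrete, the following is a minimal sketch (not part of the paper) of the two cost functions mentioned above, the plain cut metric and its connectivity-weighted variant, together with the balance constraint; the function names and the exact form of the weighting (here $\lambda_e - 1$ per cut hyperedge, a common convention) are illustrative assumptions.

```python
from collections import defaultdict

def partition_cost(hyperedges, part, weighted=True):
    """Cost of a k-way partition of a hypergraph.

    hyperedges: iterable of node collections; part: dict node -> block id.
    Unweighted: number of cut hyperedges (those spanning >= 2 blocks).
    Weighted: each cut hyperedge e contributes (lambda_e - 1), where
    lambda_e is the number of blocks it intersects (a common convention;
    the exact weighting in the text may differ).
    """
    cost = 0
    for e in hyperedges:
        blocks = {part[v] for v in e}  # lambda_e = len(blocks)
        if len(blocks) > 1:
            cost += (len(blocks) - 1) if weighted else 1
    return cost

def is_balanced(part, k, n, eps):
    """Check that every block has at most (1 + eps) * n / k nodes."""
    sizes = defaultdict(int)
    for b in part.values():
        sizes[b] += 1
    return all(s <= (1 + eps) * n / k for s in sizes.values())
```

For example, with hyperedges `[{0,1}, {1,2,3}, {2,3}]` and the partition `{0:0, 1:0, 2:1, 3:1}`, only the hyperedge `{1,2,3}` is cut, intersecting two blocks, so both metrics evaluate to 1, and the partition is perfectly balanced for $k=2$, $\epsilon=0$.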