Integrating functions on discrete domains into neural networks is key to developing their capability to reason about discrete objects. However, discrete domains are (1) not naturally amenable to gradient-based optimization, and (2) incompatible with deep learning architectures that rely on representations in high-dimensional vector spaces. In this work, we address both difficulties for set functions, which capture many important discrete problems. First, we develop a framework for extending set functions onto low-dimensional continuous domains, where many extensions are naturally defined. Our framework subsumes many well-known extensions as special cases. Second, to avoid undesirable low-dimensional neural network bottlenecks, we convert low-dimensional extensions into representations in high-dimensional spaces, taking inspiration from the success of semidefinite programs for combinatorial optimization. Empirically, we observe benefits of our extensions for unsupervised neural combinatorial optimization, in particular with high-dimensional representations.
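As a concrete illustration of what a continuous extension of a set function looks like (the Lovász extension is a standard example of the "well-known extensions" such a framework would be expected to subsume; the abstract itself does not name it): given $f : 2^{[n]} \to \mathbb{R}$ with $f(\emptyset) = 0$ and a point $x \in [0,1]^n$ with coordinates ordered so that $x_{\sigma(1)} \ge \cdots \ge x_{\sigma(n)}$,
\[
f^{L}(x) = \sum_{i=1}^{n} x_{\sigma(i)} \bigl[ f(S_i) - f(S_{i-1}) \bigr], \qquad S_i = \{\sigma(1), \ldots, \sigma(i)\}, \quad S_0 = \emptyset.
\]
Equivalently, $f^{L}(x) = \mathbb{E}_{\theta \sim \mathcal{U}[0,1]}\bigl[ f(\{ i : x_i \ge \theta \}) \bigr]$: the extension averages $f$ over the sets obtained by thresholding $x$, agrees with $f$ on the vertices $\{0,1\}^n$ of the cube, and is convex precisely when $f$ is submodular.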
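For context on the semidefinite-programming inspiration mentioned above (the abstract does not specify the exact lifting, so this is offered only as the canonical reference point): the Goemans-Williamson relaxation of Max-Cut replaces scalar labels $x_i \in \{-1, +1\}$ with unit vectors $v_i \in \mathbb{R}^n$,
\[
\max_{x \in \{\pm 1\}^n} \sum_{(i,j) \in E} w_{ij} \, \frac{1 - x_i x_j}{2} \quad \longrightarrow \quad \max_{\lVert v_i \rVert = 1} \sum_{(i,j) \in E} w_{ij} \, \frac{1 - \langle v_i, v_j \rangle}{2},
\]
trading a one-dimensional discrete variable for a high-dimensional vector representation, the same kind of representation in which deep learning architectures natively operate.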