Constructing abstract interpreters with global soundness guarantees remains a major obstacle in abstract interpretation. We investigate whether modern LLMs can reduce this burden by using them to synthesize sound, non-trivial abstract interpreters across multiple abstract domains in the setting of neural network verification. We formulate synthesis as a constrained optimization problem and introduce a novel, mathematically grounded cost function that measures unsoundness under strict syntactic and semantic constraints. Building on this formulation, we develop a unified framework that combines LLM-based generation with syntactic and semantic validation and a quantitative, cost-guided feedback mechanism. Empirical results demonstrate that our framework not only matches the quality of handcrafted transformers but, more importantly, discovers sound, high-precision transformers for complex nonlinear operators that are absent from the existing literature.
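To make the notion of a quantitative unsoundness cost concrete, the sketch below shows one simple way such a cost could be estimated for an interval-domain abstract transformer: sample concrete inputs from the abstract input, and penalize concrete outputs that escape the abstract output. This is an illustrative assumption on our part, not the paper's actual cost function; the names `abstract_tanh` and `unsoundness_cost` are hypothetical.

```python
import math
import random

def abstract_tanh(lo, hi):
    """Interval abstract transformer for tanh.
    Sound because tanh is monotone increasing: the exact image of
    [lo, hi] under tanh is [tanh(lo), tanh(hi)]."""
    return math.tanh(lo), math.tanh(hi)

def unsoundness_cost(transformer, f, lo, hi, samples=10_000, seed=0):
    """Sampling-based unsoundness estimate (hypothetical): average
    distance by which concrete outputs f(x), x in [lo, hi], fall
    outside the abstract output interval. A zero estimate is
    necessary but not sufficient for soundness."""
    out_lo, out_hi = transformer(lo, hi)
    rng = random.Random(seed)
    cost = 0.0
    for _ in range(samples):
        y = f(lo + (hi - lo) * rng.random())
        cost += max(0.0, out_lo - y) + max(0.0, y - out_hi)
    return cost / samples

# The sound transformer incurs zero estimated cost...
print(unsoundness_cost(abstract_tanh, math.tanh, -2.0, 1.0))  # 0.0
# ...while a transformer that ignores negative inputs is penalized.
bad = lambda lo, hi: (0.0, math.tanh(hi))
print(unsoundness_cost(bad, math.tanh, -2.0, 1.0) > 0)  # True
```

Such a score gives a cost-guided feedback loop a gradient of sorts: candidate transformers with nonzero estimated cost can be rejected or revised before any formal soundness check is attempted.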