Recent advances in Large Language Models (LLMs) have opened new perspectives for automation in optimization. While several studies have explored how LLMs can generate or solve optimization models, far less is understood about what these models actually learn regarding problem structure or algorithmic behavior. This study investigates how LLMs internally represent combinatorial optimization problems and whether such representations can support downstream decision tasks. We adopt a twofold methodology, combining direct querying, which assesses the models' capacity to explicitly extract instance features, with probing analyses, which examine whether such information is implicitly encoded in their hidden layers. The probing framework is further extended to a per-instance algorithm selection task, evaluating whether LLM-derived representations can predict the best-performing solver. Experiments span four benchmark problems and three instance representations. Results show that LLMs exhibit a moderate ability to recover feature information from problem instances, whether through direct querying or probing. Notably, the predictive power of LLM hidden-layer representations proves comparable to that achieved through traditional feature extraction, suggesting that LLMs capture meaningful structural information relevant to optimization performance.
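To make the probing setup concrete, the following is a minimal sketch of how a linear probe can be trained on LLM hidden-layer representations to predict a per-instance label such as the best-performing solver. The model name, layer index, mean-pooling strategy, probe choice (scikit-learn's LogisticRegression), and the toy instances are illustrative assumptions, not details taken from the study.

```python
# Minimal probing sketch (illustrative; not the study's exact pipeline).
# Assumptions: an open-weight LLM accessed via HuggingFace transformers,
# mean pooling over tokens, a fixed layer index, and a logistic-regression
# probe from scikit-learn. Instance texts and labels below are toy data.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # hypothetical model choice
LAYER = 16                              # hypothetical hidden layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def embed(instance_text: str) -> torch.Tensor:
    """Mean-pool the hidden states of one layer into a fixed-size
    vector representing a single optimization-problem instance."""
    inputs = tokenizer(instance_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # hidden_states is a tuple: embedding layer + one tensor per layer
    return out.hidden_states[LAYER].mean(dim=1).squeeze(0)

# Toy data: textual instance encodings and, per instance, the label the
# probe targets (here, the hypothetically best-performing solver).
instances = [
    "TSP instance: 5 cities at (0,0) (3,4) (6,0) (2,7) (9,1)",
    "Knapsack instance: capacity 50, items (w=10,v=60) (w=20,v=100)",
]
labels = ["solver_A", "solver_B"]

X = torch.stack([embed(t) for t in instances]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
# A real evaluation would score on a held-out split, not the training data.
print("probe accuracy (train):", probe.score(X, labels))
```

A linear probe is deliberately weak: if it succeeds, the predictive information must already be linearly accessible in the frozen hidden representations rather than introduced by the probe itself.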