Simulation optimization (SO) is frequently challenged by noisy evaluations, high computational costs, and complex, multimodal search landscapes. This paper introduces Tabu-Enhanced Simulation Optimization (TESO), a novel metaheuristic framework that integrates adaptive search with memory-based strategies. TESO leverages a short-term Tabu List to prevent cycling and encourage diversification, and a long-term Elite Memory to guide intensification by perturbing high-performing solutions. An aspiration criterion allows overriding tabu restrictions for exceptional candidates. This combination facilitates a dynamic balance between exploration and exploitation in stochastic environments. We demonstrate TESO's effectiveness and reliability on a queue optimization problem, showing improved performance over benchmarks and validating the contribution of its memory components. Source code and data are available at: https://github.com/bulentsoykan/TESO.
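The abstract's three memory mechanisms can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hypothetical toy showing how a short-term tabu list, a long-term elite memory, and an aspiration criterion might interact in a noisy one-dimensional search (the objective, step sizes, and memory sizes below are all illustrative assumptions):

```python
import random
from collections import deque

def noisy_objective(x, rng):
    # Hypothetical noisy evaluation: a quadratic with Gaussian noise,
    # standing in for an expensive stochastic simulation.
    return (x - 3) ** 2 + rng.gauss(0, 0.1)

def teso_sketch(iterations=200, tabu_size=5, elite_size=3, seed=0):
    rng = random.Random(seed)
    tabu = deque(maxlen=tabu_size)   # short-term memory: recently visited points
    elite = []                       # long-term memory: best (cost, solution) pairs
    current = rng.randint(-10, 10)
    best, best_cost = current, noisy_objective(current, rng)
    for _ in range(iterations):
        # Intensification: occasionally perturb an elite solution instead
        # of the current one; otherwise explore the current neighborhood.
        if elite and rng.random() < 0.2:
            base = rng.choice(elite)[1]
        else:
            base = current
        candidates = []
        for n in (base - 2, base - 1, base + 1, base + 2):
            c = noisy_objective(n, rng)
            # Aspiration criterion: a tabu move is still admissible
            # if it beats the best cost seen so far.
            if n not in tabu or c < best_cost:
                candidates.append((c, n))
        if not candidates:
            continue
        cost, move = min(candidates)
        tabu.append(move)            # mark the move tabu to prevent cycling
        current = move
        if cost < best_cost:
            best, best_cost = move, cost
        elite.append((cost, move))   # update long-term elite memory
        elite.sort()
        del elite[elite_size:]
    return best, best_cost
```

On this toy objective the loop should settle near the optimum at x = 3; the exact trajectory depends on the noise realization and the (assumed) perturbation probability.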