The topology optimization community has regularly employed nonlinear programming (NLP) algorithms from the operations research community. However, these algorithms are implemented in the real vector space $\mathbb{R}^n$ instead of the proper function space where the design variable resides. In this article, we show how the discretization of the volume fraction variable on non-uniform meshes affects the convergence of $\mathbb{R}^n$-based NLP algorithms. We do so by first summarizing the functional analysis tools necessary to understand why convergence is affected by the mesh, namely the distinction between derivative definitions and the role of the mesh-dependent inner product within the NLP algorithm. These tools are then used to make the Globally Convergent Method of Moving Asymptotes (GCMMA), a popular NLP algorithm in the topology optimization community, converge in a mesh-independent fashion when starting from the same initial design. We then benchmark our algorithms on three common problems in topology optimization.
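As a minimal illustration of the mesh dependence at issue (a sketch in assumed notation, not the article's own equations, with $\rho_e$ the element-wise volume fraction, $v_e$ the element volumes, and $\mathbf{M}$ a diagonal mass matrix): for a piecewise-constant design field, the discretized $L^2(\Omega)$ inner product reads
\begin{equation*}
  (\rho,\psi)_{L^2(\Omega)} \;=\; \int_\Omega \rho\,\psi \,\mathrm{d}x \;=\; \sum_{e=1}^{n} v_e\,\rho_e\,\psi_e \;=\; \boldsymbol{\rho}^{\mathsf{T}} \mathbf{M}\,\boldsymbol{\psi},
  \qquad \mathbf{M} = \operatorname{diag}(v_1,\dots,v_n),
\end{equation*}
whereas an $\mathbb{R}^n$-based NLP algorithm implicitly works with the Euclidean product $\boldsymbol{\rho}^{\mathsf{T}}\boldsymbol{\psi}$. The two coincide (up to a constant factor) only when all $v_e$ are equal, i.e. on a uniform mesh, which is why update steps built from the Euclidean product behave differently as the non-uniform mesh is refined.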