Finding global optima in high-dimensional optimization problems is extremely challenging, since the number of function evaluations required to sufficiently explore the search space grows exponentially with its dimensionality. Furthermore, multimodal cost functions render local gradient-based search techniques ineffective. To overcome these difficulties, we propose to use autoencoders to trim away uninteresting regions of the search space where global optima are unlikely to be found, exploiting the lower intrinsic dimensionality of certain cost functions; optima are then searched for over lower-dimensional latent spaces. The methodology is tested on benchmark functions and on multiple variations of a structural topology optimization problem, where we show that we can estimate this intrinsic lower dimensionality and, based thereon, obtain the global optimum in the best case or results superior to established optimization procedures in the worst case.
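The following is a minimal, self-contained sketch of the general idea described above, not the authors' implementation: sample the search space, train an autoencoder on the most promising samples, and then search over its lower-dimensional latent space. The benchmark cost function, the linear autoencoder, the latent dimensionality, and the elite-selection rule are all illustrative assumptions introduced here for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumption): a 50-dimensional cost function whose value
# only depends on a 2-dimensional linear subspace, i.e. low intrinsic dimension.
D, d = 50, 2
A = rng.standard_normal((D, d))

def cost(x):
    """High-dimensional cost that varies only along a d-dimensional subspace."""
    z = x @ A / np.sqrt(D)
    return np.sum(z**2, axis=-1)

# --- 1. Sample candidates and keep the most promising ones -------------------
N = 2000
X = rng.uniform(-1.0, 1.0, size=(N, D))
f = cost(X)
elite = X[np.argsort(f)[: N // 5]]           # lowest-cost fifth of the samples

# --- 2. Train a (linear) autoencoder on the elite samples --------------------
# A linear autoencoder stands in here for whatever architecture is actually used.
W_enc = 0.1 * rng.standard_normal((D, d))
W_dec = 0.1 * rng.standard_normal((d, D))
lr = 1e-2
for _ in range(2000):
    Z = elite @ W_enc                        # encode
    R = Z @ W_dec                            # decode (reconstruction)
    err = R - elite
    W_dec -= lr * (Z.T @ err) / len(elite)
    W_enc -= lr * (elite.T @ (err @ W_dec.T)) / len(elite)

# --- 3. Search over the d-dimensional latent space ---------------------------
Z_cand = rng.uniform(-3.0, 3.0, size=(5000, d))
X_cand = Z_cand @ W_dec                      # decode latent points back to R^D
f_cand = cost(X_cand)
best = np.argmin(f_cand)
print("best cost found via latent-space search:", f_cand[best])
print("best cost among original samples:       ", f.min())
```

Because the autoencoder is fit only to low-cost samples, its latent space concentrates on the region of the search space where good solutions lie, so a simple random search over a 2-dimensional latent code can compete with searching the full 50-dimensional space; in practice the latent search could of course use any optimizer rather than random sampling.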