Bayesian optimization is effective for optimizing parameters in black-box problems. However, it does not perform well for high-dimensional parameters within a limited number of trials. Parameters can be explored efficiently by nonlinearly embedding them into a low-dimensional space; however, such embeddings cannot account for constraints. We proposed decomposing the parameters by introducing disentangled representation learning into the nonlinear embedding, so that both known equality constraints and unknown inequality constraints can be handled in high-dimensional Bayesian optimization. As a usage scenario, we applied the proposed method to a powder weighing task. The experimental results show that the proposed method respects the constraints and reduces the number of trials by approximately 66% compared with manual parameter tuning.
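The core idea of running Bayesian optimization in a low-dimensional embedding can be sketched as follows. This is a minimal illustration only: the random linear embedding `A`, the toy sphere objective, the dimensions, and the candidate-sampling acquisition step are all assumptions for the sketch; the method in the abstract instead uses a learned nonlinear embedding with disentangled representations and constraint handling, which this sketch omits.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

D, d = 20, 2  # high-dimensional parameter space and low-dimensional embedding (illustrative sizes)
A = rng.normal(size=(D, d))  # random linear embedding, a stand-in for a learned nonlinear one


def objective(x):
    # Toy black-box objective (hypothetical): maximize the negative squared
    # distance to 0.5 in the high-dimensional space.
    return -np.sum((x - 0.5) ** 2)


def embed_up(z):
    # Map a low-dimensional point z back to the high-dimensional box [0, 1]^D.
    return np.clip(A @ z + 0.5, 0.0, 1.0)


def expected_improvement(mu, sigma, best):
    # Standard expected-improvement acquisition for maximization.
    sigma = np.maximum(sigma, 1e-9)
    imp = mu - best
    zscore = imp / sigma
    return imp * norm.cdf(zscore) + sigma * norm.pdf(zscore)


# Initial design in the low-dimensional space.
Z = rng.uniform(-1, 1, size=(5, d))
y = np.array([objective(embed_up(z)) for z in Z])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(Z, y)  # surrogate model over the embedding, not the full space
    cand = rng.uniform(-1, 1, size=(256, d))
    mu, sigma = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(embed_up(z_next)))

best_y = float(y.max())
```

Because the surrogate model and acquisition search run over the 2-dimensional `z` rather than the 20-dimensional `x`, each iteration stays cheap; the trade-off, as the abstract notes, is that a plain embedding like this cannot express equality or inequality constraints on `x`.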