We consider partially-specified optimization problems where the goal is to actively, but efficiently, acquire missing information about the problem in order to solve it. An algorithm designer wishes to solve a linear program (LP), $\max \mathbf{c}^T \mathbf{x}$ s.t. $\mathbf{A}\mathbf{x} \leq \mathbf{b}, \mathbf{x} \ge \mathbf{0}$, but does not initially know some of the parameters. The algorithm can iteratively choose an unknown parameter and gather information in the form of a noisy sample centered at the parameter's (unknown) value. The goal is to find an approximately feasible and optimal solution to the underlying LP with high probability while drawing a small number of samples. We focus on two cases. (1) When the parameters $\mathbf{c}$ of the objective are initially unknown, we take an information-theoretic approach and give roughly matching upper and lower sample complexity bounds, with an (inefficient) successive-elimination algorithm. (2) When the parameters $\mathbf{b}$ of the constraints are initially unknown, we propose an efficient algorithm combining techniques from the ellipsoid method for LP and confidence-bound approaches from bandit algorithms. The algorithm adaptively gathers information about constraints only as needed in order to make progress. We give sample complexity bounds for the algorithm and demonstrate its improvement over a naive approach via simulation.
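To make the sampling model concrete, the following is a minimal sketch of case (1), where the objective coefficients $\mathbf{c}$ are unknown and each query returns a noisy sample centered at the true value. The true coefficients, the Gaussian noise, and the fixed per-coordinate sample budget are all illustrative assumptions, not the paper's algorithm; a Hoeffding-style radius stands in for the confidence bounds.

```python
import math
import random

random.seed(0)

# Hypothetical true objective coefficients c, unknown to the algorithm.
true_c = [0.7, 0.3, 0.5]

def sample(i):
    """One noisy sample centered at the unknown c_i (Gaussian noise is an assumption)."""
    return true_c[i] + random.gauss(0.0, 0.1)

# Draw n samples per coordinate: the empirical mean estimates c_i, and a
# Hoeffding-style radius sqrt(log(2/delta) / (2n)) bounds the error with
# probability at least 1 - delta per coordinate.
n, delta = 400, 0.05
radius = math.sqrt(math.log(2 / delta) / (2 * n))
est = [sum(sample(i) for _ in range(n)) / n for i in range(len(true_c))]

# With high probability each estimate is within `radius` of c_i, so solving
# the LP with the estimated objective gives an approximately optimal solution.
print(est, radius)
```

An adaptive method such as successive elimination would instead allocate samples unevenly, spending more on coordinates whose confidence intervals still overlap the decision boundary.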