Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly-optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing as input a random sample of parameters. This data-independent discretization, however, might miss pockets of nearly-optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise constant function of its parameters. Prior research has exhibited this structure in domains from integer programming to clustering.
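To make the structural assumption and the failure mode of data-independent discretization concrete, here is a minimal Python sketch. The breakpoints, per-piece costs, and the narrow "pocket" piece are hypothetical toy values, not taken from the paper; the sketch only illustrates that a piecewise constant performance function collapses an infinite parameter space into finitely many equivalence classes, while a uniform random sample can miss an arbitrarily small optimal piece.

```python
import numpy as np

# Hypothetical toy setup: an algorithm whose cost is a piecewise constant
# function of one real parameter p in [0, 1). The breakpoints and per-piece
# costs are made up for illustration; the piece [0.45, 0.450001) is an
# arbitrarily narrow "pocket" containing the only near-optimal parameters.
BREAKPOINTS = np.array([0.0, 0.2, 0.45, 0.450001, 0.7, 1.0])
COSTS = np.array([5.0, 4.0, 1.0, 8.0, 3.0])  # cost on each piece

def cost(p: float) -> float:
    """Piecewise constant cost: constant between consecutive breakpoints."""
    i = int(np.searchsorted(BREAKPOINTS, p, side="right")) - 1
    return float(COSTS[min(i, len(COSTS) - 1)])

# Data-independent discretization: a uniform sample of 100 parameters misses
# the width-1e-6 pocket with probability (1 - 1e-6)^100 ~ 0.9999.
rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=100)
print("best sampled cost:", min(cost(p) for p in sample))  # almost certainly 3.0

# Exploiting the piecewise constant structure instead: one representative
# per piece (here, each interval's midpoint) is a finite set that is
# guaranteed to contain an optimal parameter.
reps = (BREAKPOINTS[:-1] + BREAKPOINTS[1:]) / 2
print("best representative cost:", min(cost(p) for p in reps))  # 1.0
```

Under this structure, learning a finite set of promising parameters amounts to identifying the pieces themselves, rather than hoping a data-independent sample happens to land in each one.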