Evolutionary algorithms (EAs) are a class of nature-inspired metaheuristics that are widely applied to practical optimization problems. In such problems, objective evaluations are usually inaccurate, because noise is almost inevitable in the real world, and weakening the negative effect of noise is a crucial issue. Sampling is a popular strategy: it evaluates the objective several times and uses the mean of these evaluation results as an estimate of the objective value. In this work, we introduce a novel sampling method, median sampling, into EAs, and illustrate its properties and usefulness theoretically by solving OneMax, the problem of maximizing the number of 1-bits in a binary string. Instead of the mean, median sampling uses the median of the evaluation results as the estimate. Through rigorous theoretical analysis of OneMax under the commonly used one-bit noise, we show that median sampling reduces the expected runtime exponentially. Furthermore, through two special noise models, we show that when the 2-quantile of the noisy fitness increases with the true fitness, median sampling can be better than mean sampling; otherwise, it may fail while mean sampling can still perform better. These results may guide the proper use of median sampling in practical applications.
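To make the setting concrete, the following is a minimal sketch (not the paper's exact algorithm or analysis) of a (1+1)-EA on OneMax under one-bit noise, where the offspring is accepted based on an estimated fitness obtained either by median sampling or by mean sampling. The problem size n, sample size k, noise probability, and evaluation budget are illustrative assumptions.

```python
import random
import statistics

def onemax(x):
    # True fitness: number of 1-bits in the string.
    return sum(x)

def noisy_onemax(x, p_noise=0.5):
    # One-bit noise: with probability p_noise, a uniformly random
    # bit is flipped before the fitness is evaluated.
    y = list(x)
    if random.random() < p_noise:
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
    return onemax(y)

def estimate(x, k, use_median=True):
    # Sample the noisy fitness k times; aggregate by median or mean.
    samples = [noisy_onemax(x) for _ in range(k)]
    return statistics.median(samples) if use_median else statistics.fmean(samples)

def one_plus_one_ea(n=30, k=11, use_median=True, max_evals=200_000):
    # (1+1)-EA: flip each bit independently with probability 1/n,
    # accept the offspring if its estimated fitness is not worse.
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while onemax(x) < n and evals < max_evals:
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if estimate(y, k, use_median) >= estimate(x, k, use_median):
            x = y
        evals += 2 * k  # both parent and offspring are re-sampled
    return evals

if __name__ == "__main__":
    random.seed(0)
    print("median sampling evaluations:", one_plus_one_ea(use_median=True))
    print("mean sampling evaluations:  ", one_plus_one_ea(use_median=False))
```

Under one-bit noise the noisy fitness differs from the true fitness by at most one, and its median tracks the true fitness once k is moderately large, which is the intuition behind the exponential runtime reduction claimed above; the sketch only illustrates the mechanism, not the proof.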