Low-discrepancy point sets have been widely used as a tool to approximate continuous objects by discrete ones in numerical processes, for example in numerical integration. Following a century of research on the topic, it is still unclear how low the discrepancy of point sets can go; in other words, how regularly distributed points can be in a given space. Recent insights using optimization and machine learning techniques have led to substantial improvements in the construction of low-discrepancy point sets, resulting in configurations with much lower discrepancy values than previously known. Building on these optimal constructions, we present a simple way to obtain $L_{\infty}$-optimized placements of points that follow the same relative order as an (arbitrary) input set. Applying this approach to point sets in dimensions 2 and 3 for up to 400 and 50 points, respectively, we obtain point sets whose $L_{\infty}$ star discrepancies are up to 25% smaller than those of the current-best sets, and around 50% better than classical constructions such as the Fibonacci set.
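For readers unfamiliar with the quantity being optimized, the sketch below computes the $L_{\infty}$ star discrepancy of a small 2D point set by brute-force enumeration of the critical anchored boxes; this is the standard definition of the measure, not the optimization method proposed in the paper, and the function name and $O(n^3)$ approach are illustrative choices only.

```python
import numpy as np

def linf_star_discrepancy_2d(points):
    """Exact L_inf star discrepancy of a point set in [0,1]^2.

    D*(P) = sup over anchored boxes [0,x) x [0,y) of
            | (#points in box)/n - x*y |.
    The supremum is attained (in the limit) at boxes whose upper corners
    have coordinates taken from the point coordinates or 1, so it suffices
    to scan that grid, comparing the volume against both the open and the
    closed point counts. Brute force, fine for the small sets (<= 400
    points in 2D) discussed in the abstract.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    xs = np.unique(np.append(pts[:, 0], 1.0))
    ys = np.unique(np.append(pts[:, 1], 1.0))
    worst = 0.0
    for x in xs:
        for y in ys:
            vol = x * y
            open_count = np.sum((pts[:, 0] < x) & (pts[:, 1] < y)) / n
            closed_count = np.sum((pts[:, 0] <= x) & (pts[:, 1] <= y)) / n
            worst = max(worst, vol - open_count, closed_count - vol)
    return worst

# Example: a tiny regular grid versus random points.
rng = np.random.default_rng(0)
grid = np.array([[(i + 0.5) / 4, (j + 0.5) / 4] for i in range(4) for j in range(4)])
print(linf_star_discrepancy_2d(grid))              # structured set
print(linf_star_discrepancy_2d(rng.random((16, 2))))  # random set, typically worse
```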