We explain how to use the Kolmogorov Superposition Theorem (KST) to break the curse of dimensionality when approximating a dense class of multivariate continuous functions. We first show that there is a class of functions in $C([0,1]^d)$, called $K$-Lipschitz continuous functions, which can be approximated by a special ReLU neural network with two hidden layers at a dimension-independent approximation rate $O(n^{-1})$, with an approximation constant that grows quadratically in $d$. The number of parameters used in such a neural network approximation equals $(6d+2)n$. Next, we introduce KB-splines by replacing the K-outer function with linear B-splines, and we smooth the KB-splines to obtain the so-called LKB-splines, which serve as the basis for approximation. Our numerical evidence shows that the curse of dimensionality is broken in the following sense: when the standard discrete least squares (DLS) method is used to approximate a continuous function, there exists a pivotal set of points in $[0,1]^d$ of size at most $O(nd)$ such that the root mean square error (RMSE) of the DLS based on the pivotal set is comparable to the RMSE of the DLS based on the original point set of size $O(n^d)$. In addition, by using the matrix cross approximation technique, the number of LKB-splines used for the approximation equals the size of the pivotal data set. Therefore, neither a large number of basis functions nor a large number of function values is needed to approximate a high-dimensional continuous function $f$.
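For reference, the classical KST writes every $f\in C([0,1]^d)$ as $f(x_1,\dots,x_d)=\sum_{q=0}^{2d}\Phi_q\big(\sum_{p=1}^{d}\phi_{q,p}(x_p)\big)$ with continuous outer functions $\Phi_q$ and inner functions $\phi_{q,p}$; the "K-outer function" above refers to the outer function in such a representation. The following is not part of the paper's abstract: it is a minimal numerical sketch (assuming NumPy/SciPy) of the pivotal-point idea, using column-pivoted QR as a simple stand-in for the paper's matrix cross approximation and random cosine features as a stand-in for the LKB-spline basis. All names (`design_matrix`, `f`, `pivotal`, `n_basis`, etc.) are hypothetical and chosen only for illustration.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
d, n_basis, N_grid = 4, 50, 4000        # dimension, stand-in basis size, dense point-set size

# Stand-in basis: random cosine features as a placeholder for the LKB-splines
# (the paper's LKB-spline construction is not reproduced here).
W = rng.standard_normal((n_basis, d))
b = rng.uniform(0.0, 2.0 * np.pi, n_basis)

def design_matrix(X):
    """Evaluate the stand-in basis at the points X (shape (m, d)) -> (m, n_basis)."""
    return np.cos(X @ W.T + b)

def f(X):
    """A smooth test function on [0,1]^d."""
    return np.sin(X.sum(axis=1))

X_grid = rng.uniform(0.0, 1.0, (N_grid, d))   # the "original" dense sample set
A = design_matrix(X_grid)

# Pivotal-point selection: column-pivoted QR on A^T picks n_basis rows (points)
# whose submatrix is well conditioned; this is a simple, standard substitute for
# the matrix cross approximation used in the paper.
_, _, piv = qr(A.T, pivoting=True)
pivotal = piv[:n_basis]

# Discrete least squares using only the small pivotal point set ...
c_piv, *_ = np.linalg.lstsq(A[pivotal], f(X_grid[pivotal]), rcond=None)
# ... versus DLS using all N_grid points.
c_full, *_ = np.linalg.lstsq(A, f(X_grid), rcond=None)

rmse = lambda c: np.sqrt(np.mean((A @ c - f(X_grid)) ** 2))
print(f"RMSE with {n_basis} pivotal points: {rmse(c_piv):.3e}")
print(f"RMSE with all {N_grid} points:      {rmse(c_full):.3e}")
```

The sketch only compares the RMSE of the least squares fit built from the selected points against the fit built from the full point set; the paper's claim is that, with the actual LKB-splines and its pivotal-set construction, the two errors are of comparable size while the pivotal set has only $O(nd)$ points.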