Genetic Programming (GP), an evolutionary learning technique, has multiple applications in machine learning, such as curve fitting, data modelling, feature selection, and classification. GP has several inherently parallel steps, making it an ideal candidate for GPU-based parallelization. This paper describes a GPU-accelerated, stack-based variant of the generational GP algorithm that can be used for symbolic regression and binary classification. The selection and evaluation steps of the generational GP algorithm are parallelized using CUDA. We represent candidate solution expressions as prefix lists, which enables evaluation using a fixed-length stack in GPU memory. CUDA-based matrix-vector operations are also used to compute the fitness of population programs. We evaluate our algorithm on synthetic datasets for the Pagie Polynomial (ranging in size from $4096$ to $16$ million points), comparing the training times of our algorithm with those of other standard symbolic regression libraries, viz. gplearn, TensorGP, and KarooGP. In addition, using $6$ large-scale regression and classification datasets commonly used for comparing gradient boosting algorithms, we run performance benchmarks on our algorithm and gplearn, profiling training time, test accuracy, and loss. On an NVIDIA DGX-A100 GPU, our algorithm outperforms all of the previously listed frameworks and, in particular, achieves average speedups of $119\times$ and $40\times$ over gplearn on the synthetic and large-scale datasets, respectively.
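The prefix-list encoding described above admits a simple stack evaluator: scanning the token list right-to-left, operands are pushed and operators pop their arguments, so a program of bounded depth needs only a fixed-length stack per thread. The following is a minimal Python sketch of that idea for a single data point; the function name, operator set, and right-to-left scan are illustrative assumptions, not the paper's CUDA implementation.

```python
import operator

# Illustrative binary operator set; the actual function set is an assumption.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def eval_prefix(tokens, x):
    """Evaluate a prefix-encoded expression at input value x.

    Scanning right-to-left, a variable or constant pushes a value and an
    operator pops its two arguments, so the stack never grows beyond the
    expression's depth -- the property that permits a fixed-length stack
    in GPU memory, one per (program, data point) pair.
    """
    stack = []
    for tok in reversed(tokens):
        if tok == 'x':
            stack.append(x)
        elif tok in OPS:
            a = stack.pop()
            b = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

# Prefix list ['*', '+', 'x', '1', 'x'] encodes (x + 1) * x.
print(eval_prefix(['*', '+', 'x', '1', 'x'], 3.0))  # 12.0
```

On a GPU, each thread would run this loop for one program on one data point, with the fixed stack bound making per-thread memory usage known at launch time.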