In machine learning, traditional regularization methods typically add regularization terms directly to the loss function. This paper introduces the "Lai loss", a novel loss design that integrates a regularization term (a gradient component) into the loss function through a straightforward geometric idea. By penalizing the model's gradient vectors through the loss itself, this design controls the model's smoothness and offers the dual benefit of reducing overfitting while avoiding underfitting. We further propose a random sampling method that addresses the challenges of applying this design under large-sample conditions. Preliminary experiments on publicly available Kaggle datasets demonstrate that the Lai loss can control the model's smoothness while maintaining high accuracy.
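The general idea described above — a loss that also penalizes the model's gradient vectors, with the gradient term evaluated on a random subsample of points — can be sketched as follows. This is an illustrative sketch only, not the exact Lai loss formulation from the paper; the linear model, the penalty weight `lam`, and the subsampling fraction `sample_frac` are all assumptions introduced for demonstration.

```python
import numpy as np

def model(w, x):
    # Simple linear model f(x) = x . w, used purely for illustration.
    return x @ w

def input_gradient(w, x):
    # For a linear model the input gradient is w for every sample;
    # written explicitly to mirror the general (autodiff) case.
    return np.tile(w, (x.shape[0], 1))

def gradient_penalized_loss(w, x, y, lam=0.1, sample_frac=0.5, rng=None):
    """Base MSE plus a penalty on input-gradient norms.

    Illustrative sketch of a gradient-penalizing loss; `lam` and
    `sample_frac` are hypothetical knobs. The gradient term is computed
    on a random subsample of the batch, mirroring the random-sampling
    strategy mentioned for large-sample settings.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    preds = model(w, x)
    mse = np.mean((preds - y) ** 2)
    # Randomly subsample points for the gradient penalty.
    n = x.shape[0]
    idx = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    grads = input_gradient(w, x[idx])
    penalty = np.mean(np.sum(grads ** 2, axis=1))
    return mse + lam * penalty
```

Tuning `lam` trades data fit against smoothness: larger values push the model toward flatter functions (less overfitting), while `lam = 0` recovers the plain base loss.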