Kernel logistic regression (KLR) is a powerful classification method widely applied across diverse domains. In many real-world scenarios, indefinite kernels capture more domain-specific structural information than positive definite kernels. This paper proposes a novel $L_1$-norm regularized indefinite kernel logistic regression (RIKLR) model, which extends the existing IKLR framework by introducing sparsity through an $L_1$-norm penalty. This penalty enhances interpretability and generalization, but it also renders the optimization landscape nonsmooth and nonconvex. To address these challenges, a theoretically grounded and computationally efficient proximal linearized algorithm is developed. Experimental results on multiple benchmark datasets demonstrate the superior performance of the proposed method in terms of both accuracy and sparsity.
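For concreteness, one plausible instantiation of such a model, assuming the RIKLR objective keeps the usual quadratic stabilizer of the IKLR framework it extends (the symbols $K$, $\alpha$, $y_i$, $\mu$, and $\lambda$ are our notation for this sketch, not necessarily the paper's), is

$$
\min_{\alpha \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^{n} \log\!\bigl(1 + \exp\!\bigl(-y_i (K\alpha)_i\bigr)\bigr) \;+\; \frac{\mu}{2}\,\alpha^{\top} K \alpha \;+\; \lambda \lVert \alpha \rVert_1,
$$

where $K$ is the (possibly indefinite) kernel matrix. Under this reading, the quadratic term is nonconvex whenever $K$ has negative eigenvalues, and the $L_1$ term is convex but nonsmooth; a proximal linearized scheme splits the two, linearizing the smooth part at each iterate and applying the closed-form soft-thresholding proximal map of the $L_1$ penalty.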
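The sketch below illustrates a generic proximal linearized (ISTA-style) iteration for the objective written above; it is a minimal illustration under those assumptions, not the paper's actual algorithm, and the function names, step-size rule, and iteration count are ours.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def riklr_proximal_linearized(K, y, mu=1e-2, lam=1e-2, n_iter=500):
    """Proximal linearized iteration for the sketched RIKLR objective.

    K   : (n, n) symmetric kernel matrix, possibly indefinite
    y   : (n,) labels in {-1, +1}
    mu  : weight of the quadratic term (nonconvex when K is indefinite)
    lam : weight of the L1 penalty
    """
    n = K.shape[0]
    alpha = np.zeros(n)
    # Global Lipschitz bound for the gradient of the smooth part:
    # the logistic term contributes ||K||_2^2 / (4n), the quadratic term mu * ||K||_2.
    spec = np.linalg.norm(K, 2)
    eta = 1.0 / (spec**2 / (4.0 * n) + mu * spec)
    for _ in range(n_iter):
        z = K @ alpha
        # gradient of the smooth part: logistic loss + (mu/2) * alpha^T K alpha
        grad = -(K.T @ (y / (1.0 + np.exp(y * z)))) / n + mu * (K @ alpha)
        # linearize the smooth part, then take the exact L1 proximal step
        alpha = soft_threshold(alpha - eta * grad, eta * lam)
    return alpha
```

Because the smooth part is nonconvex for an indefinite $K$, an iteration of this kind is typically guaranteed to converge only to a stationary point, which is the setting in which convergence analyses of proximal linearized methods are usually stated; the soft-thresholding step also makes the iterates sparse by construction, matching the sparsity the abstract reports.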