Mixture-of-Experts (MoE) architectures achieve parameter efficiency through conditional computation, yet contemporary designs suffer from two fundamental limitations: structural parameter isolation, which causes catastrophic forgetting, and instruction overfitting, which degrades performance in instruction-free scenarios. We propose CDSP-MoE (Conflict-Driven Subspace Pruning MoE), a framework that addresses these issues through a paradigm shift from isolated expert containers to dynamic expert instantiation within a shared physical subspace. Grounded in the Universal Weight Subspace Hypothesis, CDSP-MoE maintains a super-complete parameter backbone from which logical experts are carved out via learnable topology masks. Unlike prior work that uses gradient conflict for token reassignment or optimization surgery, we leverage it as a structural supervisory signal: a Lagged Gradient Game penalizes interfering connections in the shared manifold, enabling the topology to spontaneously prune conflicting pathways and evolve interpretable modular structures. Experimental results demonstrate that CDSP-MoE achieves robust content-driven routing without human-defined task labels, maintaining semantic specialization even under strict blind-inference protocols where explicit instructions are absent. Code is available at: https://github.com/konodiodaaaaa1/Conflict-Driven-Subspace-Pruning-Mixture-of-Experts
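
The two mechanisms named above (logical experts as learnable topology masks over a shared backbone, and a gradient-conflict penalty on overlapping connections) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the names `SharedSubspaceExperts` and `lagged_conflict_penalty`, the sigmoid relaxation of the masks, the elementwise sign-disagreement measure of conflict, and the coefficient `lam` are not taken from the paper; the authors' actual formulation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedSubspaceExperts(nn.Module):
    """Logical experts carved from one shared weight backbone via
    learnable topology masks (illustrative sketch, not the paper's code)."""

    def __init__(self, d_in: int, d_out: int, n_experts: int):
        super().__init__()
        # Super-complete backbone: one physical weight tensor shared by all experts.
        self.backbone = nn.Parameter(torch.randn(d_out, d_in) * d_in ** -0.5)
        # Per-expert mask logits; sigmoid(logits) softly gates each connection.
        self.mask_logits = nn.Parameter(torch.zeros(n_experts, d_out, d_in))

    def expert_weight(self, e: int) -> torch.Tensor:
        # Soft relaxation of a binary topology mask; a hard threshold with a
        # straight-through estimator would be a natural alternative at inference.
        return self.backbone * torch.sigmoid(self.mask_logits[e])

    def forward(self, x: torch.Tensor, e: int) -> torch.Tensor:
        # Instantiate logical expert e from the shared subspace and apply it.
        return F.linear(x, self.expert_weight(e))


def lagged_conflict_penalty(model: SharedSubspaceExperts,
                            lagged_grads: torch.Tensor,
                            current_grads: torch.Tensor,
                            lam: float = 1e-3) -> torch.Tensor:
    """Structural penalty in the spirit of a lagged gradient game (assumed form):
    where one expert's cached (lagged) backbone gradient and another expert's
    current gradient point in opposing directions, discourage both topology
    masks from keeping that shared connection active.

    lagged_grads / current_grads: (n_experts, d_out, d_in) tensors of
    per-expert gradients of the task losses w.r.t. the shared backbone.
    """
    n_experts = model.mask_logits.shape[0]
    penalty = model.backbone.new_zeros(())
    for a in range(n_experts):
        for b in range(n_experts):
            if a == b:
                continue
            # Elementwise conflict strength: positive where the signs disagree.
            conflict = F.relu(-lagged_grads[a] * current_grads[b]).detach()
            # Overlap of the two logical experts in the shared manifold.
            overlap = (torch.sigmoid(model.mask_logits[a])
                       * torch.sigmoid(model.mask_logits[b]))
            penalty = penalty + (conflict * overlap).sum()
    return lam * penalty
```

In a training loop, `current_grads[e]` could be obtained via `torch.autograd.grad(loss_e, model.backbone, retain_graph=True)`, with `lagged_grads` being the same tensors cached from the previous optimization step; using the stale snapshot for one side of the product is what makes the interaction "lagged" in this sketch. Because the penalty multiplies conflict strength by mask overlap, minimizing it pushes the mask logits of at least one expert toward zero on conflicting connections, which is one concrete way a topology could "spontaneously prune conflicting pathways" as the abstract describes.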


