Premature semantic collapse -- the forced early commitment to a single meaning -- remains a core architectural limitation of current language models. Softmax-driven competition and greedy decoding cause models to discard valid interpretations before sufficient context is available, resulting in brittle reasoning and context failures. We introduce Non-Resolution Reasoning (NRR), a general computational framework that preserves semantic ambiguity during inference and performs resolution only when explicitly required. NRR integrates three components: (1) Multi-Vector Embeddings that maintain multiple viable interpretations per token, (2) Non-Collapsing Attention that prevents winner-take-all dynamics across layers, and (3) Contextual Identity Tracking (CIT), which assigns context-specific identities to recurring entities (e.g., distinguishing "Dr. Smith the cardiologist" from "Dr. Smith the researcher"). These mechanisms are unified by an external Resolution Operator $\rho$ that makes semantic commitment explicit, controllable, and task-dependent. Unlike standard architectures, NRR separates representation from resolution, allowing a single model to shift between creative, factual, and ambiguity-preserving reasoning without retraining. A synthetic evaluation demonstrates NRR's ability to preserve ambiguity and track context: CIT-enhanced models achieve 90.9% accuracy on out-of-distribution identity-shift tasks, compared to 9.1% for transformer baselines. NRR provides a principled alternative to premature collapse, reframing ambiguity as an explicit representational state rather than a failure mode. The question is not whether AI should resolve ambiguity, but when, how, and under whose control.
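The separation of representation from resolution described above can be illustrated with a minimal sketch. All names here (`senses`, `update_weights`, `rho`) are hypothetical and not from the paper: each ambiguous token keeps several candidate interpretation vectors alive, context evidence updates their weights softly (avoiding winner-take-all collapse), and an explicit resolution operator $\rho$ commits to a single meaning only when invoked.

```python
import numpy as np

# Hypothetical sketch of NRR's core idea: multi-vector embeddings plus
# an explicit, external resolution operator. Not the paper's implementation.

rng = np.random.default_rng(0)
K, D = 3, 8  # K candidate interpretations per token, embedding dimension D

# Multi-vector embedding for one ambiguous token (e.g., "bank"):
# each row is a candidate sense; none has been discarded.
senses = rng.normal(size=(K, D))
weights = np.full(K, 1.0 / K)  # uniform: no sense has "won" yet

def update_weights(weights, context_scores, temperature=2.0):
    """Soft evidence update that avoids premature collapse:
    a high temperature keeps competing senses viable."""
    logits = np.log(weights) + context_scores / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

def rho(senses, weights):
    """Resolution operator: an explicit, controllable commitment
    to a single interpretation, applied only on demand."""
    return senses[int(np.argmax(weights))]

# Ambiguity is preserved across context updates...
weights = update_weights(weights, context_scores=np.array([0.5, 0.3, 0.1]))
assert (weights > 0.01).all()  # every interpretation remains viable

# ...and resolved only when the task requires it.
committed = rho(senses, weights)
assert committed.shape == (D,)
```

The design point is that resolution is a separate, caller-controlled step: a creative task can skip `rho` entirely and operate on the full weighted set of senses, while a factual task can invoke it early.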