The memory of contemporary Large Language Models is bound by a physical paradox: as they learn, they fill up. The linear, O(N) accumulation of Key-Value states treats context as a warehouse of static artifacts, eventually forcing a destructive choice between amnesia and latency. We challenge this discrete orthodoxy, proposing that long-term memory is not the storage of items but the persistence of a trajectory. We introduce Phonetic Trajectory Memory (PTM), a neuro-symbolic architecture that encodes language not as a sequence of tensors but as a continuous path on an ergodic manifold governed by irrational rotation matrices. By decoupling navigation (an invariant O(1) geometric signal) from reconstruction (a probabilistic generative act), PTM achieves a compression ratio of more than 3,000x relative to dense KV caches. We demonstrate that retrieval becomes a process of resonance: the phonetic trace stabilizes the model against hallucination via a "Signal Consensus" mechanism, securing factual accuracy of up to 92%. While this aggressive abstraction alters generative texture, it unlocks constant access latency (approximately 34 ms) independent of context depth. Our results suggest that infinite context does not require infinite silicon; it requires treating memory not as data to be stored, but as a reconstructive process acting on a conserved, undying physical signal.
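To make the O(1) navigation claim concrete, the minimal sketch below illustrates the underlying geometric fact: for a rotation matrix, the t-th step of the trajectory has a closed form, R(θ)^t = R(tθ), so any point on the path is reachable in constant time regardless of depth, and an irrational angle makes the orbit ergodic (dense and non-repeating). The function names and the golden-ratio angle are our illustrative choices, not PTM's published construction.

```python
import numpy as np

# Illustrative sketch only; names and constants are hypothetical,
# not PTM's actual encoding.
GOLDEN = (np.sqrt(5) - 1) / 2  # irrational fraction of a turn per step

def rotation(theta: float) -> np.ndarray:
    """2x2 rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def trajectory_point(t: int, theta: float = 2 * np.pi * GOLDEN) -> np.ndarray:
    # Closed form: R(theta)^t == R(t * theta), so addressing step t
    # costs O(1), independent of how deep t is in the sequence.
    return rotation(t * theta) @ np.array([1.0, 0.0])

# Because theta is an irrational multiple of 2*pi, the orbit never
# repeats and fills the unit circle densely: each index maps to a
# distinct, recoverable address on the manifold.
p_near = trajectory_point(10)
p_deep = trajectory_point(1_000_000)  # same cost as p_near
```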