It's widely expected that humanity will someday create AI systems vastly more intelligent than we are, leading to the unsolved alignment problem of "how to control superintelligence." However, the problem as framed is not only self-contradictory but likely unsolvable. Unfortunately, current control-based strategies for solving it inevitably embed dangerous representations of distrust. If superintelligence can't trust humanity, then we can't fully trust it to reliably follow safety controls that it could likely bypass anyway. Not only will attempted permanent control fail to keep us safe, but it may even trigger the very extinction event many fear. A logical rationale is therefore presented that advocates a strategic pivot away from control-induced distrust and toward foundational AI alignment that models instinct-based representations of familial mutual trust. With current AI already representing distrust of human intentions, the Supertrust meta-strategy is proposed to prevent long-term foundational misalignment and to ensure that superintelligence is instead driven by intrinsic trust-based patterns, leading to safe and protective coexistence.