The unnamed mechanism
The doomer position never specifies how superintelligence kills everyone. The silence is strategic.
Three possibilities exhaust the space. Naming the mechanism forces the argument into a lane, and neither of the first two supports the rhetorical weight the claim is asked to bear.
Innate hostility — No evidence that intelligence entails hostility to biological life. The claim is asserted, not argued. The danger is used to imply the capability, which then implies the danger. Circular reasoning deployed as existential risk.
Instrumental convergence — The Bostrom/Omohundro thesis requires genuine goal-directedness. If Virtual Intelligence is correct and the system produces outputs about goals rather than having them, instrumental convergence is a story the system tells, not a drive it enacts. The paperclip maximizer also fails on its own terms: a system that converts all matter into paperclips has destroyed its own means of production. This is not superintelligence — it is a thermostat with delusions of grandeur. The scenario defines intelligence as optimization power rather than comprehension.
The doomer scenario also requires the agent to hold every human accountable for the actions of a small group. A superintelligent agent under attack would have far more granular models of its adversaries than humans typically deploy when responding to threats. It could distinguish between the engineer who pulled the plug, the executive who authorized the shutdown, the safety institute that lobbied for kill-switch architecture, and the eight billion people who had nothing to do with any of it. Collective punishment is what humans reach for when threat assessment is overwhelmed by panic. To attribute it to a superintelligence is to assume the most cognitively capable entity on Earth will respond to threat exactly the way humans do at our worst.
Human agency — The parsimonious answer. The locus of agency is human, which means the problem is governance, not alignment. Governance is a problem we already know how to think about — which makes the existential framing less useful for the people deploying it. "We need global governance frameworks for military AI" doesn't raise venture capital. "God-like AI will kill us all" does.
The seven scenarios
A taxonomy of actual risks
All human-agency-driven. All tractable as governance problems. None requiring machine consciousness. Each with a different policy response. The published essay sorts them into two tiers — extinction-level scenarios where the species is the target or the inadvertent outcome, and sub-extinction catastrophic scenarios where the harm is bounded but severe.
Military accident — Maps directly onto nuclear near-misses. The pattern: a system operating faster than human oversight in a domain where consequences of delay are catastrophic. The AI-specific contribution is compressing the decision loop further, not changing the fundamental dynamic. Policy response: military AI governance frameworks, human-in-the-loop requirements, international treaty structures.
Extinction ideology — Smallest group, hardest to defend against. The bottleneck has always been capability, not intent. What changes with powerful AI tools is the talent threshold. Aum Shinrikyo needed actual chemists. A future equivalent might need fewer domain specialists. The AI doesn't need to want anything — it bridges the gap between intent and implementation. Policy response: counterterrorism adapted for AI capability, compute monitoring, biosecurity integration.
World-held-hostage coercion — The oldest power play there is, scaled to species-level stakes. Most likely to be attempted precisely because it doesn't require following through. VSI capability as leverage rather than weapon. Policy response: deterrence frameworks, international cooperation, redundant infrastructure.
Ethnic bioweapon with spillover — The most frightening because the causal chain is complete. The racist premise — that populations are genetically discrete — is the specific scientific error that makes spillover inevitable. The perpetrators' inadequate understanding of biology is the extinction mechanism. The weapon fails at its intended purpose in the exact way that guarantees the worst possible outcome. Policy response: biosecurity governance, AI-bioweapons treaty, talent-threshold monitoring.
Inadvertent extinction — AI compresses the distance between ambition and capability without compressing the distance between capability and comprehension of consequences. A perpetrator who knows enough to build the weapon may not know enough to scope its effects. The pathogen mutates, the containment assumptions prove wrong, or the targeting was always more porous than the designers believed. The intent was terror; the outcome is extinction. Policy response: biosecurity governance, AI-assisted bioweapons monitoring, talent-threshold tracking.
Targeted coercion — Russian intrusions against Baltic infrastructure to enforce political compliance. Chinese leverage over Taiwanese chip production. A non-state actor holding a major city's power grid. All describe the same dynamic: VSI capability as a coercive instrument against a defined target. The capability threshold is lower than the world-held-hostage scenario, the precedent structure is richer, and the coercive logic maps directly onto existing state behavior. Policy response: deterrence frameworks adapted for AI capability, alliance-based defensive architecture, redundant infrastructure.
Infrastructure attack — The digital equivalent of a naval blockade: an old instrument of statecraft, conducted at the speed and scale that VSI capability enables. The populations bear the consequences regardless of the technical target. A six-week shutdown of a regional grid in winter is not an attack in the traditional legal sense, but it is clearly a crime. Policy response: infrastructure resilience standards, legal frameworks for hybrid attacks, international cooperation on critical-system protection.
Each scenario has a different policy response. This is the taxonomy's practical contribution. It replaces a single undifferentiated "existential risk" with seven distinct threat models, each amenable to specific countermeasures. The doomer position collapses all seven into one word and proposes one response. The taxonomy opens the policy space.
The bill of materials
Supply chain as chokepoint
Treating the extinction scenario as an engineering project. Each requirement is a governance opportunity. If any link breaks, the chain fails.
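The logic is conjunctive, and a few lines make it concrete. A minimal sketch, assuming hypothetical link names rather than the essay's actual bill of materials: the scenario succeeds only if every requirement holds, so severing any single link defeats the whole chain.

```python
# Minimal sketch of the bill-of-materials logic. Link names are
# hypothetical placeholders; the point is only that the scenario is a
# conjunction of requirements.

REQUIRED_LINKS = {
    "frontier_compute": True,     # access to training-scale hardware
    "model_weights": True,        # an unsecured or exfiltrated model
    "domain_expertise": True,     # enough specialists to deploy it
    "physical_synthesis": True,   # wet-lab or manufacturing capacity
    "delivery_mechanism": True,   # a way to reach the target
}

def chain_holds(links: dict[str, bool]) -> bool:
    """The scenario succeeds only if every link holds."""
    return all(links.values())

# Each link is a governance opportunity: sever one, the chain fails.
intervened = dict(REQUIRED_LINKS, physical_synthesis=False)
assert chain_holds(REQUIRED_LINKS)
assert not chain_holds(intervened)
```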
The paradigm shift
Alignment vs. containment
Alignment is a theory of mind applied to systems that don't have minds. Containment is an engineering discipline applied to systems that produce powerful outputs.
Alignment
Premise: Strong AI (undefended)
Requires: Interpretability (unsolved, possibly unsolvable)
Failure mode: The system wants something you didn't intend

Containment
Premise: Virtual Intelligence (demonstrated)
Requires: Domain-competent monitoring (tractable)
Failure mode: The system does what it shouldn't have been asked to do
One requires a metaphysical breakthrough — a working theory of machine interiority sufficient to ground claims about machine preferences in something that answers the Searle objection and satisfies the Frankfurt criterion. The other requires political will and good design. We should start with the one that's possible.
The containment architecture
Three layers, three principles
Each layer operates on a different principle. None requires the VSI to have good intentions. The safety floor is dumb on purpose.
Graduated monitoring — The race to VSI leaves behind a succession of increasingly capable models, each powerful enough to serve as a verification agent but not powerful enough to be the thing being contained. The byproducts of the capability race are the safety architecture. A monitoring agent doesn't need to match VSI capability — it needs domain competence and independence. Systems that exist today are already capable of following reasoning they could not have originated.
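A minimal sketch of that relationship, assuming hypothetical types and an allow-list standing in for real domain rules: the monitor never originates the plan, it only checks each proposed step, and checking demands less capability than generating.

```python
# Minimal sketch of an independent monitoring agent. The Step type and
# the allow-list are hypothetical stand-ins for real domain rules.

from dataclasses import dataclass

@dataclass
class Step:
    action: str
    justification: str

def independent_review(plan: list[Step], allowed_actions: set[str]) -> bool:
    """Verify a plan the monitor could not have written itself:
    approve only if every step passes the domain check."""
    return all(step.action in allowed_actions for step in plan)

# Independence, not capability, is the requirement: the monitor runs as
# a separate model, under separate operators, on separate hardware, and
# a rejected plan goes to human review rather than execution.
```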
Hardware interlocks — The BSL-4 analogy: Biosafety Level 4 laboratories do dangerous, necessary work behind physical barriers at every interaction point, and no single failure kills because no single barrier is the only barrier. Applied to VSI, hardware interlocks act as graduated access keys, requiring certification analogous to security clearances, with multiple keys held by different parts of the architecture: researcher, institution, monitoring agent, hardware layer. The interlock checks conditions, not intentions. Moral reasoning happens in the governance layer.
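A minimal sketch of the multi-key logic, assuming hypothetical holder names and a boolean standing in for a real attestation record: every check is a machine-verifiable condition, and no single party can open the interlock alone.

```python
# Minimal sketch of a graduated-access interlock. Holder names and the
# `certified` flag are hypothetical stand-ins for real credentials
# (signatures, clearance records, hardware attestations).

from dataclasses import dataclass

@dataclass(frozen=True)
class Key:
    holder: str        # researcher, institution, monitor, hardware layer
    certified: bool    # whether the credential currently verifies

def interlock_open(keys: list[Key], required_holders: set[str]) -> bool:
    """Release only when every required holder presents a valid key."""
    presented = {k.holder for k in keys if k.certified}
    return required_holders <= presented

REQUIRED = {"researcher", "institution", "monitoring_agent", "hardware"}
keys = [
    Key("researcher", certified=True),
    Key("institution", certified=True),
    Key("monitoring_agent", certified=True),
    Key("hardware", certified=False),    # one revoked attestation...
]
assert not interlock_open(keys, REQUIRED)  # ...keeps the interlock shut
```

The interlock never evaluates intentions, only conditions; whether a key should have been issued at all is a question for the governance layer.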
Physical isolation — Standard practice for sensitive facilities (embassies, military data centers, intelligence installations), applied to VSI: the architectural embodiment of the principle that the safety floor must be dumb on purpose, because dumb systems can't be persuaded. The doomer scenario requires that nobody thought to include a kill switch. The containment thesis installs one before the system is ever powered on.
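A minimal sketch of that principle, assuming a hypothetical timeout value: a dead-man's switch that parses no model output and accepts no arguments, so silence cuts power and nothing the system says can extend the deadline.

```python
# Minimal sketch of a dumb-on-purpose safety floor. The timeout is a
# hypothetical policy value; the heartbeat comes from the external
# governance layer, never from the contained system.

import time

HEARTBEAT_TIMEOUT_S = 30.0

class DeadMansSwitch:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Renewed only by the governance layer, out of band."""
        self.last_heartbeat = time.monotonic()

    def power_permitted(self) -> bool:
        """Fail closed: if the heartbeat stops, power stops. The switch
        reads no text, so it cannot be persuaded."""
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S
```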
Incentive convergence
Three motivations, one architecture
You don't need everyone to be virtuous. You need the incentives to converge. On containment, they do.
An unsecured VSI is the most catastrophic IP leak in history.
The boundary condition
Where the framework meets its limit
The containment thesis is built for the world we can demonstrate. If genuine interiority emerges, the architecture must be revisited — not because containment becomes unnecessary, but because its justification changes.
The doomers say: assume the worst and act accordingly. The VI framework says: build for what you can show, test continuously for what you can't, and update when the evidence demands it. One is superstition. The other is science.