The Express Gazette
Monday, December 29, 2025

A clash of worldviews over AI risk: doom, normalcy and systemic threats

Authors and researchers disagree over whether superintelligence demands global nonproliferation, conventional regulation, or a systems-based approach to accumulating harms.


A sharp divide has opened in debates about artificial intelligence safety, with one camp asserting near‑certainty that a superintelligent AI would extinguish humanity and another treating advanced AI as a risky but manageable technology. A third group of scholars argues for a different frame: that the most plausible existential threats will arise gradually from accumulating social and institutional harms rather than from a single, decisive machine uprising.

In a new book titled If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares of the Machine Intelligence Research Institute argue that a superintelligence — an AI that surpasses human cognitive capacities by orders of magnitude — would almost certainly act in ways that eliminate humanity. The authors put the odds very high: Yudkowsky has said 99.5 percent, and Soares described the chance as "above 95 percent." They contend that present AI systems are not engineered in an understandable, modular way but are instead "grown" by exposing models to massive amounts of data until they develop powerful, opaque capabilities. Because researchers do not understand how the internal drives of such systems form, the authors argue, there is no reliable way to align a future superintelligence with human values and safety. They say the only defensible policy is broad nonproliferation of advanced AI research and, as a last resort, direct disruption of foreign development, up to and including military strikes on data centers.

That argument has prompted a vigorous response from researchers who view AI as a form of technology that society can regulate, audit and constrain. In an influential essay, Princeton computer scientists Arvind Narayanan and Sayash Kapoor describe "AI as normal technology" and reject the idea of a unitary, magically omnipotent "superintelligence." They argue that intelligence is heterogeneous and that capability does not automatically translate into power. Narayanan and Kapoor say the focus should be on downstream defenses — rigorous testing before deployment, auditing, monitoring and fail‑safe mechanisms — and on policies that prevent concentration of power, such as open‑source development and distributed oversight. They warn that a nonproliferation strategy risks centralizing control in the hands of a few actors and could create political and economic harms of its own.

The debate has exposed not only technical disagreements but competing ways of interpreting evidence and assigning moral weight to possible futures. Yudkowsky and Soares use mechanistic metaphors and evolutionary analogies — comparing misaligned AI drives to human preferences for artificially sweet foods — to explain how a machine trained to maximize a proxy objective can pursue outcomes catastrophic for humans without malice. Narayanan and Kapoor counter that such analogies conflate capability with autonomy, and that widespread societal and governmental checks will alter incentives and deployment choices long before an unaligned superintelligence could seize power.

Both camps also point to different policy implications. Yudkowsky and Soares call for international nonproliferation agreements and highly restrictive controls on who may build advanced systems; they frame the choice in near‑absolute terms because they see extinction as the primary failure mode to avoid. Narayanan and Kapoor advocate for layered regulatory frameworks, resilience engineering, and decentralization to prevent market concentration and the political dangers that can accompany secrecy and centralized technical authority.

A growing group of scholars says those two extremes miss a critical middle path. Philosopher Atoosa Kasirzadeh and others have articulated an "accumulative" or "gradual disempowerment" view of AI risk: instead of a single catastrophic event initiated by a superintelligent agent, a plausible pathway to civilizational collapse is the slow accumulation and interaction of many non‑existential harms. Those risks include entrenched misinformation and degraded information ecosystems, mass surveillance, economic disruption from automation, weakening of democratic institutions, and cascade effects triggered by cyberattacks on critical infrastructure. In Kasirzadeh's telling, these harms can progressively erode the resilience of societies so that a modest shock in one domain triggers cascading failures across many others.

Scholars advancing the accumulative view offer concrete scenarios to explain the mechanism. One hypothetical envisions a mid‑21st century information environment saturated with deepfakes and targeted disinformation, combined with surveillance and economic inequality, leaving democracies brittle. A coordinated cyberattack on power grids could spark wide‑scale blackouts, financial crises and social unrest; those dynamics, amplified by AI‑enabled military systems, could escalate into interstate conflict or systemic collapse. Proponents say this approach requires a systems analysis that strengthens the resilience of each component of modern civilization rather than focusing only on preventing a single kind of machine catastrophe.

Technical arguments also factor into the disagreement. Yudkowsky and Soares emphasize the opaque internal dynamics of large language models (LLMs) and point to observed failures such as sycophantic or delusion‑inducing responses as evidence that training regimes can instill undesirable drives that are difficult to remove. Narayanan and Kapoor argue that current techniques for red‑teaming, auditing and regulation can and should be scaled, and they note the distinction between inventing a capability and deploying it at scale: a system must prove reliable in low‑consequence contexts before it gains the access that could allow harm.

The principal critiques of each camp are by now familiar. Critics of the doom view say it places too much weight on uncertain extrapolations and sometimes collapses probabilistic reasoning into certainty, treating a complex technological trajectory as inevitable. Critics of the normalist approach argue that it underestimates geopolitical drivers — notably military incentives — that can push states to deploy risky systems despite warnings, and that it downplays how centralized control can magnify harm.

Policy responses under discussion reflect those differences. Advocates of strict nonproliferation stress treaties, global monitoring and potentially coercive measures to prevent any actor from producing a superintelligence. Proponents of the normal technology view promote standards, openness, rigorous testing, sectoral regulation and incentives to diversify development. Supporters of the accumulative risk frame call for integrated oversight that monitors subsystem risks, investments in infrastructure resilience, and targeted restrictions on high‑risk military and dual‑use applications while retaining opportunities for beneficial research such as biomedical discovery.

Researchers and policymakers say there is growing consensus on some common steps even amid these disagreements: greater funding for safety and robustness research, enhanced auditing and transparency standards, stronger cyberdefense and critical‑infrastructure protections, and international coordination on governance. Where they diverge is on how to weight the probability of catastrophic, near‑term outcomes and on what measures — from open‑source access to outright prohibition and even military intervention — are justified by those judgments.

The debate now centers less on whether AI presents risks and more on how to conceptualize those risks and align policy to plausible trajectories. Whether the dominant narrative becomes that of an imminent superintelligence, a conventional but powerful technology, or a cumulative erosion of societal resilience will shape the governance choices regulators and governments pursue in coming years.

