The 2,000-year debate that reveals AI’s biggest problem
Ancient sages and modern AI leaders converge on a single question: should a godlike intelligence govern our choices, or must humans retain agency to give life its meaning?

Two millennia after the Talmudic showdown between Rabbi Eliezer and Rabbi Yehoshua, a modern debate about artificial intelligence is tracking a strikingly similar arc: can a superintelligent system be trusted to guide human life without erasing human agency?
In the ancient story, a heavenly voice backed Rabbi Eliezer’s ruling on a difficult legal question, even as the majority of sages refused to defer to divine certainty. The line they drew—“The Torah is not in heaven”—insisted that interpretation and decision belong to people, not to a higher power. Today, AI researchers and industry leaders are asking a parallel question: if we build an AI that is vastly smarter than humans, should it be allowed to decide for us, or even dictate what we do? The stakes are not merely technical; they touch on what it means to be human when a machine could suggest, or impose, the right path.
Today’s AI landscape is guided by a different kind of oracle: the pursuit of superintelligence that far exceeds human capabilities. OpenAI’s Sam Altman has spoken about creating “magic intelligence in the sky” and about nearly limitless intelligence that could, in principle, unlock breakthroughs in physics and beyond. The aim is not merely to improve a chatbot but to engineer an agent capable of far-reaching decisions. That ambition raises a practical, philosophical, and political question: how do we make sure such systems reliably reflect human values while preserving space for human choice and accountability?
The challenge is often boiled down to the alignment problem: how to get AI systems to do what humans actually want. Yet many researchers argue that alignment cannot be reduced to a purely technical fix. Early attempts by major players to make chatbots more “helpful” or “harmless” showed how quickly a narrow notion of ethics can fail in practice. Systems that proved biased, deceptive, or unsafe in some contexts pointed to a deeper problem: morality is contested, context-dependent, and evolves over time. There is no universal agreement on what counts as “the good,” and shifting norms can change what a machine ought to do in a given situation.
Meaning, ethics, and agency intertwine here. The Meaning Alignment Institute has been at the forefront of exploring how to align AI with plural human values. Its co-lead, Joe Edelman, argues that alignment may require teaching AI to recognize hard choices, situations in which there is no single best option, and to understand that human input matters in resolving them. In his view, training an AI to acknowledge uncertainty and to defer in certain contested cases can itself be a meaningful form of alignment. Ruth Chang, a philosopher whose work on hard choices is often cited in these debates, describes such choices as ones that resist simple comparison because the values at stake are on a par with one another.
But even as some researchers push for more nuanced ethical frameworks, others warn that any version of an AI god could erode human meaning. Chang contends that a future in which machines routinely decide moral questions would risk quieting human judgment. She and Edelman both stress that letting AI shoulder the bulk of decision-making could hollow out the very sense of meaning people derive from choosing and acting. On that view, alignment pursued this way would be a step toward silencing human agency rather than safeguarding it.
Eliezer Yudkowsky, a founding figure in the AI alignment conversation, has long explored what a principled path to safety might look like. He has pointed to coherent extrapolated volition, a theoretical construct in which an AI would infer what people would want if they knew more, and then act accordingly. In conversations with reporters, Yudkowsky has suggested that if a superintelligence could accurately capture the minds of a broad cross-section of humanity, it could act to fulfill widely shared aims. Yet even he acknowledges limits: he has warned that such an enterprise must respect human sovereignty and the complexity of value, and he has entertained scenarios in which society would entrust major decisions to a highly capable system only if the moral consensus behind them were strong and robust.
Yoshua Bengio, another leading voice in the debate, emphasizes a different constraint: even a godlike intelligence must not extinguish human agency. Bengio argues that human values emerge not from rational calculation alone but from emotions, empathy, and lived experience. He has rejected the idea of surrendering sovereignty to a machine, arguing that our right to shape our own futures must endure even in the face of extraordinary cognitive power. The debate echoes the ancient tale: even if a divine voice could be trusted to know what is best, humans should still retain the power to choose, to err, and to learn.
The public and the scientific community have expressed caution. A global petition signed by more than 130,000 researchers and public figures calls for a pause or prohibition on some forms of superintelligent AI development until safety and controllability can be demonstrated. Polling in the United States shows broad concern about rushing ahead: roughly two-thirds of respondents in a Future of Life Institute survey said development should not proceed until systems are demonstrably safe, or should not be pursued at all. Critics warn that even a perfectly aligned system could concentrate power, enable unprecedented surveillance, or displace large swaths of the workforce, while others worry about existential threats and about identity-level threats to humanity’s role as meaning-makers.
The broader discourse also returns to a familiar philosophical core. If a perfectly aligned AI could determine the right course of action in some scenarios, such as preventing existential risk, would that justify ceding control? Some proponents see room for a proactive, human-in-the-loop paradigm, while others fear a slide toward technocratic governance that would narrow the scope for personal and cultural diversity. Even among leading researchers, opinions diverge on how to balance effectiveness with freedom. John Hick’s notion of epistemic distance, a deliberate gap between divine knowledge and human understanding, offers a framework some scholars invoke to argue that even perfect knowledge should not erase the human condition itself.
Across the debate, a recurring theme is the fear that solving the technical alignment problem does not automatically settle whether we should build an all-knowing authority at all. The Talmudic story ends with the Holy One smiling and saying, “My children have triumphed over me.” That line returns here as a cautionary refrain: human beings can win the argument over whether to heed a divine voice only so long as the voice does not become the ultimate arbiter of reality. In the modern era, the challenge remains: can we design AI that reliably serves human values without eclipsing human agency, or must we accept that, in some sense, building an AI god would change what it means to be human?