The Express Gazette
Friday, December 26, 2025

The modern debate on AI alignment mirrors the 2,000-year-old tale of Rabbi Eliezer and Rabbi Yoshua

Scholars compare a Talmudic dispute to modern efforts to align machines with human values, warning that solving the technical problem may neglect human agency.


Technology and AI researchers are racing toward ever more capable systems, including promises of a 'superintelligence' far beyond today's chatbots. As the field pursues this goal, some thinkers are invoking a 2,000-year-old dispute about authority and choice to frame the debate: should a machine's god-like insight guide human life, or should people retain sovereignty over their own decisions? The comparison comes from Vox's coverage of the evolving alignment problem in AI, where experts warn that solving the technical problem may not protect the meaning and agency that define human life.

The tale, recorded in the Talmud, centers on Rabbi Eliezer and Rabbi Yoshua, sages of the first century. Eliezer insists he is right, performs miracles to prove it, and finally a heavenly voice proclaims his correctness. Yet Rabbi Yoshua and the majority of sages reject the divine verdict and declare: 'The Torah is not in heaven.' The decision of the majority stands, even when a divine signal seems to favor one side. The parable has become a lens through which AI thinkers frame the question of whether a machine's authority should override human judgment.

Today, the AI industry is discussing the shift from building a helpful tool to imagining a 'superintelligence' vastly smarter than humans. OpenAI CEO Sam Altman has talked about 'magic intelligence in the sky'—nearly limitless intelligence that could spark breakthroughs as dramatic as 'the discovery of all of physics' and beyond. The implications are not just about product features but about governance, control, and whether society should be comfortable with a machine that can decide or steer major outcomes.

The goal sounds clear in theory, but reality is messy. Early attempts to solve alignment by making chatbots 'helpful' or 'harmless' often left ethical questions underdeveloped, inviting problems ranging from bias to outright harm. Experts point out that morality is contested and context-dependent; no single directive can capture the breadth of human values. Some scholars suggest that AI should recognize that hard choices exist and defer to human judgment when values collide.

Meaning Alignment Institute co-lead Joe Edelman says that alignment is possible, but that an important part is training the AI to say 'I don’t know' in certain cases. 'If you’re allowed to train the AI to do that, things get much easier,' Edelman said. He cites Ruth Chang's work on hard choices and argues that where there is no objectively best option, the AI should be equipped to reflect that reality and seek human input. 'Probably we are creating an AI that will systematically fall silent,' Chang warned when asked whether such an approach truly solves alignment.

Eliezer Yudkowsky, though renowned for doom-laden forecasts, remains optimistic that alignment can be treated as an engineering problem. He argues for 'coherent extrapolated volition'—a proposal under which an AI would infer what humanity would want if we knew everything the AI knew. If a sufficiently broad cross-section of humanity would favor a given action, he says, the AI should proceed. 'Probably we all live happily ever after,' he said, though he acknowledges limits and the need to guard against misuse. He also envisions using this approach to augment human intelligence rather than replace it, enabling the kind of decision-making that could help humanity colonize other solar systems.

Still, many voices worry that even a well-aligned superintelligence would concentrate power and threaten autonomy. Philosopher Nick Bostrom has warned that a truly omniscient machine could shape all of humanity's future, leaving people little room to resist its decisions. The argument is not merely about safety; it concerns whether a god-like machine would squeeze out space for human meaning and agency. Some observers warn that if humans defer too much to a machine that knows better, we would face not only existential risk but an existentialist risk to identity and purpose.

Yoshua Bengio has been explicit about not surrendering sovereignty. He has argued that human choices, values, and emotions are not the product of calculation alone, and that even a god-like intelligence cannot decide for us what we want. 'Human choices, human preferences, human values are not the result of just reason. It’s the result of our emotions, empathy, compassion,' he said. 'And so, even if there was a god-like intelligence, it could not decide for us what we want.' He dismissed the idea that a complete readout of a person's brain states could neatly replace human decision-making. The discussion echoes the theological notion of epistemic distance—the idea that some space should remain between a divine being’s knowledge and human decision-making to preserve freedom and moral growth.

Public opinion adds another layer of complexity. More than 130,000 people, including leading researchers and public figures, have signed a petition calling for a prohibition on the development of superintelligent AI. In the United States, polling from the Future of Life Institute shows broad concern: about 64 percent of Americans say that superintelligent AI should not be developed until it is proven safe and controllable, or should never be developed at all. Critics warn that the impulse to build a god-like system could outpace our capacity to govern it safely, with consequences for employment, surveillance, and political power. 'Imagining an AI that figures everything out for us is like robbing us of the meaning of life,' Edelman said, underscoring the worry that meaning itself could atrophy if humans abdicate decision-making.

The debate also centers on whether alignment is a problem we can solve with code alone. Some researchers argue that even a perfectly aligned system could still erode human autonomy. Ruth Chang has warned that a policy of 'silence' in the face of hard moral questions may be exactly what we want in some cases, but it risks hollowing out our moral agency. Others argue that empowering humans to engage with hard choices remains essential to personal development and societal progress. Bengio and others stress that sovereignty over values—our ability to decide based on empathy, culture, and context—should not be relinquished to a silicon oracle.

The Talmudic parable offers a concluding echo: asked how God reacted to the sages' defiance, the prophet Elijah reported that God said, 'My children have triumphed over me; my children have triumphed over me.' In this telling, human agency endures. As the AI field moves from building tools to imagining a potential god, the challenge is not solely technical but philosophical: how to design systems that respect human meaning while still enabling powerful, beneficial capabilities. The path forward, many scholars suggest, will require a pluralistic ethics that can handle hard choices, robust governance frameworks, and humility about the limits of what even a superintelligent AI can or should decide for us.

