The Express Gazette
Saturday, December 27, 2025

UN AI warnings grow louder as global governance efforts expand; California advances AI safety law

The United Nations launches a Global Dialogue on Artificial Intelligence Governance and new oversight bodies, while California moves to codify safety measures for the largest AI developers. Events in New York and Montreal highlight a push to translate talk into action.


The United Nations opened a new chapter in global AI governance this week, launching the Global Dialogue on Artificial Intelligence Governance on Thursday in a three-hour informal meeting at UN Headquarters in New York. The forum brings together UN member states, private-sector actors, and civil society groups to coordinate on how to steer artificial intelligence in ways that maximize benefits while addressing risks. “For the first time, every country will have a seat at the table of AI,” United Nations Secretary-General António Guterres said, signaling a bid to move beyond ad hoc national approaches toward a more inclusive, internationally grounded policy path. Guterres also announced nominations to an International Independent Scientific Panel on AI, intended to provide impartial scientific analysis of the technology’s impacts, and he floated consultations on the possible establishment of a Global Fund for AI Capacity Development to help lower-income countries build governance and safety capabilities.

The week’s broader AI dialogue unfolded against a backdrop of intense attention at the UN General Assembly. On Monday, Nobel Peace Prize laureate Maria Ressa helped galvanize a campaign for what organizers described as “AI Red Lines,” urging governments to step in to prevent universally unacceptable risks from AI. More than 200 prominent politicians and scientists signed the statement, including 10 Nobel Prize winners. On Wednesday, the Security Council held an open debate on “artificial intelligence and international peace and security.” For more than three hours, delegates underscored that AI holds both promise and peril, describing it as a fact of modern life and pressing for immediate regulatory guardrails, especially around autonomous weapons and the nuclear dimension. One particularly pointed contribution came from Belarus, which warned that a new curtain is being created, not ideological this time but technological, to divide the West from the Global Majority and usher in a new era of neocolonialism. The representative said, “This is leading to a deadlock and abyss. AI should be available and accessible to all countries barring none.”

Thursday’s launch drew participation from Bhutan, the United Arab Emirates, and China, among many others, as governments, tech executives, and scholars offered accounts of how AI is spurring economic growth while raising questions about governance gaps and equity. Guterres framed the gathering as a turning point at which international collaboration could mature quickly. Common threads emerged: the need to bridge gaps in capacity, ensure broad sharing of benefits, consult diverse stakeholders, and redress inequalities that AI could intensify. Spain’s prime minister, Pedro Sánchez, captured a chorus of leaders when he said, “The rise of AI is unstoppable. But it cannot be ungovernable.”

It remains unclear how far UN advisories will translate into binding rules, given that major Silicon Valley players operate largely outside formal UN jurisdiction. Yet the UN’s expanded convening represents a global turn toward a more inclusive, norms-based approach to AI governance, an attempt to align the technology with broad international interests even as the private sector continues to innovate at speed.

In California, lawmakers moved to codify a more incremental form of AI safety oversight. Last year, Governor Gavin Newsom vetoed SB 1047, a more sweeping state AI safety bill, and asked researchers to come back with a revised proposal. Lawmakers responded with SB 53, a watered-down version that retains important protections while avoiding some of the stricter provisions of SB 1047. Newsom’s stance appeared to soften on stage in New York on Wednesday, where he signaled his support for SB 53: “We have a bill—forgive me, it's on my desk—that we think strikes the right balance. We worked with industry, but we didn't submit to industry.”

SB 53 is notable for creating whistleblower protections for AI employees and for requiring the largest AI developers to publish safety plans and to report safety incidents. Proponents framed the bill as a pragmatic compromise that still preserves meaningful guardrails. Anthropic, among others, has voiced support for the measure, and Dean Ball, who helped shape the Trump administration’s AI policy agenda, also backed the approach. Sacha Haworth, executive director of the Tech Oversight Project, framed SB 53 as a “major victory” even though it omits many of the most stringent protections previously proposed. “Policymaking necessarily is about compromising. You want to push the envelope as far as you can, but you can’t give up on incrementalism,” Haworth said. “The whistleblower protections are tremendous, and the fact that the regulations in SB 53 apply to the largest AI developers is crucial.”

Meanwhile, the TIME team has been on the ground reporting on the convergence of policy, industry, and civil society. At the All In conference in Montreal on Sept. 24, Yoshua Bengio—one of the signatories of the AI Red Lines campaign and founder of the nonprofit LawZero—sat for a wide-ranging onstage interview with TIME’s Harry Booth. The exchange focused on the development of reasoning models, the shifting emphasis of governance toward safety in the face of commercial imperatives, and the practical steps needed to steer AI toward norms that safeguard humanity. Bengio emphasized that while today’s systems may not pose immediate existential threats, future generations with more powerful reasoning capabilities could be misused if governance does not keep pace. “It's not that these systems are going to kill anyone tomorrow,” he told Booth, “but future generations of these systems, if science continues to advance, will have stronger and stronger reasoning abilities, more and more knowledge.” He added that if those systems cannot be steered to comply with human norms, they could be used by ill-intentioned actors to do immoral things, and “we could lose control of them.” The interview also spotlighted LawZero’s mission to redesign AI safety in a way that aligns with societal values, even as commercial pressures push for faster deployment.

The Montreal gathering underscored a broader tension in the AI debate: between rapid innovation and the safety, fairness, and accountability that a global governance regime would aim to deliver. TIME’s coverage at All In also highlighted how policymakers and executives are balancing public assurances about innovation with calls for guardrails. The forum in New York and the Montreal conversations together map a pattern in which government officials seek to establish norms and practices that can keep pace with a sector characterized by global reach and rapid scale.

For readers seeking a broader cultural lens on the moment, TIME’s weekly briefing also points to cross-cutting threads, such as how political narratives around AI echo historical episodes of great-power competition. One noted connection is to historical struggles over control of critical resources and influence, with analysts arguing that modern AI development is fueling strategic competition on a global stage. In a parallel thread, a documentary discussed in the briefing, Soundtrack to a Coup d’État, was highlighted for how its Cold War-era tale of global power dynamics and resource control resonates with contemporary discussions of AI governance, sovereignty, and supply chains. The film’s focus on uranium mining in the Congo as a critical resource underscores ongoing concerns about labor, inequality, and the extractive dynamics that underlie modern technology supply chains. The piece suggests that, just as in the past, the geopolitics of AI will involve questions of access to critical inputs and to the data that fuel machine learning, with real consequences for people and communities around the world.

As the week closes, observers note that the UN’s expanded diplomacy and California’s evolving safety framework signal a broader shift: policymakers are seeking interoperable standards and credible evidence to inform decisions, even as the private sector moves quickly. The practical impact remains to be seen, and enforcement gaps will matter in determining how far the current momentum translates into tangible protections for people and societies. Yet the momentum is unmistakable: a global conversation anchored in governance, safety, equity, and international cooperation now sits at the center of the AI discourse, with governments and researchers alike signaling that this moment demands more formal mechanisms, more diverse voices, and more attention to the consequences of rapid technological change.

Beyond the halls of the UN, the conversation extended into conference rooms in Montreal and policy rooms in Sacramento, illustrating a global pattern: a push to translate talk into action, and a willingness among diverse stakeholders to experiment with governance mechanisms that can adapt to a fast-moving technology landscape. The coming months will test whether the new Global Dialogue on AI Governance can produce concrete pathways for collaboration and accountability, and whether SB 53 can serve as a workable model for other jurisdictions seeking to balance innovation and safety. In the longer arc, the world will watch to see whether the UN’s convening translates into governance that can withstand market incentives, geopolitical rivalries, and the evolving capabilities of AI, and whether safeguards can be designed to protect people without stifling responsible progress.

[Image: TIME at the All In conference in Montreal with Yoshua Bengio]
