UN to Advance Global AI Governance with Open Debate and New Bodies
The United Nations moves to formalize AI governance through a global forum and an independent scientific panel, as security and diplomacy grapple with safeguards and international law.

Artificial intelligence is moving from the fringes of global policy to the core of the United Nations' agenda this week, as world leaders and diplomats tackle a range of complex challenges at the annual high-level meeting. Since the AI boom kicked off with ChatGPT's debut about three years ago, the technology's rapidly advancing capabilities have astonished many while prompting warnings from experts about risks ranging from engineered pandemics to large-scale disinformation. Advocates urge safeguards and global governance to keep pace with fast-moving developments.
The U.N. has advanced a governance architecture intended to curb misuse while encouraging collaboration. Last month, the General Assembly adopted a resolution establishing two key bodies on AI — a global forum and an independent scientific panel of experts. The forum will provide a venue for governments and other stakeholders to discuss international cooperation and solutions. Its formal meetings are planned for Geneva next year and New York in 2027. Separately, the U.N. Security Council will hold an open debate on AI on Wednesday to consider how it can help ensure responsible AI under international law and support peace processes. On Thursday, Secretary-General António Guterres will launch the Global Dialogue on AI Governance during the General Assembly's annual high-level meeting. Meanwhile, recruitment is expected to begin to fill 40 expert slots on the scientific panel, including two co-chairs — one from a developed country and one from a developing nation. The panel has drawn comparisons with the U.N.'s climate-change panel and COP meetings.
The move is being described as a symbolic triumph by some diplomats and scholars, even as others warn the new mechanisms may be largely powerless without stronger legal teeth. Isabella Wilkinson, a research fellow at Chatham House, wrote that the bodies represent the world’s most globally inclusive approach to governing AI, but cautioned that the practical impact depends on implementation and political will. The Security Council debate and the launch of the Global Dialogue are intended to seed collaboration among governments, tech firms, researchers and civil society as the governance framework takes shape.
Ahead of the meeting, a group of influential AI experts urged governments to set red lines for AI by the end of next year, arguing for minimum guardrails to prevent the most urgent and unacceptable risks. The group includes senior figures from OpenAI, Google's DeepMind and Anthropic, along with lawmakers and researchers who are calling for an internationally binding agreement on AI. They point to treaties banning nuclear testing, prohibitions on biological weapons and protections for the high seas as precedents for a binding approach to global risks. “The idea is very simple,” said Stuart Russell, an AI professor at the University of California, Berkeley. “As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access.”
Russell suggested that U.N. governance could resemble the International Civil Aviation Organization, an agency that coordinates with national safety regulators to ensure consistent standards across countries. Rather than laying out a fixed set of rules, he proposed a flexible “framework convention” that could be updated as AI technologies evolve. The Security Council debate and the broader Global Dialogue aim to establish a path toward this kind of adaptable framework, balancing innovation with safeguards and accountability.
The new bodies are not without critics. Some analysts say the governance architecture risks becoming a symbolic gesture if not backed by binding obligations and enforceable standards. Yet supporters argue that establishing inclusive, internationally recognized processes can create common reference points for safety, ethics, transparency and accountability as AI systems scale and permeate more aspects of society.