Tech

Global leaders call for urgent AI regulations: Establishing 'red lines'

By Vernon Pillay


At the United Nations’ 80th General Assembly, a sweeping coalition of former heads of state, Nobel laureates, and AI pioneers launched a campaign urging governments to establish binding international “red lines” for artificial intelligence by the end of 2026.

The initiative, called the Global Call for AI Red Lines, warns that without enforceable guardrails, advanced AI systems could trigger cascading risks, including engineered pandemics, mass disinformation campaigns, systemic human rights abuses, and even the loss of human oversight over powerful autonomous systems. 

“Establishing red lines is a crucial step towards preventing unacceptable AI risks,” said Yoshua Bengio, a Turing Award winner widely considered one of the “godfathers” of AI.

Maria Ressa, Nobel Peace Prize laureate and journalist, cautioned that without safeguards, the world could soon face “epistemic chaos” and “engineered pandemics,” noting that global cooperation is the only way forward.

The campaign’s 200+ signatories and 70 partner organisations represent a rare convergence of AI scientists, policymakers, human rights advocates, and business leaders.

Notable supporters include OpenAI cofounder Wojciech Zaremba, Google DeepMind’s Ian Goodfellow, and Anthropic’s CISO Jason Clinton. The initiative is being coordinated by France’s Centre for AI Safety (CeSIA), The Future Society, and UC Berkeley’s Centre for Human-Compatible AI.

Why It Matters Now

The timing reflects both technological and geopolitical urgency. As AI models race toward dangerous capability thresholds, recent incidents, including AI-linked suicides, have spotlighted the very real human costs of uncontrolled deployment.

Meanwhile, global policy efforts remain fragmented. While the European Union has pushed forward its AI Act and China is advancing national guardrails, the United States has been resistant to international regulation.

As Fast Company reported, the Trump administration broke with the UN earlier this month by rejecting calls for a legally binding global AI oversight body, opting instead for “voluntary frameworks” and domestic-led guardrails.

That divergence highlights a looming governance gap: AI is borderless, but policy isn’t. Stuart Russell, a leading AI safety researcher at UC Berkeley, put it bluntly: “The development of highly capable AI could be the most significant event in human history. It is imperative that world powers act decisively to ensure it is not the last.”

The Road Ahead

The Global Call for AI Red Lines draws on precedents like the Chemical Weapons Convention and bans on human cloning, arguing that similar prohibitions are needed for AI systems, especially those that could threaten security, democracy, or humanity’s survival.

Still, the U.S. stance raises questions about whether a global pact is politically feasible.

As Csaba Kőrösi, former president of the UN General Assembly, warned: “Within a few years, we will [meet intelligence greater than ours]. But we are far from being prepared for it in terms of regulations, safeguards, and governance.”

FAST COMPANY