
The time for AI self-regulation is over: Calls for red lines

Vernon Pillay

Charbel-Raphaël Ségerie, executive director of the Paris-based Centre pour la Sécurité de l’IA (CeSIA), used a short, emphatic post on X on Tuesday to amplify a wider and rapidly growing effort: a “Global Call for AI Red Lines” that seeks an international political agreement to ban or strictly limit the most dangerous AI behaviours and deployments before they become entrenched.

The initiative, announced in late September at the opening of the UN General Assembly high-level week, stitches together academic heavyweights, Nobel laureates, civil-society groups and government voices who say piecemeal self-regulation isn’t going to cut it.

Why “red lines” now?

Ségerie and several global leaders argue that AI is transitioning from a tool that passively responds to instructions into a class of systems capable of acting autonomously and at scale, whether by automating deception campaigns, designing biological agents without oversight, or otherwise enabling harms that would be hard or impossible to undo.

The Global Call frames “red lines” as narrow, operational prohibitions focused on the most severe, potentially irreversible risks, not a blanket moratorium on AI innovation.

The OECD and other policy forums have also begun treating red lines as a practical governance tool, arguing they can be made verifiable and enforceable.

What do the red lines look like in practice?

The proponents sketch two categories:

(1) unacceptable uses: particular ways humans might deploy AI (for example, mass unconstrained surveillance or AI-generated child sexual abuse material); and

(2) unacceptable behaviours: capabilities or actions of AI systems themselves (for example, self-replication without oversight, or systems that autonomously pursue goals that can’t be reliably constrained).

The idea is to require proof in advance that systems won’t cross those lines, including technical testing and independent verification, rather than relying solely on liability after harm occurs. 

Ségerie has been explicit: “the date for taking this seriously is now, not later.”

He has also underlined a shared fear that a handful of organisations with vast computing and data resources could cross dangerous thresholds before legal guardrails are in place, and that retroactive remedies may be too little or too late.

UN support

UN Secretary-General António Guterres has also endorsed the push for red lines.

He called for a ban on lethal autonomous weapons systems operating without human control and said that leaders need a legally binding instrument by 2026. 

China has also pushed for the establishment of red lines.

“It’s essential to ensure that AI remains under human control and to prevent the emergence of lethal autonomous weapons that operate without human intervention,” Ma Zhaoxu, China’s Executive Vice Minister of Foreign Affairs, said at the UN Security Council debate on Artificial Intelligence and International Peace and Security.

Practical next steps

The red-lines campaign recommends a few concrete paths forward:

(1) precise, testable definitions of prohibited behaviours and uses;

(2) independent audits and verification regimes;

(3) sanctions and liability structures for violations; and

(4) international cooperation to avoid regulatory arbitrage.

Organisations such as CeSIA and academic centres for human-compatible AI are already developing benchmarks and safety engineering practices intended to underpin any legal standard. 

FAST COMPANY