We urgently call for international red lines to prevent unacceptable AI risks.

Launched during the 80th session of the United Nations General Assembly, this call has broad support from prominent leaders in policy, academia, and industry.

  • 300+ prominent figures
  • 10 former heads of state and ministers
  • 90+ organizations
  • 15 Nobel Prize and Turing Award recipients

What Signatories Say

It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.
Ahmet Üzümcü

Former Director General of the Organization for the Prohibition of Chemical Weapons

For thousands of years, humans have learned—sometimes the hard way—that powerful technologies can have dangerous as well as beneficial consequences. With AI, we may not get a chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas by itself, and escape our control. Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.
Yuval Noah Harari

Author of "Sapiens"

Humanity in its long history has never met intelligence higher than ours. Within a few years, we will. But we are far from being prepared for it in terms of regulations, safeguards, and governance.
Csaba Kőrösi

Former President of the UN General Assembly

The development of highly capable AI could be the most significant event in human history. It is imperative that world powers act decisively to ensure it is not the last.
Stuart Russell

Distinguished Professor of Computer Science at the University of California, Berkeley

The current race towards ever more capable and autonomous AI systems poses major risks to our societies and we urgently need international collaboration to address them. Establishing red lines is a crucial step towards preventing unacceptable AI risks.
Yoshua Bengio

2018 Turing Award Winner

This should be a major wake up call for policy makers and AI developers.
Lord Tim Clement-Jones

House of Lords' Science, Innovation and Technology Spokesperson

AI Red Lines need unity of knowing and acting, both from human and from AI, and it takes the World to save ourselves and our next generations.
Yi Zeng

Dean
Beijing Institute of AI Safety and Governance

Without AI safeguards, we may soon face epistemic chaos, engineered pandemics, and systematic human rights violation. History teaches us that when confronted with irreversible, borderless threats, cooperation is the only rational way to pursue national interests.
Maria Ressa

Nobel Peace Prize Laureate

Signatories


Joseph Stiglitz

Nobel Laureate in Economics

Professor of Finance and Business
Columbia University


Juan Manuel Santos

Former President of Colombia
Nobel Peace Prize Laureate

Chair
The Elders


Maria Ressa

Nobel Peace Prize Laureate

Professor of Practice
Institute of Global Politics
Columbia University

Co-founder and CEO
Rappler

Daron Acemoğlu

Nobel Laureate in Economics

Institute Professor
MIT


Mary Robinson

First Woman President of Ireland

Former UN High Commissioner for Human Rights

Member
The Elders


Ahmet Üzümcü

Nobel Peace Prize Recipient on Behalf of the OPCW

Former Director General
Organization for the Prohibition of Chemical Weapons (OPCW)

Senior Fellow
European Leadership Network (ELN)


Geoffrey Hinton

Nobel Laureate in Physics
Turing Award Winner

Emeritus Professor of Computer Science
University of Toronto


Yoshua Bengio

Most Cited Living Scientist
Turing Award Winner

Full Professor
Université de Montréal

Chair
International Scientific Report on the Safety of Advanced AI

Co-President and Scientific Director
LawZero

Founder and Scientific Advisor
Mila – Quebec AI Institute


John Hopfield

Nobel Laureate in Physics

Emeritus Professor
Princeton University


Yuval Noah Harari

Author of "Sapiens"

Professor of History
Hebrew University of Jerusalem


Jennifer Doudna

Nobel Laureate in Chemistry

Professor
University of California, Berkeley
Co-developer of the CRISPR-Cas9 gene-editing tool


Enrico Letta

Former Prime Minister of Italy

President
Agenzia di Ricerche e Legislazione (AREL)


Csaba Kőrösi

77th President of the UN General Assembly

Strategic Director
Blue Planet Foundation


Giorgio Parisi

Nobel Laureate in Physics

Emeritus Professor
University of Rome La Sapienza


Sir Oliver Hart

Nobel Laureate in Economics

Professor
Harvard University


Ya-Qin Zhang

Chair Professor and Dean, Institute for AI Industry Research
Tsinghua University

Former President
Baidu


Stuart Russell

Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley

Founder
Center for Human-Compatible Artificial Intelligence (CHAI)


Joseph Sifakis

Turing Award Winner

Research Director Emeritus, Verimag Lab
Université Grenoble - Alpes


Kate Crawford

TIME 100 AI

Professor
University of Southern California

Senior Principal Researcher
Microsoft Research (MSR)


Wojciech Zaremba

Co-founder of OpenAI

Former Research Scientist
Facebook AI Research
Former Research Scientist
Google Brain

Yanis Varoufakis

Former Minister of Finance of Greece

Professor
University of Athens


Peter Norvig

Education Fellow
Stanford Institute for Human-Centered AI (HAI)

Director of Engineering
Google


George Church

Professor
Harvard Medical School & MIT


Ian Goodfellow

Principal Scientist
Google DeepMind

Inventor of generative adversarial networks

Founder of Google Brain's machine learning security research team


Jakub Pachocki

Chief Scientist
OpenAI

Andrew Chi-Chih Yao

Turing Award Winner

Professor
Tsinghua University


Gillian Hadfield

Bloomberg Distinguished Professor of AI Alignment and Governance
Johns Hopkins University

Sir Stephen Fry

Writer, Director, Actor


Yi Zeng

TIME 100 AI

Dean
Beijing Institute of AI Safety and Governance


Gustavo Béliz

Former Governor of Argentina

Former Minister
Government of Argentina

Former Secretary of the President
Government of Argentina

Chair
Economic and Social Council of Argentina


Baroness Beeban Kidron

Crossbench Peer
UK House of Lords

Advisor to the Institute for Ethics in AI
Oxford University

Founder and Chair
5Rights Foundation


Gbenga Sesan

Executive Director
Paradigm Initiative


Anna Ascani

Vice President
Italian Chamber of Deputies


Dan Hendrycks

TIME 100 AI

Executive Director
Center for AI Safety

Advisor
xAI

Advisor
Scale AI


Dawn Song

Professor
University of California, Berkeley


Gary Marcus

Professor Emeritus
New York University


Audrey Tang

Taiwan's Cyber Ambassador and First Digital Minister
TIME 100 AI

Senior Accelerator Fellow, Institute for Ethics in AI
University of Oxford


Maria João Rodrigues

Former Portuguese Minister of Employment

Former Member of the European Parliament 

President of the Foundation for European Progressive Studies (FEPS)

Rachel Adams

Founding CEO
Global Center on AI Governance


Adetola A. Salau

Special Adviser to the Honourable Minister of Education
Federal Ministry of Education Nigeria


Thibaut Bruttin

Director General
Reporters Without Borders (RSF)


Maria Chiara Carrozza

Former Italian Minister of Education, University and Research

Full Professor of Biomedical Engineering and Biorobotics
University of Milano-Bicocca


Daniel Kokotajlo

TIME 100 AI

Former researcher
OpenAI

Co-founder and Lead
AI Futures Project


Lord Tim Clement-Jones

Peer / Science, Innovation and Technology Spokesperson
UK House of Lords


Xue Lan

TIME 100 AI

Dean, Schwarzman College
Tsinghua University


Nicolas Miailhe

Founder and Non-Executive Chairman
The Future Society

Co-founder and CEO
PRISM Eval


Marc Rotenberg

Executive Director and Founder
Center for AI and Digital Policy


Laurence Devillers

Knight of the Legion of Honour

Professor of AI
Sorbonne University/CNRS

President
Blaise Pascal Foundation


Jason Clinton

Chief Information Security Officer
Anthropic


Jean Jouzel

Nobel Peace Prize co-Recipient as a Vice-Chair of the IPCC

Emeritus Director of Research
French Alternative Energies and Atomic Energy Commission (CEA)

Former Vice President
Intergovernmental Panel on Climate Change (IPCC)


Riccardo Valentini

Nobel Peace Prize co-Recipient as a Member of the IPCC Board

Professor
University of Tuscia


Robert Trager

Director
Oxford Martin AI Governance Initiative, University of Oxford


Brice Lalonde

Former French Minister of the Environment

Former advisor on Sustainable Development to the UN Global Compact


Senator Scott Wiener

California State Senator
Author of California AI safety legislation


Sören Mindermann

Scientific Lead
International AI Safety Report


Brando Benifei

Member of the European Parliament

European AI Act co-Rapporteur


Miguel Luengo-Oroz

CEO
Spotlab.ai

Former (and first) Chief Data Scientist
United Nations

Professor
Universidad Politécnica de Madrid


Sergey Lagodinsky

Member of the European Parliament

Vice-Chair
Greens/EFA Group

Shadow Rapporteur on the AI Act
Member of the AI Act Implementation Working Group in the European Parliament


Max Tegmark

TIME 100 AI

Professor
MIT

President
Future of Life Institute


Sneha Revanur

TIME 100 AI

Founder/President
Encode

Roman Yampolskiy

Professor of Computer Science
University of Louisville


Mark Nitzberg

Executive Director
CHAI

Interim Executive Director
International Association for Safe and Ethical AI (IASEAI)

Organizer


Niki Iliadis

Director, Global AI Governance
The Future Society

Lead Organizer


Charbel-Raphaël Segerie

Executive Director
French Center for AI Safety (CeSIA)

Lead Organizer

Signatory Organizations

10Billion
AI Governance and Safety Canada
AI Safety Connect
AI Safety Initiative at Georgia Tech
AI Safety Turkey
AiXist | Consortium for AI & Existential Risks
Artificial Intelligence International Institute (AIII)
Beijing Academy of Artificial Intelligence
Bangladesh NGOs Network for Radio and Communication
Beijing Institute of AI Safety and Governance (Beijing-AISI)
C Minds
Center for AI Risk Management and Alignment (CARMA)
Center for AI Safety
Center for Existential Safety
Center for Leadership Equity and Research (CLEAR)
Center for Media Research - Nepal
Centre for Information Technology and Development (CITAD)
ChildSafeNet
CIVICAi
Climate Policy Radar
Conjecture
Connected by Data
CyberPeace Institute
Demos
Digihumanism - Centre for AI & Digital Humanism
Digital Futures Lab
Encode
Ente Nazionale per L'Intelligenza Artificiale (ENIA)
European Responsible Artificial Intelligence Office (EURAIO)
Everyone.ai
FAR.AI
Foundation for European Progressive Studies
Fundación Gabo (Gabriel García Márquez Foundation)
Future Shift Labs
Draft&Goal
Gambia Participates
Global Shield
Globethics
Indian Society of Artificial Intelligence and Law
Laboratory of Public Policy and Internet - LAPIN
Make.org
Migam S.A.
ML4Good
Paradigm Initiative
PauseAI
Printemps numérique
Safe AI Fund (SAIF)
Safe AI Lausanne
Seldon Lab
Spotlab.ai
Taiwan AI Labs & Foundation
Tech Hive Advisory
The AI Whistleblower Initiative
The Centre for Responsible Leadership
The European Network for AI Safety
The Flares
The Inside View
The Midas Project
The Millenium Project
The Safe AI for Children Alliance
TIC Council
University of Chicago Existential Risk Laboratory
Worldwide Alliance for AI & Democracy
Effective Altruism Sweden
Z.ai
The Human Ai Institute
Wisconsin AI Safety Initiative
Peace Movement Aotearoa
bluenove SAS
Center for Digital Democracy
ALLAI
AI Pathfinder
Shanghai AI Research Institute
VerifyWise
Manushya Foundation
Democracy Without Borders
Africa Tech for Development Initiative
SETESCA
Tournesol Association
eScire
ECOFACT AG
XR4heritage
ALGOR ASSOCIATION
NTEN
AI Edutainment GmbH
Apart Research
Intersticia
Exponential Science Foundation
Women in AI & Robotics
Centre for Future Generations
Mieux Donner
Cremicro Digital Marketing Agency
International Network Against Cyber Hate (INACH)
AI Safety Vietnam
Crafting Tomorrow
World Usability Day
Global Catastrophic Risk Institute
Data Literacy Association
Affinidi
YOUTH SPI

Sign the call

We invite individuals and organizations of significant standing or achievement to complete this form to endorse the call.

You can sign as an individual or as an organization. All submissions are reviewed by our team before publication to ensure the integrity of this global call. A confirmation email will follow for verification, which can take up to 48 hours.

Frequently Asked Questions

Note: these FAQ responses may not capture every signatory's individual views.

What are red lines in the context of AI?
AI red lines are specific prohibitions on AI uses or behaviors that are deemed too dangerous to permit under any circumstances. They are limits, agreed upon internationally, to prevent AI from causing universally unacceptable risks.
Why are international AI red lines important?
International AI red lines are critical because they establish clear prohibitions on the development, deployment, and use of systems that pose unacceptable risks to humanity. They are:
  • Urgent: Their primary purpose is to prevent the most severe and potentially irreversible harms to humanity and global stability.
  • Feasible: Red lines represent the lowest common denominator on which states can agree. Even governments divided by economic or geopolitical rivalries share a common interest in avoiding disasters that would transcend their borders.
  • Widely Supported: Major AI companies have already acknowledged the need for red lines, including at the AI Seoul Summit 2024. Top scientists from the US and China have already called for specific red lines, and this is the measure most widely supported by research institutes, think tanks, and independent organizations.
In short, red lines are the most practical step the global community can take now to prevent severe risks while allowing safe innovation to continue.
Can you give concrete examples of possible red lines?
The red lines could focus either on AI behaviors (i.e., what the AI systems can do) or on AI uses (i.e., how humans and organizations are allowed to use such systems). The following examples show the kind of boundaries that can command broad international consensus.

Note that the campaign does not endorse any specific red lines; their precise definition and clarification will have to emerge from scientific and diplomatic dialogue.

Examples of red lines on AI uses:
  • Nuclear command and control: Prohibiting the delegation of nuclear launch authority, or critical command-and-control decisions, to AI systems (a principle already agreed upon by the US and China).
  • Lethal autonomous weapons: Prohibiting the deployment and use of weapon systems that kill humans without meaningful human control and clear human accountability.
  • Mass surveillance: Prohibiting the use of AI systems for social scoring and mass surveillance (adopted by all 193 UNESCO member states).
  • Human impersonation: Prohibiting the use and deployment of AI systems that deceive users into believing they are interacting with a human without disclosing their AI nature.

Examples of red lines on AI behaviors:
  • Cyber malicious use: Prohibiting the uncontrolled release of cyberoffensive agents capable of disrupting critical infrastructure.
  • Weapons of mass destruction: Prohibiting the deployment of AI systems that facilitate the development of weapons of mass destruction or that violate the Biological and Chemical Weapons Conventions.
  • Autonomous self-replication: Prohibiting the development and deployment of AI systems capable of replicating or significantly improving themselves without explicit human authorization (a consensus position of leading Chinese and US scientists).
  • The termination principle: Prohibiting the development of AI systems that cannot be immediately terminated if meaningful human control over them is lost (based on the Universal Guidelines for AI).

Red lines on AI behaviors have already begun to be operationalized in the safety and security frameworks of AI companies, such as Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, and Google DeepMind’s Frontier Safety Framework. For example, for AI models above a critical level of cyber-offense capability, OpenAI states: “Until we have specified safeguards and security controls standards that would meet a critical standard, halt further development.” Definitions of the critical capabilities that require robust mitigations would need to be harmonized and strengthened across these companies.
Are international AI red lines even possible?
Yes, history shows that international cooperation on high-stakes risks is entirely achievable. When the cost of inaction is too catastrophic, nations have consistently come together to establish binding rules to prevent global disasters and profound harms to humanity and global stability.

The Treaty on the Non-Proliferation of Nuclear Weapons (1970) and the Biological Weapons Convention (1975) were negotiated and ratified at the height of the Cold War, proving that cooperation is possible despite mutual distrust and hostility. The Montreal Protocol (1987) averted a global environmental catastrophe by phasing out ozone-depleting substances, and the UN Declaration on Human Cloning (2005) established a crucial global norm to safeguard human dignity from the potential harms of reproductive cloning. Most recently, the High Seas Treaty (2025) provided a comprehensive set of regulations for high seas conservation and serves as a sign of optimism for international diplomacy.

In the face of global, irreversible threats that know no borders, international cooperation is the most rational form of national self-interest.
Are we starting from scratch?
No. Red lines on AI already exist and are gaining momentum. Some examples include:
  • Global Norms and Principles: The UNESCO Recommendation on the Ethics of AI (2021), adopted by all 193 member states, explicitly calls for prohibiting the use of AI systems for social scoring and mass surveillance. 
  • Binding Legal Frameworks: The Council of Europe's Framework Convention on AI is the first-ever international treaty on the subject, establishing binding rules for its signatories to ensure AI systems are compatible with human rights, democracy, and the rule of law. The EU AI Act creates an “unacceptable risk” tier for applications that are strictly banned within the European Union.
  • National Policies: America’s AI Action Plan explicitly calls for the creation of an evaluation ecosystem upon which thresholds could be set.
  • US-China Bilateral Dialogue: In 2024, both Heads of State agreed that AI should never make decisions over the use of nuclear weapons and that humans need to remain in control. 
  • Scientific Consensus: The Universal Guidelines for AI (2018), supported by hundreds of experts and dozens of civil society organizations, establishes a clear “Termination Principle”, an obligation to shut down any AI system if meaningful human control can no longer be ensured. The IDAIS Beijing Statement on AI Safety (2024), a consensus from leading international experts, explicitly calls for several red lines, such as restricting AI systems that can autonomously replicate or evolve without human oversight.
  • Industry Commitments: At the AI Seoul Summit, leading AI companies made the Seoul Commitments, formally pledging to “set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable”. Several leading companies have also adopted internal governance frameworks designed to restrict AI deployment or development and implement safeguards if specific high-risk capabilities are discovered.
Our call is to build upon the principles and precedents set by these efforts, forging a coherent, binding, and universally adopted international framework. A further discussion of the existing list of emerging red lines is available here.
Who would enforce these red lines?
There is no single global authority for AI, so enforcement would likely combine different levels of governance, including:
  1. International Treaty: A binding agreement would harmonize rules across countries. This prevents a regulatory arbitrage "race to the bottom" where companies could evade regulations by moving to less strict jurisdictions.
  2. National Governments: Nations would be tasked with translating the international agreement into domestic law. This would involve creating regulatory bodies to license advanced AI systems, conduct mandatory safety audits, and impose severe penalties for violations within their jurisdictions.
  3. International Technical Verification Body: An impartial international body, modeled on organizations like the International Atomic Energy Agency (IAEA), could develop standardized auditing protocols and independently verify that AI systems from any company or country comply with the agreed-upon red lines. The International Network of AI Safety and Security Institutes is well-positioned to play a role in this process.
Why 2026?
Waiting longer could mean less room, both technically and politically, for effective intervention, while the likelihood of cross-border harm increases sharply. That is why 2026 must be the year the world acts.

The pace of AI development means that risks once seen as speculative are already emerging, including biological misuse risks (Williams et al., 2025), systems showing deceptive behavior, and even resistance to control (Greenblatt et al., 2024). In their own assessments of biological misuse potential, leading AI companies place their newest frontier models on a medium (Anthropic, 2025) to high (OpenAI, 2025) risk spectrum.

AI’s coding capabilities are also improving rapidly, meaning superhuman programming ability may soon be possible, accelerating AI progress even further. According to recent safety evaluations (Google DeepMind, 2024), experts forecast that AIs could become capable of autonomously replicating and proliferating on the internet as early as late 2025, with the median forecast landing in 2027.
What should the next steps be?
Launched ahead of the 80th session of the UN General Assembly, the campaign seeks to encourage diplomatic action toward concrete pathways for international agreements on red lines for AI.

Several complementary pathways could be envisaged: 
  • A group of pioneering countries or a “coalition of the willing,” potentially drawing on countries already engaged in the G7 Hiroshima AI process, could advance the concept of AI red lines across the G7, G20, and BRICS agendas. 
  • The newly established UN Independent Scientific Panel on AI could publish a thematic brief articulating scientific consensus on technically clear and verifiable red lines, with technical contributions from the OECD.
  • Building on this groundwork, states could use the AI Impact Summit in India in February 2026 to endorse initial red lines for AI. Such red lines could build on the Seoul Commitments by translating voluntary corporate pledges into shared risk thresholds that, in turn, could be embedded in national regulation and public procurement.
  • The UN Global Dialogue on AI Governance could lead a global consultation with scientists, civil society, and industry to define a set of clear and verifiable red lines, to be summarized in an outcome document at the July 2026 Dialogue in Geneva. 
  • By the end of 2026, either (i) a UN General Assembly resolution could be initiated, noting and welcoming these red lines and inviting negotiations, or (ii) a joint ministerial statement by an alliance of willing states could launch negotiations for a binding international treaty. 
Any future treaty should be built on three pillars: a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation.

Media Inquiries

The Global Call for AI Red Lines is a civil society-led initiative managed by the French Center for AI Safety (CeSIA) and The Future Society.

For media inquiries, please contact us at media@red-lines.ai. For any other questions, contact us at contact@red-lines.ai.