Global Call for AI Red Lines
Some advanced AI systems have already exhibited deceptive and harmful behavior, yet these systems are being given increasing autonomy to take actions and make decisions in the real world. Many, including experts at the frontier of development, warn that if this goes unchecked, maintaining meaningful human control over AI will become increasingly difficult in the coming years.
Governments must act decisively, or the window for meaningful intervention will close. To prevent universally unacceptable risks, an international agreement on AI red lines must be reached, and those red lines must be clear and verifiable. They should build on existing global frameworks and voluntary corporate commitments, strengthening and enforcing them to ensure that all providers of advanced AI comply with shared thresholds and are held accountable.
What Signatories Say
It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.
Ahmet Üzümcü, Former Director General of the Organization for the Prohibition of Chemical Weapons
For thousands of years, humans have learned—sometimes the hard way—that powerful technologies can have dangerous as well as beneficial consequences. With AI, we may not get a chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas by itself, and escape our control. Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.
Yuval Noah Harari, Author of "Sapiens"
Humanity in its long history has never met intelligence higher than ours. Within a few years, we will. But we are far from being prepared for it in terms of regulations, safeguards, and governance.
Csaba Kőrösi, Former President of the UN General Assembly
The development of highly capable AI could be the most significant event in human history. It is imperative that world powers act decisively to ensure it is not the last.
Stuart Russell, Distinguished Professor of Computer Science at the University of California, Berkeley
The current race towards ever more capable and autonomous AI systems poses major risks to our societies and we urgently need international collaboration to address them. Establishing red lines is a crucial step towards preventing unacceptable AI risks.
Yoshua Bengio, 2018 Turing Award Winner
This should be a major wake-up call for policymakers and AI developers.
Lord Tim Clement-Jones, House of Lords Science, Innovation and Technology Spokesperson
Without AI safeguards, we may soon face epistemic chaos, engineered pandemics, and systematic human rights violations. History teaches us that when confronted with irreversible, borderless threats, cooperation is the only rational way to pursue national interests.
Maria Ressa, Nobel Peace Prize Laureate
Signatories


Joseph Stiglitz
Nobel Laureate in Economics
Professor of Finance and Business
Columbia University


Juan Manuel Santos
Former President of Colombia
Nobel Peace Prize Laureate
Chair
The Elders


Maria Ressa
Nobel Peace Prize Laureate
Co-founder and CEO
Rappler


Daron Acemoğlu
Nobel Laureate in Economics
Institute Professor
MIT


Mary Robinson
First Woman President of Ireland
Former UN High Commissioner for Human Rights
Member
The Elders


Ahmet Üzümcü
Nobel Peace Prize Recipient on Behalf of the OPCW
Former Director General
Organization for the Prohibition of Chemical Weapons (OPCW)
Senior Fellow
European Leadership Network (ELN)


Geoffrey Hinton
Nobel Laureate in Physics
Turing Award Winner
Emeritus Professor of Computer Science
University of Toronto


Yoshua Bengio
Most Cited Living Scientist
Turing Award Winner
Full Professor
Université de Montréal
Chair
International Scientific Report on the Safety of Advanced AI
Co-President and Scientific Director
LawZero
Founder and Scientific Advisor
Mila – Quebec AI Institute


John Hopfield
Nobel Laureate in Physics
Emeritus Professor
Princeton University


Yuval Noah Harari
Author of 'Sapiens'
Professor of History
Hebrew University of Jerusalem


Jennifer Doudna
Nobel Laureate in Chemistry
Professor
University of California, Berkeley
Co-developer of the CRISPR-Cas9 gene-editing tool


Enrico Letta
Former Prime Minister of Italy
President
Agenzia di Ricerche e Legislazione (AREL)


Csaba Kőrösi
77th President of the UN General Assembly
Strategic Director
Blue Planet Foundation


Giorgio Parisi
Nobel Laureate in Physics
Emeritus Professor
University of Rome La Sapienza


Sir Oliver Hart
Nobel Laureate in Economics
Professor
Harvard University


Ya-Qin Zhang
Chair Professor and Dean, Institute for AI Industry Research
Tsinghua University
Former President
Baidu


Stuart Russell
Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley
Founder
Center for Human-Compatible Artificial Intelligence (CHAI)


Joseph Sifakis
Turing Award Winner
Research Director Emeritus, Verimag Lab
Université Grenoble Alpes


Kate Crawford
TIME 100 AI
Professor
University of Southern California
Senior Principal Researcher
Microsoft Research (MSR)


Wojciech Zaremba
Co-founder of OpenAI
Formerly at Facebook AI Research and Google Brain


Yanis Varoufakis
Former Minister of Finance of Greece
Professor
University of Athens


Peter Norvig
Education Fellow
Stanford Institute for Human-Centered AI (HAI)
Director of Engineering
Google


George Church
Professor
Harvard Medical School & MIT


Ian Goodfellow
Principal Scientist
Google DeepMind
Inventor of generative adversarial networks
Founder of Google Brain's machine learning security research team


Andrew Chi-Chih Yao
Turing Award Winner
Professor
Tsinghua University


Sir Stephen Fry
Writer, Director, Actor


Yi Zeng
TIME 100 AI
Dean
Beijing Institute of AI Safety and Governance


Gustavo Béliz
Former Governor of Argentina
Former Minister
Government of Argentina
Former Secretary of the President
Government of Argentina
Chair
Economic and Social Council of Argentina


Baroness Beeban Kidron
Crossbench Peer
UK House of Lords
Advisor to the Institute for Ethics in AI
Oxford University
Founder and Chair
5Rights Foundation


Gbenga Sesan
Executive Director
Paradigm Initiative


Anna Ascani
Vice President
Italian Chamber of Deputies


Dan Hendrycks
TIME 100 AI
Executive Director
Center for AI Safety
Advisor
xAI
Advisor
Scale AI


Dawn Song
Professor
University of California, Berkeley


Gary Marcus
Professor Emeritus
New York University


Audrey Tang
Taiwan's Cyber Ambassador and First Digital Minister
TIME 100 AI
Senior Accelerator Fellow, Institute for Ethics in AI
University of Oxford


Maria João Rodrigues
Former Member of the European Parliament
President of the Foundation for European Progressive Studies (FEPS)


Rachel Adams
Founding CEO
Global Center on AI Governance


Adetola A. Salau
Special Adviser to the Honourable Minister of Education
Federal Ministry of Education Nigeria


Thibaut Bruttin
Director General
Reporters Without Borders (RSF)


Maria Chiara Carrozza
Former Italian Minister of Education, University and Research
Full Professor of Biomedical Engineering and Biorobotics
University of Milano-Bicocca


Daniel Kokotajlo
TIME 100 AI
Former researcher
OpenAI
Co-founder and Lead
AI Futures Project


Lord Tim Clement-Jones
Peer / Science, Innovation and Technology Spokesperson
UK House of Lords


Xue Lan
TIME 100 AI
Dean, Schwarzman College
Tsinghua University


Marc Rotenberg
Executive Director and Founder
Center for AI and Digital Policy


Laurence Devillers
Knight of the Legion of Honour
Professor of AI
Sorbonne University/CNRS
President
Blaise Pascal Foundation


Jason Clinton
Chief Information Security Officer
Anthropic


Jean Jouzel
Nobel Peace Prize co-Recipient as a Vice-Chair of the IPCC
Emeritus Director of Research
French Alternative Energies and Atomic Energy Commission (CEA)
Former Vice President
Intergovernmental Panel on Climate Change (IPCC)


Brice Lalonde
Former French Minister of the Environment
Former advisor on Sustainable Development to the UN Global Compact


Sören Mindermann
Scientific Lead
International AI Safety Report


Mark Nitzberg
Executive Director
CHAI
Interim Executive Director
IASEAI
Organizer


Niki Iliadis
Director, Global AI Governance
The Future Society
Lead Organizer


Charbel-Raphaël Segerie
Executive Director
French Center for AI Safety (CeSIA)
Lead Organizer
Roman Yampolskiy
Professor of Computer Science
University of Louisville
Tom Schaul
Senior Staff Scientist
Google DeepMind
Merve Hickok
President
Center for AI and Digital Policy
Huang Tiejun
Chairman
Beijing Academy of Artificial Intelligence
Senator Scott Wiener
California State Senator
Author of California AI safety legislation
Xianyuan Zhan
Associate Professor
Tsinghua University
Robert Trager
Director
Oxford Martin AI Governance Initiative, University of Oxford
Sheila McIlraith
Professor
University of Toronto
Associate Director
Schwartz Reisman Institute for Technology and Society
Max Tegmark
TIME 100 AI
Professor
MIT
President
Future of Life Institute
Victoria Krakovna
Research Scientist
Google DeepMind
Co-Founder
Future of Life Institute
Zachary Kenton
Staff Research Scientist
Google DeepMind
Lead of Google DeepMind's AGI Safety and Alignment, Scalable Oversight subteam
Brando Benifei
Member of the European Parliament
European AI Act co-Rapporteur
Liang Zheng
Vice Dean
Institute for AI International Governance of Tsinghua University
Edith Elkind
Professor of Computer Science
Northwestern University
Huw Price
TIME 100 AI
Emeritus Bertrand Russell Professor
Trinity College, University of Cambridge
Fellow of the British Academy
Fellow of the Australian Academy of the Humanities
Sergey Lagodinsky
Member of the European Parliament
Vice-Chair
Greens/EFA Group
Shadow Rapporteur on the AI Act
Member of the AI Act Implementation Working Group in the European Parliament
Toby Ord
Senior Researcher
University of Oxford
Founder
Giving What We Can
Board Member
Centre for AI Governance
Miguel Luengo-Oroz
CEO
Spotlab.ai
First Chief Data Scientist
United Nations
Professor
Universidad Politécnica de Madrid
Alexander Turner
Research scientist, AGI alignment
Google DeepMind
Steve Omohundro
Founder
Beneficial AI Research
Mevan Babakar
Former Product Strategy Lead
Google
Gaétan Marceau Caron
Senior Director
Mila - Quebec AI Institute
Seán Ó hÉigeartaigh
Associate Director
Leverhulme Centre for the Future of Intelligence
Professor
University of Cambridge
Scott Aaronson
Schlumberger Professor of Computer Science
University of Texas at Austin
Xerxes Dotiwalla
Product Manager, AGI Safety and Alignment
Google DeepMind
Mary Phuong
Research Scientist
Google DeepMind
Evan Hubinger
Alignment Stress-Testing Team Lead
Anthropic
Robert O'Callahan
Senior Staff Software Engineer
Google DeepMind
Juan Felipe Cerón Uribe
Research Engineer - Safety systems
OpenAI
Zheng Liang
Vice Dean
Institute for AI International Governance of Tsinghua University
Anya Schiffrin
Senior Lecturer in Discipline of International and Public Affairs
Columbia University School of International and Public Affairs
Joel Christoph
Founder
10 Billion
Andrea Miotti
Founder and CEO
ControlAI
Author
A Narrow Path; The Compendium
Connor Leahy
CEO
Conjecture
Karine Caunes
Executive Director
Digihumanism - Centre for AI & Digital Humanism
Jakob Foerster
Associate Professor
University of Oxford
Darren McKee
Podcast Author
The Reality Check
Marta Ziosi
Postdoctoral researcher and AI Best Practices Lead
Oxford Martin AI Governance Initiative
Vice-chair
EU GPAI Code of Practice
Scientific Committee member
Association of AI Ethicists
Matthias Samwald
Associate Professor, Institute of Artificial Intelligence
Medical University of Vienna
Co-Chair
EU General-Purpose AI Code of Practice
Luke Muehlhauser
Program Director, AI Governance and Policy
Open Philanthropy
Scott Alexander
Writer
Astral Codex Ten
Paul S. Rosenbloom
Professor Emeritus of Computer Science
University of Southern California
Former Director for Cognitive Architecture Research
University of Southern California’s Institute for Creative Technologies
Paul Nemitz
Visiting Professor of Law
College of Europe
Chair of the Arthur Langerman Foundation
Technical University of Berlin
Tan Zhi Xuan
Assistant Professor
National University of Singapore
Edson Prestes
Member of UN Secretary-General's High Level Panel on Digital Cooperation
Full Professor
Federal University of Rio Grande do Sul, Brazil
Nick Moës
Executive Director
The Future Society
Adam Gleave
Co-Founder and CEO
FAR.AI
Rif A. Saurous
Principal Research Scientist
Google
Ziyue Wang
Research Engineer
Google DeepMind
Fabien Roger
Member of Technical Staff
Anthropic
Euan Ong
Member of Technical Staff
Anthropic
Leo Gao
Member of Technical Staff
OpenAI
Johannes Gasteiger
Member of Technical Staff
Anthropic
Kshitij Sachan
Member of Technical Staff
Anthropic
Nicolas Miailhe
Founder and Non-Executive Chairman
The Future Society
Co-founder and CEO
PRISM Eval
Daniel M. Ziegler
Member of Technical Staff
Anthropic
Mike Lambert
Member of Technical Staff
Anthropic
Abbas Mehrabian
Senior Research Scientist
Google DeepMind
Grace Akinfoyewa
Director of Science and Technology
Lagos State Ministry of Education
Jonathan Richens
Research Scientist, AGI safety
Google DeepMind
Lev Reyzin
Professor of Mathematics, Statistics & Computer Science
University of Illinois Chicago
Director
NSF IDEAL Data Science Institute
Editor-in-Chief
Mathematics of Data, Learning, and Intelligence
Maria Loks-Thompson
Software Engineer
Google DeepMind
Alexander Zacherl
Independent Designer
Past Vice Chair
EU AI Act Code of Practice
Past Technical Staff
UK AI Safety Institute
Vincent Conitzer
Director & Professor
Foundations of Cooperative AI Lab
Head of Technical AI Engagement
Institute for Ethics in AI; Carnegie Mellon University and University of Oxford
Swante Scholz
Software Engineer
Google DeepMind
Ed Tsoi
CEO
AI Safety Asia
Siméon Campos
Founder, Executive Director
SaferAI
Cyrus Hodes
Founder
AI Safety Connect
Otto Barten
Founder
Existential Risk Observatory
Michaël Trazzi
CEO
The Inside View
Alexander Gonzalez Flor
Professor Emeritus, Faculty of Information and Communication Studies
University of the Philippines (Open University)
David Krueger
Assistant Professor
University of Montreal, Mila
Cihang Xie
Assistant Professor
UC Santa Cruz
Anne-Sophie Seret
Executive Director
everyone.AI
Co-lead
iRAISE Alliance
Daniel Cuthbert
Global Head of Cyber Security Research
Santander
Co-chair UK Government Cyber Advisory Board
Blackhat Review Board
Francesc Giralt
Narcís Monturiol Medal Recipient
Professor Emeritus and Ad Honorem
URV - CIVICAi
Distinguished Professor, Vice President
CIVICAi
Mathilde Cerioli
Chief Scientist
everyone.AI
Jie Tang
Founder
Z.ai, GLM, ChatGLM
Elizabeth Seger
Associate Director
Digital Policy and AI, Demos
Jennifer Waldern
Data Scientist
Microsoft
Claire Boine
Postdoctoral Scholar
Washington University in St Louis
Yang-Hui He
Fellow and Professor, London Institute for Mathematical Sciences & Merton College
University of Oxford
Laura Caroli
Senior Fellow
Center for Strategic and International Studies (CSIS)
EU AI Act negotiator
Elmira Bayrasli
CEO
Interruptrr
Eden Lin
Associate Professor of Philosophy
The Ohio State University
Henry Papadatos
Managing Director
SaferAI
Francesca Bria
Professor of Innovation
Institute of Innovation, UCL
Jinwoo Shin
Professor and ICT Endowed Chair Professor
Korea Advanced Institute of Science and Technology (KAIST)
Jared Brown
Executive Director
Global Shield
Philip Torr
Professor
University of Oxford
Jerome C. Glenn
CEO
The Millennium Project
Pierre Baldi
Distinguished Professor
University of California, Irvine
Jessica Newman
Director
AI Security Initiative, UC Berkeley
Jonathan Cefalu
Chairman
Preamble AI
Discoverer of the first prompt injection attack
Jordan Crandall
Professor of Visual Arts
University of California, San Diego
Nell Watson
President
European Responsible Artificial Intelligence Office (EURAIO)
César de la Fuente
Princess of Girona Award and Fleming Prize Recipient
Presidential Associate Professor
University of Pennsylvania
Sloan Fellow
American Institute for Medical and Biological Engineering
Xavier Lanne
Founder
cyberethique.fr
Tolga Birdal
Professor
Imperial College London
Mehdi Benboubakeur
Co-Founder and Executive Director
Printemps numérique
Officer of the Order of the Crown of Belgium
Mounîm A. El-Yacoubi
Professor
Institut Polytechnique de Paris
Martin Lercher
Professor of Computer Science
Heinrich Heine University Düsseldorf
Board Member
Night Science Institute
Ethan Tu
Founder and Chairman
Taiwan AI Labs & Foundation
Marcin Kolasa
Senior Financial Sector Expert
International Monetary Fund (IMF)
Professor
SGH Warsaw School of Economics
Linda Bonyo
Founder and CEO
Lawyers Hub
Mamuka Matsaberidze
Professor, Department of Chemical and Biological Technologies
Faculty of Chemical Technology and Metallurgy, Georgian Technical University, Tbilisi, Georgia
Steve Kommrusch
Director of Research and Senior AI Scientist
Leela AI
Michael Wellman
Professor of Computer Science & Engineering
University of Michigan
Shane Torchiana
Chief Executive Officer
Concertiv
Steve Petersen
Professor of Philosophy
Niagara University
Tyler Johnston
Executive Director
The Midas Project
Satoshi Kurihara
Professor
Keio University
Christopher F. McKee
Professor Emeritus of Physics and of Astronomy
University of California, Berkeley
Member
National Academy of Sciences
Wyatt Tessari L'Allié
Founder
AI Governance and Safety Canada
Thibaut Giraud
PhD in Philosophy, Science Communicator
YouTube channel "Monsieur Phi"
Adam Shimi
Policy Researcher
ControlAI
Tolga Bilge
Policy Researcher
ControlAI
Brydon Eastman
Member of Technical Staff
Thinking Machines Lab
Yolanda Lannquist
Senior Advisor
U.S. and Global AI Governance, The Future Society
Mary Lang
Chief Education Justice Officer
Center for Leadership Equity and Research (CLEAR)
Gretchen Krueger
Affiliate
Berkman Klein Center for Internet & Society, Harvard University
Urvashi Aneja
Founder and Director
Digital Futures Lab
Arlindo Oliveira
Professor
Instituto Superior Técnico
Valentine Goddard
Centre for Information Technology and Development (CITAD)
Margaret Hu
Professor of Law
Digital Democracy Lab, William & Mary Law School
Chaowei Xiao
Assistant Professor
Johns Hopkins University
Teddy Nalubegba
Director
Ubuntu Center for AI
Karl Koch
Managing Director
The AI Whistleblower Initiative
Nia Gardner
Director
ML4Good
Baksa Gergely Gáspár
Director
The European Network for AI Safety
Romain Roullois
General Manager
France Deeptech
Bryan Druzin
Associate Professor of Law
The Chinese University of Hong Kong, Faculty of Law
Laurence Habib
Dean & Professor
Oslo Metropolitan University
Christopher DiCarlo
Canadian Humanist of the Year
Senior Researcher and Ethicist
Convergence Analysis
Visiting Research Scholar
Harvard
Craig Falls
Head of Quantitative Research
Jane Street Capital
Dino Pedreschi
Professor of Computer Science
University of Pisa
Joon Ho Kwak
Team Leader
Center for Trustworthy AI, Telecommunications Technology Association
Don Norman
Cofounder
Charity for Humanity-Centered Design
Member
National Academy of Engineering
Maciek Lewandowski
Public Affairs and Advocacy Advisor
Migam S.A.
Richard Mallah
Executive Director
Center for AI Risk Management & Alignment
Lucia Quirke
Member of Technical Staff
EleutherAI
Fosca Giannotti
Professor of Artificial Intelligence
Scuola Normale Superiore, Pisa, Italy
Gaetan Selle
Video Producer and Podcaster on AI Risks
The Flares
Guillem Bas
AI Policy Lead
Observatorio de Riesgos Catastróficos Globales
Isabella Duan
AI Policy Researcher
Safe AI Forum
Alix Pham
Strategic Programs Associate
Simon Institute for Longterm Governance
Jakub Growiec
Professor
SGH Warsaw School of Economics
Jan Betley
Researcher
TruthfulAI
First author of the Emergent Misalignment paper (ICML 2025 oral)
Yasuhiro Saito
Head of Innovation and Investment
Yamato Holdings
Daniela Seixas
CEO
Tonic Easy Medical
Michael Keough
Chief Operating Officer
Convergence Analysis
Shaïman Thürler
Founder
Le Futurologue
Anna Katariina Wisakanto
Senior Researcher
Center for AI Risk Management & Alignment
Anna Sztyber-Betley
Assistant Professor
Warsaw University of Technology
Michal Nachmany
Founder and CEO
Climate Policy Radar
Agatha Duzan
President
Safe AI Lausanne
André Brodtkorb
Professor
Oslo Metropolitan University
Anil Raghuvanshi
Founder and President
ChildSafeNet
Axel Dauchez
Founder
Worldwide Alliance for AI & Democracy
Belouali Saida
President
Afriq’AI Institute
Bridgette Ndlovu
Partnerships and Engagements Officer
Paradigm Initiative
Donny Utoyo
Founder
ICT Watch - Indonesia
Edetaen Ojo
Executive Director
Media Rights Agenda
Manel Sanromà
President
CIVICAi
Markov Grey
Author
AI Safety Atlas
Michal Kosinski
Adjunct
SGH Warsaw School of Economics
Muhammad Chaw
ICT Manager
Gambia Participates
Oleksii Molchanovskyi
Chief Innovation Officer
Ukrainian Catholic University
Paola Galvez Callirgos
AI Ethics Manager
Globethics
Przemek Kuśmierek
CEO
Migam S.A.
Scott Barrett
Lenfest-Earth Institute Professor of Natural Resource Economics
Columbia University
Robert C. Orr
Former United Nations Assistant Secretary-General
Fadi Daou
Executive Director
Globethics
Charvi Rastogi
Research Scientist
Google DeepMind
Lyantoniette Chua
Cofounder, Executive Director for Strategic Futures and Global Affairs
AI Safety Asia
Alexandre Bretel
PhD Candidate
Université Grenoble Alpes
Jacob Goldman-Wetzler
Member of Technical Staff
Anthropic
Victor Oshodi
Country Director
AAAI-Nigeria
Peter Mmbando
Executive Director
Digital Agenda for Tanzania Initiative
Organizers
Partners
Signatory Organizations
Sign the call
We invite individuals and organizations of significant standing or achievement to complete this form to endorse the call.
Sign as an individual
Sign as an organization
Frequently Asked Questions
Note: these FAQ responses may not capture every signatory's individual views.
What are red lines in the context of AI?
Why are international AI red lines important?
- Urgent: Their primary purpose is to prevent the most severe and potentially irreversible harms for humanity and global stability.
- Feasible: Red lines represent the lowest common denominator on which states can agree. Even governments divided by economic or geopolitical rivalries share a common interest in avoiding disasters that would transcend their borders.
- Widely Supported: Major AI companies have already acknowledged the need for red lines, including at the 2024 AI Seoul Summit. Top scientists from the US and China have already called for specific red lines, and red lines are the measure most widely supported by research institutes, think tanks, and independent organizations.
Can you give concrete examples of possible red lines?
Examples of red lines on AI uses:
- Nuclear command and control: Prohibiting the delegation of nuclear launch authority, or critical command-and-control decisions, to AI systems (a principle already agreed upon by the US and China).
- Lethal Autonomous Weapons: Prohibiting the deployment and use of weapon systems that kill humans without meaningful human control and clear human accountability.
- Mass surveillance: Prohibiting the use of AI systems for social scoring and mass surveillance (adopted by all 193 UNESCO member states).
- Human impersonation: Prohibiting the use and deployment of AI systems that deceive users into believing they are interacting with a human without disclosing their AI nature.
Examples of red lines on AI behaviors:
- Cyber malicious use: Prohibiting the uncontrolled release of cyberoffensive agents capable of disrupting critical infrastructure.
- Weapons of mass destruction: Prohibiting the deployment of AI systems that facilitate the development of weapons of mass destruction or that violate the Biological and Chemical Weapons Conventions.
- Autonomous self-replication: Prohibiting the development and deployment of AI systems capable of replicating or significantly improving themselves without explicit human authorization (a consensus position of high-level Chinese and US scientists).
- The termination principle: Prohibiting the development of AI systems that cannot be immediately terminated if meaningful human control over them is lost (based on the Universal Guidelines for AI).
Are international AI red lines even possible?
In the face of global, irreversible threats that know no borders, international cooperation is the most rational form of national self-interest.
Are we starting from scratch?
- Global Norms and Principles: The UNESCO Recommendation on the Ethics of AI (2021), adopted by all 193 member states, explicitly calls for prohibiting the use of AI systems for social scoring and mass surveillance.
- Binding Legal Frameworks: The Council of Europe's Framework Convention on AI is the first-ever international treaty on the subject, establishing binding rules for its signatories to ensure AI systems are compatible with human rights, democracy, and the rule of law. The EU AI Act creates an “unacceptable risk” tier for applications that are strictly banned within the European Union.
- National Policies: America’s AI Action Plan explicitly calls for the creation of an evaluation ecosystem upon which risk thresholds can be set.
- US-China Bilateral Dialogue: In 2024, both Heads of State agreed that AI should never make decisions over the use of nuclear weapons and that humans need to remain in control.
- Scientific Consensus: The Universal Guidelines for AI (2018), supported by hundreds of experts and dozens of civil society organizations, establishes a clear “Termination Principle”, an obligation to shut down any AI system if meaningful human control can no longer be ensured. The IDAIS Beijing Statement on AI Safety (2024), a consensus from leading international experts, explicitly calls for several red lines, such as restricting AI systems that can autonomously replicate or evolve without human oversight.
- Industry Commitments: At the AI Seoul Summit, leading AI companies made the Seoul Commitments, formally pledging to “set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable”. Several leading companies have also adopted internal governance frameworks designed to restrict AI deployment or development and implement safeguards if specific high-risk capabilities are discovered.
Who would enforce these red lines?
- International Treaty: A binding agreement would harmonize rules across countries, preventing a regulatory-arbitrage "race to the bottom" in which companies evade regulation by moving to the least strict jurisdictions.
- National Governments: Nations would be tasked with translating the international agreement into domestic law. This would involve creating regulatory bodies to license advanced AI systems, conduct mandatory safety audits, and impose severe penalties for violations within their jurisdictions.
- International Technical Verification Body: An impartial international body, modeled on organizations like the International Atomic Energy Agency (IAEA), could develop standardized auditing protocols and independently verify that AI systems from any company or country comply with the agreed-upon red lines. The International Network of AI Safety and Security Institutes is well-positioned to play a role in this process.
Why 2026?
What should the next steps be?
Several complementary pathways could be envisaged:
- A group of pioneering countries or a “coalition of the willing,” potentially drawing on countries already engaged in the G7 Hiroshima AI process, could advance the concept of AI red lines across the G7, G20, and BRICS agendas.
- The newly established UN Independent Scientific Panel on AI could publish a thematic brief articulating scientific consensus on technically clear and verifiable red lines, with technical contributions from the OECD.
- Building on this groundwork, states could use the AI Impact Summit in India in February 2026 to endorse initial red lines for AI. Such red lines could build on the Seoul Commitments by translating voluntary corporate pledges into shared risk thresholds that, in turn, could be embedded in national regulation and public procurement.
- The UN Global Dialogue on AI Governance could lead a global consultation with scientists, civil society, and industry to define a set of clear and verifiable red lines, to be summarized in an outcome document at the July 2026 Dialogue in Geneva.
- By the end of 2026, either (i) a UN General Assembly resolution could be initiated, noting and welcoming these red lines and inviting negotiations, or (ii) a joint ministerial statement by an alliance of willing states could launch negotiations for a binding international treaty.
Media Inquiries
For media inquiries, please contact us at media@red-lines.ai. For any other questions, including those related to signatory corrections, contact us at contact@red-lines.ai.