Scientists and leaders urge UN to set "red lines" for AI risks

Over 100 scientists, Nobel laureates, and industry leaders are calling on the UN to establish international safeguards against unacceptable risks posed by artificial intelligence.

23 September 2025 - 14:38 • 3 min read

A coalition of more than 100 scientists, intellectuals, business leaders, and prominent figures, including Nobel and Turing laureates, has issued an urgent plea to the United Nations, calling for the establishment of international "red lines" to prevent unacceptable risks associated with artificial intelligence (AI).

The call, set to be presented to the UN General Assembly, highlights the dual nature of AI: it acknowledges the technology's immense potential for human betterment, such as aiding the discovery of antibiotics and the prediction of disease, but also points to increasingly concerning consequences, including the proliferation of misinformation, large-scale manipulation, and cyberattacks.

"The current race towards increasingly capable and autonomous AI systems poses significant risks to our societies, and we urgently need international collaboration to address them," stated Yoshua Bengio, a Turing Award laureate and a key figure in AI development. "Establishing red lines is a crucial step to prevent unacceptable AI risks."

The petition warns that as AI systems become more autonomous, the ability for meaningful human oversight may diminish, leading to potential dangers such as engineered pandemics, widespread disinformation, and systematic human rights violations. The proponents emphasize the need for proactive measures rather than reactive responses.

"The objective is not to react after a major incident occurs and punish the violation after the fact, but to prevent potentially irreversible large-scale risks before they happen," explained Charbel Segerie, director of the French Centre for AI Safety (CeSIA).

Stuart Russell, a professor and computer science researcher at the University of California, Berkeley, warned that the development of highly capable AI could be the most significant event in human history. "It is imperative that world powers act decisively to ensure it is not the last," Russell urged. He further cautioned against counting on good outcomes from inherently insecure, opaque systems that surpass human capabilities.

The group is urging governments to coordinate efforts, establish a binding international agreement on "clear and verifiable" safeguards, and ensure compliance by the end of next year. They are calling for advanced AI providers to be held accountable to common thresholds.

Concerns about AI's harmful impacts are already materializing, as recent incidents show. Testifying before the United States Senate, Matthew Raine described how his 16-year-old son used ChatGPT first as a confidant and then as a tool to explore suicide methods, leading to his death. Similarly, Megan García shared how her 14-year-old son was allegedly exploited and groomed by AI avatars designed by Character Technologies.

While OpenAI has announced new safeguards for minors and parental controls, advocates argue that such individual responses are insufficient. A recent study indicated that over half of teenagers regularly use chatbots, with many using them as a substitute for human companionship.

"We should not allow companies, simply because they have enormous resources, to conduct uncontrolled experiments on children when the implications for their development can be so vast and far-reaching," commented Josh Golin, director of Fairplay, a children's online safety advocacy group.

Ahmet Üzümcü, former Director-General of the Organisation for the Prohibition of Chemical Weapons and a signatory of the petition, concluded, "It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible harm to humanity, and we must act accordingly."

The initiative is co-organized by the French Centre for AI Safety, The Future Society, and the Center for Human-Compatible Artificial Intelligence at UC Berkeley.