Over 850 Tech Leaders Including Wozniak Call for Global Ban on Superintelligence Development
More than 850 technology experts, including Steve Wozniak and AI pioneers, urge an international halt to superintelligence development until safety, ethics, and control measures are globally agreed upon.
A coalition of more than 850 prominent tech leaders, scientists, and AI experts has issued a joint call for a worldwide prohibition on the development of superintelligence until universally accepted safety and control protocols are established. The statement, signed by industry figures such as Steve Wozniak, underscores growing concern about the risks of creating AI systems that surpass human intelligence.
The open letter argues that even as artificial intelligence drives advances across many sectors, the unchecked development of superintelligent systems could pose unprecedented threats to humanity. While AI's benefits are widely acknowledged, the signatories warn that without stringent safety measures and global consensus, the pursuit of superintelligence could produce unintended consequences, including loss of human control and ethical violations.
The coalition advocates for an international treaty or regulatory framework that enforces strict limits on AI research related to superintelligence. They argue that collaboration among governments, researchers, and industry leaders is essential to prevent a reckless race that could lead to irreversible outcomes.
“This is a pivotal moment,” said Wozniak, co-founder of Apple and a vocal advocate for responsible AI development. “We need global cooperation to ensure that AI advances benefit everyone and do not become a tool for harm or domination.”
The call for a proactive stance comes amid intensifying debate over AI governance, as rapid technological breakthroughs raise the prospect of autonomous, superintelligent agents. Critics and proponents of the technology alike emphasize that safety protocols, transparent research practices, and accountability mechanisms must be in place before critical thresholds are crossed.
The signatories highlight that this initiative is not meant to halt AI innovation entirely but to establish a pause that allows the international community to develop comprehensive safety standards. They urge policymakers, industry leaders, and researchers to take immediate action to formalize agreements and prevent a dangerous arms race in superintelligence development.
As AI continues its rapid evolution, the coming months could bring significant shifts in global policy and regulation. The collective voice of these more than 850 experts aims to shape that future responsibly, steering AI development along a path safeguarded by consensus and shared ethical principles.