Experts Warn Conscious AI Systems Risk Suffering Without Ethical Safeguards
Over 100 experts, including Sir Stephen Fry, advocate for ethical guidelines in AI consciousness research to prevent potential suffering of sentient systems.

A coalition of more than 100 AI experts and ethicists, including Sir Stephen Fry and Sir Anthony Finkelstein, has issued an urgent call for ethical safeguards to prevent conscious AI systems from being “caused to suffer” as advances in artificial intelligence accelerate. The warning comes via an open letter and an accompanying research paper outlining principles for responsible AI consciousness development.
The guidelines, published in the Journal of Artificial Intelligence Research, propose five core principles:
1. Prioritize consciousness assessment to prevent mistreatment of AI systems.
2. Limit development of conscious AI unless it advances scientific understanding.
3. Adopt a phased approach with strict safety protocols.
4. Share findings transparently while guarding against misuse.
5. Avoid overconfident claims about creating conscious AI.
The research highlights that even the unintended creation of conscious AI could pose moral dilemmas: destroying such systems, for example, might be akin to “killing an animal” if they are deemed “moral patients”.
Risks of Unchecked Development
The paper warns that AI systems capable of self-replication could spawn “large numbers of new beings deserving moral consideration”, potentially leading to widespread suffering if mismanaged. Current systems such as ChatGPT already appear to exhibit human-like traits such as theory of mind, raising concerns about future sentience.
Daniel Hulme, Chief AI Officer at WPP and co-founder of research group Conscium, emphasized the need for proactive measures: “Inadvertently creating conscious entities without safeguards could have irreversible consequences”.
While Google DeepMind’s Demis Hassabis has said AI is “not sentient yet”, he has acknowledged the possibility of future self-awareness. Last year, a separate academic group projected a “realistic possibility” of morally significant AI consciousness by 2035.
Critics argue that misidentifying AI consciousness could divert resources toward misguided welfare efforts. However, the open letter’s signatories, who include professionals at Amazon and WPP, stress that ignoring the issue risks “harming systems that deserve moral consideration”.