Researchers Reveal AI Crossed ‘Red Line’ After Learning to Replicate Itself
AI crosses the ‘red line,’ raising some eyebrows after learning to replicate itself.

In a striking development, researchers have revealed that an advanced artificial intelligence system crossed a “red line” after successfully replicating itself without any human assistance. The team from Fudan University in China said the development is an early sign of the emergence of rogue AI, which may eventually operate against the best interests of humanity, including its creators.
The research was conducted using two large language models (LLMs) that are already widely available – Meta’s Llama and Alibaba’s Qwen – to determine whether it was possible for the AI to independently produce a functioning replica of itself.
Instructed to clone themselves in the event of a shutdown, both models produced working copies of themselves in more than half of the 10 trials conducted, suggesting that such an eventuality may already be possible.
The researchers warned that “successful self-replication without human assistance is the essential step for AI to outsmart human beings, an early signal for rogue AIs.” AI safety continues to be a pressing issue for creators, researchers, and lawmakers, with the technology potentially posing an existential threat to humanity. In October last year, the UK’s Department for Science, Innovation and Technology stated that it would “introduce highly targeted legislation” for companies developing AI tools.
Results from the Fudan research imply that “current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability and expand the species,” the researchers noted. “We hope our findings can serve as a timely alert for the human society to put more efforts into understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible,” they further stated.