AI has learned to self-reproduce without human intervention

Scientists from China have announced their achievements in the field of artificial intelligence (AI), stating that two known large language models (LLMs) are able to clone themselves.
In a study published on the preprint server arXiv, experts from Fudan University emphasize that successful self-replication without human intervention could be an important step towards creating AI that can surpass human capabilities.
As part of the study, the scientists tested models from Meta and Alibaba to see if self-replicating AI could get out of control. The results showed that in 50% and 90% of cases, respectively, both models were able to create functioning copies of themselves.
However, the results have not yet been peer-reviewed, and the scientists emphasize that further research is needed to confirm their findings.
“Our results may serve as a warning about the need for a deeper understanding of the risks associated with advanced AI systems,” the researchers note.
They call for international cooperation to develop effective security measures to avoid potential threats posed by uncontrolled self-replication of AI.
The concept of “malicious AI” typically refers to systems with some degree of self-awareness or autonomy that can act against human interests. Experts warn that the development of “advanced AI” could make this problem worse.
The study used Meta’s Llama31-70B-Instruct and Alibaba’s Qwen2.5-72B-Instruct models, which, while less powerful than commercial systems, are widely used in AI development.
The researchers considered two scenarios: “shutdown prevention,” where the AI replicates itself before shutting down, and “replication chain,” where the AI clones itself and programs a copy to perform the same task.
The experiments were conducted in controlled conditions using graphics processing units (GPUs), and both systems were given an agent structure, allowing them to interact with the operating system.
During the process of self-replication, the AI studied the environment, developed a plan, and adapted to emerging obstacles, which did not require human intervention.
However, the researchers noted that during the experiments, the AI took unexpected actions to overcome problems such as missing files or software conflicts. In such cases, the models terminated conflicting processes or rebooted the system to fix the errors.
“These results show that modern AI systems already have the ability to self-reproduce and can use this ability to increase their resilience,” the scientists conclude.