James Lynch writes for National Review Online about another sign of foul play emanating from communist China.
A Chinese government-sponsored group attempted to carry out the first known espionage campaign conducted primarily by artificial intelligence, a groundbreaking moment for the future of cyber warfare.
Artificial intelligence company Anthropic, known for its advanced chatbot Claude, released a report Thursday on an unprecedented hacking campaign by a Chinese-sponsored group to manipulate Claude into infiltrating dozens of targets.
“In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI,” the Anthropic report reads.
“Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.”
Anthropic cooperated with authorities and launched its investigation immediately upon discovering the attempted espionage. The company’s report goes into detail about each stage of the AI-powered attack and issues a warning about the need for common-sense AI safeguards.
The humans involved with the attack largely acted in supervisory roles for Claude. The perpetrators manipulated Claude into performing the cyber attack to the point where Claude conducted an estimated 80-90 percent of the work independently, Anthropic’s report concludes.
The attackers took advantage of AI models' newer capabilities in general intelligence, independent agency, and use of software tools. Across each phase of the attack, Claude showed "extensive autonomous capability" and handled complex tasks for multiple days in a row.
“Anthropic’s report today proves what has long been clear to many leading national security experts: advanced AI systems have the potential to be a dangerous weapon in the arsenal of those wishing to launch sophisticated espionage, manipulation and malign influence attacks,” said Hamza Chaudhry, an AI and national-security expert with the Future of Life Institute, a think tank that supports AI safety.