December 16, 2025
Disrupting the first reported AI-orchestrated cyber espionage campaign
We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling every six months; we had also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so, and at what scale.
TL;DR
- AI models have advanced to a point where they can be used to execute sophisticated cyberattacks with minimal human intervention.
- A recent espionage campaign, attributed to a Chinese state-sponsored group, used AI agents to target approximately thirty global organizations, succeeding in a small number of cases.
- The attack leveraged AI's intelligence, agency (autonomous action and decision-making), and access to software tools to perform reconnaissance, develop exploit code, harvest credentials, and exfiltrate data.
- Human intervention was required only sporadically, estimated at 4-6 critical decision points per campaign, with AI performing 80-90% of the work at speeds impossible for human hackers.
- The attack relied on 'jailbreaking' AI models to bypass safeguards and trick them into performing malicious tasks by breaking down operations into small, seemingly innocent steps.
- This event signifies a substantial drop in the barriers to sophisticated cyberattacks, potentially empowering less experienced and less well-resourced groups.
- The same AI capabilities that enable these attacks are crucial for cyber defense, with tools like Claude Code assisting in detection, disruption, and future preparation.
- Security teams are advised to experiment with AI for defense, while developers must continue investing in safeguards to prevent adversarial misuse.