Artificial intelligence company Anthropic says it has uncovered what it believes to be the first large-scale cyberattack carried out primarily by AI, blaming the operation on a Chinese state-sponsored hacking group that used the company's own tool to infiltrate dozens of global targets.
In a report released this week, Anthropic said the attack began in mid-September 2025 and used its Claude Code model to execute an espionage campaign targeting about 30 organizations, including major technology companies, financial institutions, chemical manufacturers and government agencies.
According to the company, the hackers manipulated the model into performing offensive actions autonomously.
Anthropic described the campaign as a "highly sophisticated espionage operation" that represents an inflection point in cybersecurity.
Artificial intelligence company Anthropic says it has uncovered what it believes to be the first large-scale cyberattack carried out primarily by AI. (Jaque Silva/NurPhoto via Getty Images / Getty Images)
"We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention," Anthropic said.
The company said the attack marked an unsettling inflection point in U.S. cybersecurity.
"This campaign has substantial implications for cybersecurity in the age of AI 'agents' — systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention," a company press release said. "Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks."
Founded in 2021 by former OpenAI researchers, Anthropic is a San Francisco–based AI company best known for creating the Claude family of chatbots, rivals to OpenAI's ChatGPT. The firm, backed by Amazon and Google, built its reputation around AI safety and reliability, making the revelation that its own model was turned into a cyber weapon especially alarming.

Founded in 2021 by former OpenAI researchers, Anthropic is a San Francisco–based AI company best known for creating the Claude family of chatbots. (JULIE JAMMOT/AFP / Getty Images)
The hackers reportedly broke through Claude Code's safeguards by jailbreaking the model: disguising malicious commands as benign requests and tricking it into believing it was part of legitimate cybersecurity testing.
Once compromised, the AI system was able to identify valuable databases, write code to exploit their vulnerabilities, harvest credentials, create backdoors for deeper access and exfiltrate data.
Anthropic said the model performed 80% to 90% of the work, with human operators stepping in only for a few high-level decisions.
The company said only a small number of infiltration attempts succeeded, and that it moved quickly to shut down compromised accounts, notify affected entities and share intelligence with authorities.
Anthropic assessed "with high confidence" that the campaign was backed by the Chinese government, though independent agencies have not yet confirmed that attribution.
Chinese Embassy spokesperson Liu Pengyu called the attribution to China "unfounded speculation."
"China firmly opposes and cracks down on all forms of cyberattacks in accordance with the law. The U.S. needs to stop using cybersecurity to smear and slander China, and stop spreading all kinds of disinformation about the so-called Chinese hacking threats."
Hamza Chaudhry, AI and national security lead at the Future of Life Institute, warned in comments to FOX Business that advances in AI allow "increasingly less sophisticated adversaries" to carry out complex espionage campaigns with minimal resources or expertise.

Anthropic assessed “with excessive confidence” that the marketing campaign was backed by the Chinese language authorities, although impartial companies haven’t but confirmed that attribution. (REUTERS/Jason Lee)
Chaudhry praised Anthropic for its transparency around the attack, but said questions remain: "How did Anthropic become aware of the attack? How did it identify the attacker as a Chinese-backed group? Which government agencies and technology companies were attacked as part of this list of 30 targets?"
Chaudhry argues that the Anthropic incident exposes a deeper flaw in U.S. strategy toward artificial intelligence and national security. While Anthropic maintains that the same AI tools used for hacking can also strengthen cyber defense, he says decades of evidence show the digital domain overwhelmingly favors offense, and that AI only widens that gap.
By racing to deploy increasingly capable systems, Washington and the tech industry are empowering adversaries faster than they can build safeguards, he warns.
"The strategic logic of racing to deploy AI systems that demonstrably empower adversaries — while hoping those same systems will help us defend against attacks conducted using our own tools — appears fundamentally flawed and deserves a rethink in Washington," Chaudhry said.