
Google disrupts hackers using AI to exploit an unknown weakness in a company’s digital defense

Google said Monday that it had disrupted a criminal group’s attempt to use AI to exploit another company’s previously unknown digital vulnerability, adding to heightened worries across government and private industry about AI’s risks for cybersecurity.

Google shared limited information about the attackers and the target, but John Hultquist, chief analyst at the tech giant’s threat intelligence arm, said it represents a moment cybersecurity experts have warned about for years: malicious hackers arming themselves with AI to supercharge their ability to break into the world’s computers.

“It’s here,” Hultquist said. “The era of AI-driven vulnerability and exploitation is already here.”

It comes at a time of leaps in AI’s abilities to find vulnerabilities, including the Mythos model announced a month ago by Anthropic. Among those trying to bolster their defenses is President Donald Trump’s administration, which has shifted its approach to how it plans to vet the most powerful AI models before their public release.

After following through on a campaign promise to repeal Democratic regulations around the fast-developing technology, the Republican administration and its allies are now sending mixed signals about the government playing a larger role in AI oversight.

“Some people don’t want there to be a regulatory response to this and others do,” said Dean Ball, a senior fellow at the Foundation for American Innovation who was previously a White House tech policy adviser and a lead author of Trump’s AI policy roadmap last year.

“I don’t like regulation,” Ball said. “I would prefer for things not to be regulated. But I think we need to in this case.”

Google says it found evidence of AI helping in cyberattack

Google said it observed a group of prominent “threat actors” planning a big operation relying on a bug they had found. The vulnerability allowed them to bypass two-factor authentication to access a popular online system administration tool, which Google declined to name.

The company called it a zero-day exploit, a cyberattack that takes advantage of a previously unknown security vulnerability. “Zero-day” refers to the fact that security engineers have had zero days to develop a fix for the vulnerability.

Google said it notified the affected company and law enforcement and was able to disrupt the operation before it caused any damage. But as it traced the hackers’ footprints, it found evidence they had used an AI large language model, the same technology that powers popular chatbots, to discover the vulnerability.

Google didn’t reveal which AI model was used in the cyberattack, only that it was most likely not Google’s own Gemini or Anthropic’s Claude Mythos. Google also didn’t reveal which group it suspected in the attack but said there was no evidence it was tied to an adversarial government, though the company said groups tied to China and North Korea have been exploring similar techniques.

Hultquist said that compared with government spies who typically work slowly and quietly, criminal hackers have some of the most to gain from AI’s “tremendous capability for speed” in finding and weaponizing security bugs.

“There’s a race between you and them to stop them before they can essentially get whatever data they need to extort you with, or launch ransomware,” he said in an interview. “AI is going to be a huge advantage because they can move a lot faster.”

Anthropic’s Mythos has sparked a panic and call for regulation

Trump’s Commerce Department announced last week that it signed new agreements with Google, Microsoft and Elon Musk’s xAI to evaluate their most powerful AI models before their public release, building on previous agreements the Biden administration made with Anthropic and . But the announcement later disappeared from the Commerce Department website.

It was the latest example of jumbled signals from the Trump administration in the month since Anthropic announced a new model it called Mythos that it said was so “strikingly capable” at hacking and cybersecurity work that it could only release it to a small group of trusted organizations.

Anthropic created an initiative called Project Glasswing, bringing together tech giants including Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world’s critical software from the “severe” fallout the new model could pose to public safety, national security and the economy. But its rollout was complicated by a public and legal fight with the Pentagon and Trump himself over its AI technology.

Its top rival, OpenAI, has since introduced a similar model. The company said Friday it was releasing a specialized cybersecurity version of ChatGPT that would only be available to “defenders responsible for securing critical infrastructure” to help them find and patch vulnerabilities in their code.

Ball said he’s optimistic that, over the long term, AI tools that are increasingly good at coding will make us safer from the routine cyberattacks afflicting hospitals, schools and other organizations. In the meantime, however, he said there are 鈥渦ntold trillions of lines of software code鈥 supporting the world’s computing systems that are at risk if AI tools are unleashed to exploit all of their bugs.

It could take years to harden all of that software, a process that Ball believes would be aided by coordination from the U.S. government.

In the meantime, Ball predicts a “transitional period” where cybersecurity risks rise significantly and “the world might actually be more dangerous.”

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
