
Google Says Hackers Used AI To Help Find And Exploit Unknown Software Flaw

The company identified a case where hackers used AI tools to help discover and develop an exploit targeting a zero-day vulnerability.

Google said it stopped a cyberattack in which a threat actor used AI tools to find and exploit a previously unknown software flaw.

The incident involved a “zero-day” vulnerability, meaning the software maker was unaware of the flaw, and no fix was available at the time.

According to Google’s Threat Intelligence Group, the attacker used AI to help analyze software and identify a weakness that could be used to gain access. The AI tools were also used to assist in developing the code needed to exploit that weakness.

The target was a widely used, web-based system administration tool. The attack attempted to exploit a logic flaw that could be used to bypass access controls; Google said it detected and blocked the activity before the flaw was successfully exploited.
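Google did not describe the flaw itself. For readers unfamiliar with the category, a logic flaw that bypasses access controls often takes a shape like the following hypothetical sketch (not the actual vulnerability, which was not disclosed), where an authorization check and the routing layer normalize a request path differently:

```python
# Hypothetical illustration of an access-control logic flaw: the admin
# check is case-sensitive, but the router that dispatches the request
# is not, so a case-variant path slips past the check.

def is_admin_path(path: str) -> bool:
    # BUG: case-sensitive comparison, unlike the router below.
    return path.startswith("/admin")

def route(path: str, authenticated: bool) -> str:
    if is_admin_path(path) and not authenticated:
        return "403 Forbidden"
    # The router lowercases the path, so "/Admin/users" reaches the handler.
    if path.lower().startswith("/admin"):
        return "admin panel"
    return "public page"

# An unauthenticated request with a case-variant path bypasses the check.
assert route("/admin/users", authenticated=False) == "403 Forbidden"
assert route("/Admin/users", authenticated=False) == "admin panel"
```

Flaws of this kind involve no memory corruption, which is why they are described as logic flaws: every individual line behaves as written, and the vulnerability lies in the inconsistency between two checks.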

Google said it has “high confidence” that AI was used in the operation, based on characteristics of the code, including structured formatting and elements that resembled AI-generated output.

The company did not identify the attacker, the affected software vendor, or the specific AI tools used.

Google said this is the first case it has observed where AI tools were used to help identify and exploit a previously unknown vulnerability.

The company was also explicit about what did not happen: the attack was not fully automated, and no AI system independently carried out the full process without human involvement.

In most cases tracked by Google, attackers still use AI to speed up existing tasks, such as scanning for known vulnerabilities or generating basic exploit code. In this case, AI use extended into earlier steps tied to finding a new flaw.

Google did not say how much of the vulnerability discovery process was performed by AI versus human operators, nor did it provide data on how often this type of activity occurs.

The company said the incident shows that AI tools can now be used earlier in cyberattacks, including in work tied to identifying new software vulnerabilities, even though human operators remain involved in directing and executing the attack.
