Google Uncovers AI Malware That Evades Detection via Gemini

Google’s Threat Intelligence Group has identified AI-powered malware that rewrites its own code during attacks using Gemini, signaling a shift to more autonomous cyber threats.

By Jace Reed

4 min read

Image Credit: Unsplash

Google raised global cybersecurity alarms by confirming the discovery of AI-driven malware that rewrites its own code in real time during attacks.

This groundbreaking revelation came from Google’s Threat Intelligence Group, which tracked unprecedented behavior in several newly identified malware families targeting critical systems in 2025.

The threat actors are using the Gemini large language model to create malware that constantly adapts, challenging the effectiveness of signature-based detection.

Researchers described evidence of operational AI-enabled malware as a “significant step toward more autonomous and adaptive cyberattacks.”

How Was AI Malware First Detected in the Wild?

Researchers from Google’s Threat Intelligence Group documented the first observed instances of malware using AI to change its source code while live on target systems.

Previously, such techniques were limited to academic proof-of-concept or speculative research. The 2025 findings marked a move from theory into operational cybercrime.

The detection began with the identification of PROMPTFLUX in June 2025. Analysts noted that the malware sent continuous queries to Gemini, which returned new, obfuscated versions of its own VBScript code, effectively allowing it to morph faster than traditional defense tools could respond.

Did you know?
PROMPTFLUX interacts with the Google Gemini API (specifically, Gemini 1.5 Flash or later) to dynamically request new code. This is a rare, early example of malware using a commercial, multi-purpose AI API for core functionality.

What Makes PROMPTFLUX and PROMPTSTEAL Unique?

PROMPTFLUX is characterized by its “Thinking Robot” module, specifically engineered to request fresh variants of malicious code from Gemini on a scheduled basis.

Each new variant is generated with explicit instructions to evade known antivirus signatures and is saved to the system’s startup folder to persist across reboots.

PROMPTSTEAL, by contrast, simulates an image-generation program while secretly communicating with the Qwen2.5-Coder-32B-Instruct model via an API.

It dynamically generates Windows commands designed to siphon documents and steal sensitive data from targeted computers, primarily within high-profile organizations.

How Are State-Sponsored Hackers Using AI Malware?

Google’s report ties some of the most sophisticated usage to Russian cyber operations, notably by the state-backed group APT28. These attackers deployed PROMPTSTEAL in campaigns against targets in Ukraine, demonstrating real-world use of AI malware as a strategic tool in geopolitical conflict.

Investigations further revealed that Chinese, Iranian, and North Korean groups are leveraging large language models to accelerate every phase of attack, from initial reconnaissance using deceptive credentials to crafting custom exploits and automating malware evolution.

What Does This Mean for Cybersecurity Defenses?

The emergence of self-rewriting malware powered by AI raises the bar for defenders everywhere. Signature-based solutions, which rely on identifying static patterns in malicious code, are far less effective against threats that can change their appearance every hour.

Cybersecurity experts warn that organizations must shift their focus to behavior-based detection, AI-powered anomaly monitoring, and deeper integration between endpoint protection systems and real-time threat intelligence updates from sources such as Google’s Gemini safeguards.

Google has responded by disabling abusive accounts linked to these attacks and by rapidly strengthening safeguards in Gemini and related AI services.

The company collaborates with global security partners to identify new AI abuse patterns and share threat intelligence to protect users.

These steps are seen as urgently needed, as Google’s analysts predict that AI-enhanced malware will become the standard for organized criminals and state actors alike.

The race is on to develop equally adaptive AI-based defenses to meet this new wave of evolving threats.

Security teams everywhere are now reassessing their threat models as AI malware moves from rare prototype to daily operational reality.

The battle for cybersecurity dominance is set to intensify, with attackers and defenders alike looking to gain the upper hand in AI-powered warfare.

© 2025 Wordwise Media.
All rights reserved.