Microsoft Threat Intelligence recently detected and blocked a credential phishing campaign that likely used AI-generated code to obfuscate its payload and evade traditional defenses. Appearing to be aided by a large language model (LLM), the activity obfuscated its behavior within an SVG file, leveraging business terminology and a synthetic structure to disguise its malicious intent. In analyzing the malicious file, Microsoft Security Copilot assessed that the code was “not something a human would typically write from scratch due to its complexity, verbosity, and lack of practical utility.”
Like many transformative technologies, AI is being adopted by both defenders and cybercriminals. While defenders use AI to detect, analyze, and respond to threats at scale, attackers are experimenting with AI to enhance their own operations, such as by crafting more convincing lures, automating obfuscation, and generating code that mimics legitimate content. Even though the campaign in this case was limited in nature and primarily aimed at US-based organizations, it exemplifies a broader trend of attackers leveraging AI to increase the effectiveness and stealth of their operations. This case also underscores the growing need for defenders to understand and anticipate AI-driven threats.
Despite the sophistication of the obfuscation, the campaign was successfully detected and blocked by Microsoft Defender for Office 365’s AI-powered protection systems, which analyze signals across infrastructure, behavior, and message context that remain largely unaffected by an attacker’s use of AI. By sharing our analysis, we aim to help the security community recognize similar tactics being used by threat actors and reinforce that AI-enhanced threats, while evolving, are not undetectable. As we discuss in this post, an attacker’s use of AI often introduces new artifacts that can be leveraged for detection. By applying these insights and our recommended best practices, organizations can strengthen their own defenses against similar emerging, AI-aided phishing campaigns.
On August 18, 2025, Microsoft Threat Intelligence detected a phishing campaign that leveraged a compromised small business email account to distribute phishing emails intended to steal credentials. The attackers employed a self-addressed email tactic, in which the sender and recipient addresses matched and the actual targets were hidden in the BCC field, an approach intended to bypass basic detection heuristics. The email content was crafted to resemble a file-sharing notification.
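Defenders who want to hunt for this delivery pattern can start from the Microsoft Defender XDR advanced hunting sketch below. Because the EmailEvents table does not expose the To header, the query approximates the self-addressed/BCC tactic by flagging a single external sender fanning an attachment-bearing message out to many recipients in a short burst; the lookback, recipient threshold, and time window are illustrative assumptions to tune per environment.
// Hunting sketch (assumption-based): one external sender delivering the same
// attachment-bearing message to many recipients within a short window
let lookback = 30d;
EmailEvents
| where Timestamp > ago(lookback)
| where EmailDirection == "Inbound" and AttachmentCount > 0
| summarize Recipients = dcount(RecipientEmailAddress), FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
    by SenderFromAddress, Subject
| where Recipients >= 10 and LastSeen - FirstSeen <= 1h
| order by Recipients desc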
Attached to the email was a file named 23mb – PDF- 6 pages.svg, designed to look like a legitimate PDF document even though the file extension indicates it is an SVG file. SVG files (Scalable Vector Graphics) are attractive to attackers because they are text-based and scriptable, allowing them to embed JavaScript and other dynamic content directly within the file. This makes it possible to deliver interactive phishing payloads that appear benign to both users and many security tools. Additionally, SVGs support obfuscation-friendly features such as invisible elements, encoded attributes, and delayed script execution, all of which can be used to evade static analysis and sandboxing.
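As a complementary, hedged hunting sketch, the Defender XDR EmailAttachmentInfo table can surface SVG attachments whose file names masquerade as documents. The column names come from the published EmailAttachmentInfo schema; the lure keywords are illustrative assumptions rather than confirmed indicators from this campaign.
// Hunting sketch (assumption-based): SVG attachments with document-themed file names
EmailAttachmentInfo
| where Timestamp > ago(30d)
| where FileName endswith ".svg"
| where FileName has_any ("pdf", "pages", "doc", "invoice")  // lure keywords; adjust as needed
| project Timestamp, NetworkMessageId, SenderFromAddress, RecipientEmailAddress, FileName, SHA256, ThreatNames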
When opened, the SVG file redirected the user to a webpage that prompted them to complete a CAPTCHA for security verification, a common social engineering tactic used to build trust and delay suspicion. Although our visibility into this incident was limited to the initial landing page because the activity was detected and blocked, the campaign would very likely have presented a fake sign-in page after the CAPTCHA to harvest credentials.
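Where Safe Links click telemetry is available, the UrlClickEvents table in Defender XDR advanced hunting can help identify users who reached the landing page. The following is a minimal sketch that uses the domain listed in the indicators of compromise table later in this post.
// Hunting sketch: clicks that resolved to the phishing landing page
UrlClickEvents
| where Timestamp > ago(30d)
| where Url has "kmnl.cpfcenters.de"
| project Timestamp, AccountUpn, Url, ActionType, IsClickedThrough, NetworkMessageId, Workload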
Analysis of the SVG code revealed a unique method of hiding its content and behavior. Rather than relying on cryptographic obfuscation, which is commonly used to conceal phishing content, the SVG code in this campaign used business-related language to disguise its malicious activity. It did this in two ways:
First, the beginning of the SVG code was structured to look like a legitimate business analytics dashboard. It contained elements for a supposed Business Performance Dashboard, including chart bars and month labels. These elements, however, were rendered completely invisible to the user by setting their opacity to zero and their fill to transparent. This tactic is designed to mislead anyone casually inspecting the file, making it appear as if the SVG’s sole purpose is to visualize business data. In reality, though, it’s a decoy.
Second, the payload’s functionality was hidden through a creative use of business terms. Within the file, the attackers encoded the malicious payload as a long sequence of business-related terms: words like revenue, operations, risk, or shares were concatenated into a hidden data-analytics attribute of an invisible <text> element within the SVG.
The terms in this attribute were later used by embedded JavaScript, which systematically processed the business-related words through several transformation steps. Instead of directly including malicious code, the attackers encoded the payload by mapping pairs or sequences of these business terms to specific characters or instructions. As the script runs, it decodes the sequence, reconstructing the hidden functionality from what appears to be harmless business metadata. This obfuscated functionality included redirecting a user’s browser to the initial phishing landing page, triggering browser fingerprinting, and initiating session tracking.
Given the unique methods used to obfuscate the SVG payload’s functionality, we hypothesized that the attacker may have used AI to assist them. We asked Security Copilot to analyze the contents of the SVG file to assess whether it was generated by AI or an LLM. Security Copilot’s analysis indicated that it was highly likely that the code was synthetic and likely generated by an LLM or a tool using one. Security Copilot determined that the code exhibited a level of complexity and verbosity rarely seen in manually written scripts, suggesting it was produced by an AI model rather than crafted by a human.
Security Copilot provided five key indicators to support its conclusion.
While the use of AI to obfuscate phishing payloads may seem like a significant leap in attacker sophistication, it’s important to understand that AI does not fundamentally change the core artifacts that security systems rely on to detect phishing threats. AI-generated code may be more complex or syntactically polished, but it still operates within the same behavioral and infrastructural boundaries as human-crafted attacks.
Microsoft Defender for Office 365 uses AI and machine learning models that are trained to detect phishing and designed to identify patterns across multiple dimensions, not just the payload itself. These dimensions include infrastructure, behavior, and message context.
These signals are largely unaffected by whether the payload was written by a human or an LLM. In fact, AI-generated obfuscation often introduces synthetic artifacts, like verbose naming, redundant logic, or unnatural encoding schemes, that can become new detection signals themselves.
Despite the use of AI to obfuscate the SVG payload, this campaign was blocked by Microsoft Defender for Office 365’s detection systems through a combination of infrastructure analysis, behavioral indicators, and message context, none of which were impacted by the use of AI. Signals used to detect this campaign included the self-addressed delivery pattern with targets hidden in the BCC field, email-sending activity from a risky (compromised) account, and SVG attachment traits consistent with credential phishing.
While this campaign was limited in scope and effectively blocked, similar techniques are increasingly being leveraged by a range of threat actors. Sharing our findings equips organizations to identify and mitigate these emerging threats, regardless of the specific threat actor behind them. Microsoft Threat Intelligence recommends mitigations that are effective against a range of phishing threats, including those that may use AI-generated code.
Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.
Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.
Tactic | Observed activity | Microsoft Defender coverage
--- | --- | ---
Initial access | Phishing emails sent from a compromised small business email account; the emails contained an attached SVG file. | Microsoft Defender for Office 365: Tenant admins can use Threat Explorer to query associated SVG file attachments using the file type, file extension, or attachment file name fields. The rule description from Threat Explorer is: “This SVG has traits consistent with credential phishing campaigns.” Microsoft Defender XDR: Malicious email-sending activity from a risky user
Execution | Embedded JavaScript within the attached SVG file executed upon opening in a browser. |
Defense evasion | Obfuscation using invisible SVG elements and encoded business terminology; fake CAPTCHA, browser fingerprinting, and session tracking used to evade detection. |
Impact | Potential credential theft if the targeted user completes the phishing flow. | Microsoft Defender XDR: Risky sign-in attempt following a possible phishing campaign
Security Copilot customers can use the standalone experience to create their own prompts or run prebuilt promptbooks to automate incident response or investigation tasks related to this threat.
Note that some promptbooks require access to plugins for Microsoft products such as Microsoft Defender XDR or Microsoft Sentinel.
Microsoft Sentinel customers can use the TI Mapping analytics (a series of analytics all prefixed with ‘TI map’) to automatically match the malicious domain indicators mentioned in this blog post with data in their workspace. If the TI Map analytics are not currently deployed, customers can install the Threat Intelligence solution from the Microsoft Sentinel Content Hub to have the analytics rule deployed in their Sentinel workspace.
Below are queries that use Microsoft Sentinel’s Advanced Security Information Model (ASIM) functions to hunt threats across both Microsoft first-party and third-party data sources. ASIM also supports deploying parsers to specific workspaces from GitHub using an ARM template or manually.
Detect network domain indicators of compromise using ASIM
The following query checks domain IOCs across data sources supported by the ASIM network session parser; the empty IP address list can be populated if IP indicators become available:
// Domain list - _Im_NetworkSession
let lookback = 30d;
let ioc_ip_addr = dynamic([]); // no IP address IOCs identified for this campaign
let ioc_domains = dynamic(["kmnl.cpfcenters.de"]);
_Im_NetworkSession(starttime=todatetime(ago(lookback)), endtime=now())
| where DstDomain has_any (ioc_domains)
| summarize imNWS_mintime=min(TimeGenerated), imNWS_maxtime=max(TimeGenerated), EventCount=count() by SrcIpAddr, DstIpAddr, DstDomain, Dvc, EventProduct, EventVendor
Detect domain and URL indicators of compromise using ASIM
The following query checks domain and URL IOCs across data sources supported by the ASIM web session parser:
// Domain list - _Im_WebSession
let ioc_domains = dynamic(["kmnl.cpfcenters.de"]);
_Im_WebSession(url_has_any = ioc_domains)
Indicator | Type | Description | First seen | Last seen
--- | --- | --- | --- | ---
kmnl[.]cpfcenters[.]de | Domain | Domain hosting phishing content | 08/18/2025 | 08/18/2025
23mb – PDF- 6 pages[.]svg | File name | File name of SVG attachment | 08/18/2025 | 08/18/2025
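For Defender XDR customers, these indicators can also be swept across email and endpoint telemetry with advanced hunting. The following is a minimal sketch, assuming a 30-day lookback, that unions attachment, email URL, and device network telemetry for the indicators above.
// IOC sweep sketch across Defender XDR advanced hunting tables
let ioc_domain = "kmnl.cpfcenters.de";
let ioc_filename = "23mb – PDF- 6 pages.svg";
union
  (EmailAttachmentInfo | where Timestamp > ago(30d) | where FileName =~ ioc_filename
   | project Timestamp, Source = "EmailAttachmentInfo", Account = RecipientEmailAddress, Indicator = FileName),
  (EmailUrlInfo | where Timestamp > ago(30d) | where Url has ioc_domain
   | project Timestamp, Source = "EmailUrlInfo", Account = "", Indicator = Url),
  (DeviceNetworkEvents | where Timestamp > ago(30d) | where RemoteUrl has ioc_domain
   | project Timestamp, Source = "DeviceNetworkEvents", Account = InitiatingProcessAccountUpn, Indicator = RemoteUrl)
| order by Timestamp asc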
For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.
To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.
To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.