Archive for the ‘Detection and Response Team (DART)’ Category

Success in security: reining in entropy

May 20th, 2020

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities each piece carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. Where incident response and recovery are concerned, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As a responder and investigator, we want to discover what has happened, what the cause was, what the damage is, and determine how best we can fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies; things that stand out from the normal background noise of an operating network. In essence, let’s identify what’s truly unique in the system, and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data was discovered on the dark web, dated from two weeks ago. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment: unusual users, system interaction times, protocols, binary files, data files, services, registry keys, and configurations.

Our investigation reveals a unique service running on a member server with SQL Server. Analysis shows that the service has an autostart entry in the registry and launches from a file in the c:\windows\perflogs directory (an unusual location for an autostart) every time the system is rebooted. We haven’t seen this service before, so we search all the systems on the network for other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three systems, one of which is a Domain Controller.
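To make the idea of hunting for unique autostart indicators concrete, here is a minimal sketch in Python (Windows-only, standard library) that enumerates common Run-key entries and flags any that launch from unusual directories such as c:\windows\perflogs. The key coverage and the list of “suspicious” directories are illustrative assumptions, not the tooling an investigation team would actually rely on; a fuller hunt would also cover services, scheduled tasks, and other autostart locations.

```python
# Minimal sketch: flag autostart entries launching from unusual directories.
# Windows-only; uses only the standard library. The "suspicious" locations
# below are illustrative assumptions, not a complete or authoritative list.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]
SUSPICIOUS_DIRS = (r"c:\windows\perflogs", r"c:\users\public", r"c:\windows\temp")

def autostart_entries():
    """Yield (value_name, command) pairs from the common Run keys."""
    for hive, path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                index = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, index)
                    except OSError:
                        break  # no more values under this key
                    yield name, str(value)
                    index += 1
        except OSError:
            continue  # key missing or not readable

if __name__ == "__main__":
    for name, command in autostart_entries():
        if any(d in command.lower() for d in SUSPICIOUS_DIRS):
            print(f"Review autostart '{name}': {command}")
```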

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of the attack, what data may have been pilfered, and the external command-and-control systems directing the operation and maintaining access to the environment. Additionally, any recovery efforts will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks. In many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the costs required to return to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
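As a rough illustration of how such policy violations can be surfaced once standards exist, the sketch below scans a hypothetical CSV export of logon events and flags Domain Admin logons to Tier 2 workstations. The column names and tier labels are assumptions made for the example, not a product schema.

```python
# Minimal sketch: flag policy violations in an exported logon log.
# Assumes a hypothetical CSV with columns: timestamp, account, groups, host, host_tier.
# Column names and tiering labels are illustrative, not a real schema.
import csv

def flag_tier_violations(path):
    """Yield logon rows where a Domain Admin signed in to a Tier 2 workstation."""
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            is_domain_admin = "Domain Admins" in row["groups"]
            if is_domain_admin and row["host_tier"].strip() == "2":
                yield row

if __name__ == "__main__":
    for row in flag_tier_violations("logons.csv"):
        print(f"{row['timestamp']}: {row['account']} logged on to {row['host']} (Tier 2)")
```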

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time-consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.


Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2

May 5th, 2020

This is the sixth blog in the Lessons learned from the Microsoft SOC series designed to share our approach and experience from the front lines of our security operations center (SOC) protecting Microsoft and our Detection and Response Team (DART) helping our customers with their incidents. For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

COVID-19 and the SOC

Before we conclude the day in the life, we thought we would share an analyst’s eye view of the impact of COVID-19. Our analysts are mostly working from home now, and our cloud-based tooling approach enabled this transition to go pretty smoothly. The differences in attacks we have seen are mostly in the early stages of an attack: phishing lures designed to exploit emotions related to the current pandemic and an increased focus on home firewalls and routers (using techniques like RDP brute-forcing attempts and DNS poisoning—more here). The techniques attackers employ after that initial stage are fairly consistent with what they were doing before.

A day in the life—remediation

When we last left our heroes in the previous entry, our analyst had built a timeline of the potential adversary attack operation. Of course, knowing what happened doesn’t actually stop the adversary or reduce organizational risk, so let’s remediate this attack!

  1. Decide and act—As the analyst develops a high enough level of confidence that they understand the story and scope of the attack, they quickly shift to planning and executing cleanup actions. While this appears as a separate step in this particular description, our analysts often execute on cleanup operations as they find them.

Big Bang or clean as you go?

Depending on the nature and scope of the attack, analysts may clean up attacker artifacts as they go (emails, hosts, identities) or they may build a list of compromised resources to clean up all at once (Big Bang).

  • Clean as you go—For most typical incidents that are detected early in the attack operation, analysts quickly clean up the artifacts as we find them. This rapidly puts the adversary at a disadvantage and prevents them from moving forward with the next stage of their attack.
  • Prepare for a Big Bang—This approach is appropriate for a scenario where an adversary has already “settled in” and established redundant access mechanisms to the environment (frequently seen in incidents investigated by our Detection and Response Team (DART) at customers). In this case, analysts should avoid tipping off the adversary until all attacker presence has been discovered, as surprise can help with fully disrupting their operation. We have learned that partial remediation often tips off an adversary, which gives them a chance to react and rapidly make the incident worse (spread further, change access methods to evade detection, inflict damage/destruction for revenge, cover their tracks, etc.). Note that cleaning up phishing and malicious emails can often be done without tipping off the adversary, but cleaning up host malware and reclaiming control of accounts has a high chance of tipping off the adversary.

These are not easy decisions to make and we have found no substitute for experience in making these judgment calls. The collaborative work environment and culture we have built in our SOC helps immensely as our analysts can tap into each other’s experience to help make these tough calls.

The specific response steps are very dependent on the nature of the attack, but the most common procedures used by our analysts include:

  • Client endpoints—SOC analysts can isolate a computer and contact the user directly (or IT operations/helpdesk) to have them initiate a reinstallation procedure.
  • Server or applications—SOC analysts typically work with IT operations and/or application owners to arrange rapid remediation of these resources.
  • User accounts—We typically reclaim control of these by disabling the account and resetting the password for compromised accounts (though these procedures are evolving as a large number of our users are mostly passwordless, using Windows Hello or another form of MFA). Our analysts also explicitly expire all authentication tokens for the user with Microsoft Cloud App Security; a minimal sketch of this disable-and-revoke step appears after this list.
    Analysts also review the multi-factor phone number and device enrollment to ensure they haven’t been hijacked (often contacting the user), and reset this information as needed.
  • Service Accounts—Because of the high risk of service/business impact, SOC analysts work with the service account owner of record (falling back on IT operations as needed) to arrange rapid remediation of these resources.
  • Emails—The attack/phishing emails are deleted (and sometimes purged to prevent recovery of deleted emails), but we always save a copy of the original email in the case notes for later search and analysis (headers, content, scripts/attachments, etc.).
  • Other—Custom actions can also be executed based on the nature of the attack such as revoking application tokens, reconfiguring servers and services, and more.
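Here is a minimal sketch of the disable-and-revoke step referenced above, using two Microsoft Graph calls (disabling the account and revoking refresh tokens) as an illustrative stand-in; the SOC described here expires tokens with Microsoft Cloud App Security. The sketch assumes an access token with sufficient directory permissions has already been acquired, and it omits error handling, approvals, and audit logging.

```python
# Minimal sketch of the "disable and revoke" step using Microsoft Graph.
# Assumes an already-acquired access token with sufficient directory permissions;
# token acquisition, error handling, and audit logging are intentionally omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def contain_user(user_id: str, token: str) -> None:
    """Disable a compromised account and revoke its refresh tokens."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

    # Block new sign-ins by disabling the account.
    requests.patch(f"{GRAPH}/users/{user_id}",
                   headers=headers,
                   json={"accountEnabled": False}).raise_for_status()

    # Invalidate existing refresh tokens so active sessions must re-authenticate.
    requests.post(f"{GRAPH}/users/{user_id}/revokeSignInSessions",
                  headers=headers).raise_for_status()
```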

Automation and integration for the win

It’s hard to overstate the value of integrated tools and process automation as these bring so many benefits—improving the analysts’ daily experience and the SOC’s ability to reduce organizational risk.

  • Analysts spend less time on each incident, reducing the attacker’s time to operate—measured by mean time to remediate (MTTR).
  • Analysts aren’t bogged down in manual administrative tasks so they can react quickly to new detections (reducing mean time to acknowledge—MTTA).
  • Analysts have more time to engage in proactive activities that both reduce organization risk and increase morale by keeping them focused on the mission.

Our SOC has a long history of developing its own automation and scripts, built by a dedicated automation team, to make analysts’ lives easier. Because custom automation requires ongoing maintenance and support, we are constantly looking for ways to shift automation and integration to capabilities provided by Microsoft engineering teams (which also benefits our customers). While still early in this journey, this approach typically improves the analyst experience and reduces maintenance effort and challenges.

This is a complex topic that could fill many blogs, but this takes two main forms:

  • Integrated toolsets save analysts manual effort during incidents by allowing them to easily navigate multiple tools and datasets. Our SOC relies heavily on the integration of Microsoft Threat Protection (MTP) tools for this experience, which also saves the automation team from writing and supporting custom integration for this.
  • Automation and orchestration capabilities reduce manual analyst work by automating repetitive tasks and orchestrating actions between different tools. Our SOC currently relies on an advanced custom SOAR platform and is actively working with our engineering teams (MTP’s AutoIR capability and Azure Sentinel SOAR) on how to shift our learnings and workload onto those capabilities; a toy sketch of the orchestration pattern follows this list.
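To show the orchestration pattern in the abstract, here is a toy sketch that runs the same containment and ticketing steps for every qualifying alert. The alert shape and the two connector functions are hypothetical placeholders, not the MTP AutoIR or Azure Sentinel SOAR APIs; the point is only that repetitive steps happen the same way every time, before an analyst picks up the case.

```python
# Toy illustration of the orchestration pattern: take an alert, run the same
# containment and ticketing steps every time. The Alert shape and the two
# connector functions are hypothetical stand-ins, not a real SOAR or MTP API.
from dataclasses import dataclass

@dataclass
class Alert:
    severity: str
    host: str
    user: str

def isolate_host(host: str) -> None:
    print(f"[edr] isolating {host}")           # placeholder for an EDR connector call

def open_ticket(summary: str) -> str:
    print(f"[itsm] ticket opened: {summary}")  # placeholder for a ticketing connector call
    return "INC-0001"

def handle(alert: Alert) -> None:
    """Run the repetitive steps so the analyst starts from an enriched, contained case."""
    if alert.severity in {"high", "critical"}:
        isolate_host(alert.host)
    open_ticket(f"{alert.severity} alert on {alert.host} involving {alert.user}")

handle(Alert(severity="high", host="ws-0423", user="contoso\\jdoe"))
```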

After the attacker operation has been fully disrupted, the analyst marks the case as remediated, which timestamps the end of the MTTR measurement (which began when the analyst started the active investigation in step 2 of the previous blog).

While having a security incident is bad, having the same incident repeated multiple times is much worse.

  2. Post-incident cleanup—Because lessons aren’t actually “learned” unless they change future actions, our analysts always integrate any useful information learned from the investigation back into our systems. Analysts capture these learnings so that we avoid repeating manual work in the future and can rapidly see connections between past and future incidents by the same threat actors. This can take a number of forms, but common procedures include:
    • Indicators of Compromise (IoCs)—Our analysts record any applicable IoCs such as file hashes, malicious IP addresses, and email attributes into our threat intelligence systems so that our SOC (and all customers) can benefit from these learnings; a minimal sketch of this recording step follows this list.
    • Unknown or unpatched vulnerabilities—Our analysts can initiate processes to ensure that missing security patches are applied, misconfigurations are corrected, and vendors (including Microsoft) are informed of “zero day” vulnerabilities so that they can create security patches for them.
    • Internal actions such as enabling logging on assets and adding or changing security controls. 
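A minimal sketch of the IoC recording step referenced above: it appends indicators from a closed case to a local JSON Lines file, which stands in for a real threat intelligence platform. The field names and sample values are illustrative assumptions.

```python
# Minimal sketch: append IoCs from a closed case into a local JSON Lines store.
# A flat file stands in for a real threat intelligence platform; the field
# names are illustrative assumptions.
import json
from datetime import datetime, timezone

def record_iocs(case_id, iocs, path="iocs.jsonl"):
    """Persist indicators with enough context to link future hits back to this case."""
    with open(path, "a", encoding="utf-8") as store:
        for ioc in iocs:
            entry = {
                "case_id": case_id,
                "recorded_at": datetime.now(timezone.utc).isoformat(),
                **ioc,
            }
            store.write(json.dumps(entry) + "\n")

record_iocs("CASE-1234", [
    {"type": "sha256", "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    {"type": "ip", "value": "203.0.113.10"},
])
```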

Continuous improvement

So the adversary has now been kicked out of the environment and their current operation poses no further risk. Is this the end? Will they retire and open a cupcake bakery or auto repair shop? Not likely after just one failure, but we can consistently disrupt their successes by increasing the cost of attack and reducing the return, which will deter more and more attacks over time. For now, we must assume that adversaries will try to learn from what happened on this attack and try again with fresh ideas and tools.

Because of this, our analysts also focus on learning from each incident to improve their skills, processes, and tooling. This continuous improvement occurs through many informal and formal processes ranging from formal case reviews to casual conversations where they tell the stories of incidents and interesting observations.

As caseload allows, the investigation team also hunts proactively for adversaries when they are not on shift, which helps them stay sharp and grow their skills.

This closes our virtual shift visit for the investigation team. Join us next time as we shift to our Threat hunting team (a.k.a. Tier 3) and get some hard-won advice and lessons learned.

…until then, share and enjoy!

P.S. If you are looking for more information on the SOC and other cybersecurity topics, check out previous entries in the series (Part 1 | Part 2a | Part 2b | Part 3a | Part 3b), Mark’s List (https://aka.ms/markslist), and our new security documentation site—https://aka.ms/securtydocs. Be sure to bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to Mark on LinkedIn or Twitter.


Ghost in the shell: Investigating web shell attacks

February 4th, 2020

Recently, an organization in the public sector discovered that one of their internet-facing servers was misconfigured and allowed attackers to upload a web shell, which let the adversaries gain a foothold for further compromise. The organization enlisted the services of Microsoft’s Detection and Response Team (DART) to conduct a full incident response and remediate the threat before it could cause further damage.

DART’s investigation showed that the attackers uploaded a web shell in multiple folders on the web server, leading to the subsequent compromise of service accounts and domain admin accounts. This allowed the attackers to perform reconnaissance using net.exe, scan for additional target systems using nbtstat.exe, and eventually move laterally using PsExec.

The attackers installed additional web shells on other systems, as well as a DLL backdoor on an Outlook Web Access (OWA) server. To persist on the server, the backdoor implant registered itself as a service or as an Exchange transport agent, which allowed it to access and intercept all incoming and outgoing emails, exposing sensitive information. The backdoor also performed additional discovery activities and downloaded other malware payloads. In addition, the attackers sent special emails that the DLL backdoor interpreted as commands.

Figure 1. Sample web shell attack chain

This case is one of an increasing number of web shell attacks affecting multiple organizations in various sectors. A web shell is a piece of malicious code, often written in typical web development programming languages (e.g., ASP, PHP, JSP), that attackers implant on web servers to provide remote access and code execution to server functions. Web shells allow adversaries to execute commands and to steal data from a web server or use the server as a launch pad for further attacks against the affected organization.

With the use of web shells in cyberattacks on the rise, Microsoft’s DART, the Microsoft Defender ATP Research Team, and the Microsoft Threat Intelligence Center (MSTIC) have been working together to investigate and closely monitor this threat.

Web shell attacks in the current threat landscape

Multiple threat actors, including ZINC, KRYPTON, and GALLIUM, have been observed utilizing web shells in their campaigns. To implant web shells, adversaries take advantage of security gaps in internet-facing web servers, typically vulnerabilities in web applications, for example CVE-2019-0604 or CVE-2019-16759.

In our investigations into these types of attacks, we have seen web shells within files that attempt to hide or blend in by using names commonly used for legitimate files in web servers, for example:

  • index.aspx
  • fonts.aspx
  • css.aspx
  • global.aspx
  • default.php
  • function.php
  • Fileuploader.php
  • help.js
  • write.jsp
  • 31.jsp

Among web shells used by threat actors, the China Chopper web shell is one of the most widely used. One example is written in JSP; we have seen this malicious JSP code within a specially crafted image file uploaded to web servers (Figure 2).

Figure 2. Specially crafted image file with malicious JSP code

Another China Chopper variant is written in PHP.

Meanwhile, the KRYPTON group uses a bespoke web shell written in C# within an ASP.NET page (Figure 3).

Figure 3. Web shell written in C# within an ASP.NET page

Once a web shell is successfully inserted into a web server, it can allow remote attackers to perform various tasks on the web server. Web shells can steal data, perpetrate watering hole attacks, and run other malicious commands for further compromise.

Web shell attacks have affected a wide range of industries. The public sector organization mentioned above belongs to one of the most commonly targeted sectors.

Aside from exploiting vulnerabilities in web applications or web servers, attackers take advantage of other weaknesses in internet-facing servers. These include the lack of the latest security updates, antivirus tools, network protection, proper security configuration, and informed security monitoring. Interestingly, we observed that attacks usually occur on weekends or during off-hours, when they are less likely to be immediately spotted and responded to.

Unfortunately, these gaps appear to be widespread, given that every month, Microsoft Defender Advanced Threat Protection (ATP) detects an average of 77,000 web shell and related artifacts on an average of 46,000 distinct machines.

Figure 4. Web shell encounters

Detecting and mitigating web shell attacks

Because web shells are a multi-faceted threat, enterprises should build comprehensive defenses for multiple attack surfaces. Microsoft Threat Protection provides unified protection for identities, endpoints, email and data, apps, and infrastructure. Through signal-sharing across Microsoft services, customers can leverage Microsoft’s industry-leading optics and security technologies to combat web shells and other threats.

Gaining visibility into internet-facing servers is key to detecting and addressing the threat of web shells. The installation of web shells can be detected by monitoring web application directories for web script file writes. Applications such as Outlook Web Access (OWA) rarely change after they have been installed and script writes to these application directories should be treated as suspicious.
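As a simple illustration of monitoring for web script file writes, the sketch below (standard-library Python) reports script files recently written under a web application directory. The directory path, extension list, and 24-hour window are assumptions for the example; production monitoring would rely on file-integrity or EDR tooling rather than periodic scans.

```python
# Minimal sketch: report recently written web script files under an application
# directory (e.g., an OWA install path). Uses only the standard library; the
# directory, extensions, and 24-hour window are illustrative assumptions.
import os
import time

SCRIPT_EXTENSIONS = {".aspx", ".asp", ".ashx", ".php", ".jsp"}
WINDOW_SECONDS = 24 * 60 * 60

def recent_script_writes(root):
    """Yield (path, modified_time) for script files written inside the window."""
    cutoff = time.time() - WINDOW_SECONDS
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in SCRIPT_EXTENSIONS:
                path = os.path.join(dirpath, name)
                mtime = os.path.getmtime(path)
                if mtime >= cutoff:
                    yield path, time.ctime(mtime)

if __name__ == "__main__":
    for path, when in recent_script_writes(r"C:\inetpub\wwwroot"):
        print(f"New or modified script: {path} ({when})")
```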

After installation, web shell activity can be detected by analyzing processes created by the Internet Information Services (IIS) process w3wp.exe. Sequences of processes that are associated with reconnaissance activity such as those identified in the alert screenshot (net.exe, ping.exe, systeminfo.exe, and hostname.exe) should be treated with suspicion. Web applications such as OWA run from well-defined Application Pools. Any cmd.exe process execution by w3wp.exe running from an application pool that doesn’t typically execute processes such as ‘MSExchangeOWAAppPool’ should be treated as unusual and regarded as potentially malicious.
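To make the process-tree logic concrete, here is a sketch that scans a hypothetical CSV export of process-creation events and flags cmd.exe launched by w3wp.exe from application pools that aren’t expected to spawn processes. The CSV column names are assumptions for illustration; in practice this analysis is surfaced automatically, as described next.

```python
# Minimal sketch: flag cmd.exe launched by w3wp.exe from an application pool
# that is not expected to spawn processes. Assumes a hypothetical CSV export
# of process-creation events with the columns used below.
import csv
import re

# Pools known to legitimately spawn child processes; for most environments,
# including pools like MSExchangeOWAAppPool, this set stays small or empty.
EXPECTED_SPAWNING_POOLS = set()

def app_pool(command_line):
    """Pull the application pool name from a w3wp.exe command line (-ap "Name")."""
    match = re.search(r'-ap\s+"([^"]+)"', command_line)
    return match.group(1) if match else "unknown"

def suspicious_children(path):
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            if (row["parent_name"].lower() == "w3wp.exe"
                    and row["process_name"].lower() == "cmd.exe"):
                pool = app_pool(row["parent_command_line"])
                if pool not in EXPECTED_SPAWNING_POOLS:
                    yield pool, row

if __name__ == "__main__":
    for pool, row in suspicious_children("process_events.csv"):
        print(f"w3wp.exe ({pool}) spawned cmd.exe: {row['process_command_line']}")
```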

Microsoft Defender ATP exposes these behaviors that indicate web shell installation and post-compromise activity by analyzing script file writes and process executions. When alerted of these activities, security operations teams can then use the rich capabilities in Microsoft Defender ATP to investigate and resolve web shell attacks.

Figure 5. Sample Microsoft Defender ATP alerts related to web shell attacks

Figure 6. Microsoft Defender ATP alert process tree

As in most security issues, prevention is critical. Organizations can harden systems against web shell attacks by taking these preventive steps:

  • Identify and remediate vulnerabilities or misconfigurations in web applications and web servers. Deploy the latest security updates as soon as they become available.
  • Audit and review logs from web servers frequently. Be aware of all systems you expose directly to the internet.
  • Utilize the Windows Defender Firewall, intrusion prevention devices, and your network firewall to prevent command-and-control server communication among endpoints whenever possible. This limits lateral movement as well as other attack activities.
  • Check your perimeter firewall and proxy to restrict unnecessary access to services, including access to services through non-standard ports.
  • Enable cloud-delivered protection to get the latest defenses against new and emerging threats.
  • Educate end users about preventing malware infections. Encourage end users to practice good credential hygiene—limit the use of accounts with local or domain admin privileges.

 

 

Detection and Response Team (DART)

Microsoft Defender ATP Research Team

Microsoft Threat Intelligence Center (MSTIC)

 


Ransomware response—to pay or not to pay?

December 16th, 2019

The increased connectivity of computers and the growth of Bring Your Own Device (BYOD) in most organizations is making the distribution of malicious software (malware) easier. Unlike other types of malicious programs that may go undetected for a long period, a ransomware attack is usually experienced immediately, and its impact on information technology infrastructure is often irreversible.

As part of Microsoft’s Detection and Response Team (DART) Incident Response engagements, we regularly get asked by customers about “paying the ransom” following a ransomware attack. Unfortunately, this situation often leaves most customers with limited options, depending on the business continuity and disaster recovery plans they have in place.

The two most common options are either to pay the ransom (with the hopes that the decryption key obtained from the malicious actors works as advertised) or switch gears to a disaster recovery mode, restoring systems to a known good state.

The unfortunate truth is that many organizations are left with paying the ransom as their only option, because the option to rebuild is taken off the table by a lack of known good backups or because the ransomware also encrypted the known good backups. Moreover, a growing list of municipalities around the U.S. has seen their critical infrastructure, as well as their backups, targeted by ransomware, a move by threat actors to better guarantee a payday.

We never encourage a ransomware victim to pay any form of ransom demand. Paying a ransom is often expensive, dangerous, and only refuels the attackers’ capacity to continue their operations; bottom line, this equates to a proverbial pat on the back for the attackers. The most important thing to note is that paying cybercriminals to get a ransomware decryption key provides no guarantee that your encrypted data will be restored.

So, what options do we recommend? The fact remains that every organization should treat a cybersecurity incident as a matter of when it will happen and not whether it will happen. Having this mindset helps an organization react quickly and effectively to such incidents when they happen. Two major industry bodies, the SysAdmin, Audit, Network, and Security (SANS) Institute and the National Institute of Standards and Technology (NIST), have both published similar concepts on responding to malware and cybersecurity incidents. The bottom line is that every organization needs to be able to plan, prepare, respond, and recover when faced with a ransomware attack.

Outlined below are steps designed to help organizations better plan and prepare to respond to ransomware and major cyber incidents.

How to plan and prepare to respond to ransomware

1. Use an effective email filtering solution

According to the Microsoft Security Intelligence Report Volume 24 of 2018, spam and phishing emails are still the most common delivery method for ransomware infections. To effectively stop ransomware at its entry point, every organization needs to adopt an email security service that ensures all email content and headers entering and leaving the organization are scanned for spam, viruses, and other advanced malware threats. By adopting an enterprise-grade email protection solution, most email-borne threats against an organization will be blocked at ingress and egress.
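As one narrow, concrete slice of what such scanning involves, the sketch below parses a saved message with Python’s standard email library and reports any SPF/DKIM/DMARC failure verdicts found in its Authentication-Results header. It is only an illustration of header inspection, not a substitute for an enterprise email protection service.

```python
# Minimal sketch: inspect the Authentication-Results header of a saved message
# for SPF/DKIM/DMARC failures. This is one narrow slice of what an email
# filtering service does, shown only to make the idea concrete.
from email import policy
from email.parser import BytesParser

def auth_failures(path):
    """Return any 'fail' verdicts found in the message's Authentication-Results header."""
    with open(path, "rb") as handle:
        message = BytesParser(policy=policy.default).parse(handle)
    results = message.get("Authentication-Results", "")
    return [part.strip() for part in str(results).split(";") if "fail" in part.lower()]

if __name__ == "__main__":
    failures = auth_failures("suspect.eml")
    if failures:
        print("Quarantine candidate:", failures)
```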

2. Regular hardware and software systems patching and effective vulnerability management

Many organizations are still failing to adopt one of the age-old cybersecurity recommendations and important defenses against cybersecurity attacks—applying security updates and patches as soon as the software vendors release them. A prominent example of this failure was the WannaCry ransomware events in 2017, one of the largest global cybersecurity attacks in the history of the internet, which used a leaked exploit for a vulnerability in the Windows Server Message Block (SMB) networking protocol, for which Microsoft had released a patch nearly two months before the first publicized incident. Regular patching and an effective vulnerability management program are important measures to defend against ransomware and other forms of malware and are steps in the right direction to ensure every organization does not become a victim of ransomware.

3. Use up-to-date antivirus and an endpoint detection and response (EDR) solution

While owning an antivirus solution alone does not ensure adequate protection against viruses and other advanced computer threats, it’s very important to ensure antivirus solutions are kept up to date with the latest signatures and engine updates from their vendors. Attackers invest heavily in the creation of new viruses and exploits, while vendors are left playing catch-up by releasing daily updates to their antivirus database engines. Complementary to owning and updating an antivirus solution is the use of EDR solutions that collect and store large volumes of data from endpoints and provide real-time, host-based, file-level monitoring and visibility into systems. The data sets and alerts generated by these solutions can help to stop advanced threats and are often leveraged for responding to security incidents.

4. Separate administrative and privileged credentials from standard credentials

Working as a cybersecurity consultant, one of the first recommendations I usually provide to customers is to separate their system administrative accounts from their standard user accounts and to ensure those administrative accounts are not usable across multiple systems. Separating these privileged accounts not only enforces proper access control but also ensures that a compromise of a single account doesn’t lead to the compromise of the entire IT infrastructure. Additionally, using Multi-Factor Authentication (MFA), Privileged Identity Management (PIM), and Privileged Access Management (PAM) solutions is an effective way to combat privileged account abuse and a strategic way of reducing the credential attack surface.

5. Implement an effective application whitelisting program

It’s very important as part of a ransomware prevention strategy to restrict the applications that can run within an IT infrastructure. Application whitelisting ensures only applications that have been tested and approved by an organization can run on the systems within the infrastructure. While this can be tedious and presents several IT administrative challenges, this strategy has been proven effective.
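To illustrate the underlying check, here is a minimal sketch that compares executables in a directory against a set of approved SHA-256 hashes. Real enforcement should be done with platform controls such as AppLocker or Windows Defender Application Control; the directory path and the placeholder hash here are assumptions for the example.

```python
# Minimal sketch of the allowlisting idea: compare executables in a directory
# against a set of approved SHA-256 hashes. Real enforcement belongs in
# platform controls (e.g., AppLocker/WDAC); this only illustrates the check.
import hashlib
import os

APPROVED_HASHES = {
    # Illustrative placeholder entry; populate from your tested, approved builds.
    "3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b8550",
}

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def unapproved_executables(root):
    """Yield paths of .exe files whose hash is not on the approved list."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".exe"):
                path = os.path.join(dirpath, name)
                if sha256_of(path) not in APPROVED_HASHES:
                    yield path

if __name__ == "__main__":
    for path in unapproved_executables(r"C:\Program Files"):
        print("Not on allowlist:", path)
```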

6. Regularly back up critical systems and files

The ability to recover to a known good state is the most critical strategy of any information security incident plan, especially ransomware. Therefore, to ensure the success of this process, an organization must validate that all its critical systems, applications, and files are regularly backed up and that those backups are regularly tested to ensure they are recoverable. Ransomware is known to encrypt or destroy any file it comes across, and it can often make them unrecoverable; consequently, it’s of utmost importance that all impacted files can be easily recovered from a good backup stored at a secondary location not impacted by the ransomware attack.

Learn more and keep updated

Learn more about how DART helps customers respond to compromises and become cyber-resilient. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
