Build support for open source in your organization

May 21st, 2020

Have you ever stared at the same lines of code for hours only to have a coworker identify a bug after just a quick glance? That’s the power of community! Open source software development is guided by the philosophy that a diverse community will produce higher quality code by allowing anyone to review and contribute. Individuals and large enterprises, like Microsoft, have embraced open source to engage people who can help make solutions better. However, not all open source projects are equivalent in quality or support. And, when it comes to security tools, many organizations hesitate to adopt open source. So how should you approach selecting and onboarding the right open source solutions for your organization? Why don’t we ask the community!

Earlier this year at the RSA 2020 Conference, I had the pleasure of sitting on the panel, Open Source: Promise, Perils, and the Path Ahead. Joining me were Inigo Merino, CEO of Cienaga Systems; Dr. Kelley Misata, CEO, Sightline Security; and Lenny Zeltser, CISO, Axonius. In addition to her role at Sightline Security, Kelley also serves as the President and Executive Director of the Open Information Security Foundation (OISF), which builds Suricata, an open source threat detection engine. Lenny created and maintains a Linux distribution called REMnux that organizations use for malware analysis. Ed Moyle, a Partner at SecurityCurve, served as the moderator. Today I’ll share our collective advice for selecting open source components and persuading organizations to approve them.

Which open source solutions are right for your project?

Nobody wants to reinvent the wheel—or should I say, Python—during a big project. You’ve got enough to do already! Often it makes sense to turn to pre-built open source components and libraries. They can save you countless hours, freeing up time to focus on the features that differentiate your product. But how should you decide when to opt for open source? When presented with numerous choices, how do you select the best open source solutions for your company and project? Here are some of the recommendations we discussed during the panel discussion.

  1. Do you have the right staff? A new environment can add complexity to your project. It helps if people on the team have familiarity with the tool or have time to learn it. If your team understands the code, you don’t have to wait for a fix from the community to address bugs. As Lenny said at the conference, “The advantage of open source is that you can get in there and see what’s going on. But if you are learning as you go, it may slow you down. It helps to have the knowledge and capability to support the environment.”
  2. Is the component widely adopted? If lots of developers are using the software, it’s more likely the code is stable. With more eyes on the code, problems get surfaced and resolved faster.
  3. How active is the community? Ideally, the library and components that you use will be maintained and enhanced for years after you deploy them, but there’s no guarantee—that’s also true for commercial options, by the way. An active community makes it more likely that the project will be supported. Check to see when the code base was last updated. Confirm that members answer questions from users.
  4. Is there a long-term vision for the technology? Look for a published roadmap for the project. A roadmap will give you confidence that people are committed to supporting and enhancing the project. It will also help you decide if the open source project aligns with your product roadmap. “For us, a big signal is the roadmap. Does the community have a vision? Do they have a path to get there?” asked Kelley.
  5. Is there a commercial organization associated with the project? Another way to identify a project that is here for the long term is if there is a commercial interest associated with it. If a commercial company is providing funding or support to an open source project, it’s more likely that the support will continue even as community members change. Lenny told the audience, “If there is a commercial funding arm, that gives me peace of mind that the tool is less likely to just disappear.”

Getting legal (or executives or product owners) on board

Choosing the perfect open source solution for your project won’t help if you can’t persuade product owners, legal departments, or executives to approve it. Many organizations and individuals worry about the risks associated with using open source. They may wonder if legal issues will arise if they don’t use the software properly. If the software lacks support or includes security bugs, will the component put the company at risk? The following tips can help you mitigate these concerns:

  1. Adopt change management methodologies: Organizational change is hard because the unknown feels riskier than the known. Leverage existing risk management structures to help your organization evaluate and adopt open source. Familiar processes will help others become more comfortable with new tools. As Inigo said, “Recent research shows that in order to get change through, you need to reduce the perceived risk of adopting said change. To lower those barriers, leverage what the organization already has in terms of governance to transform this visceral fear of the unknown into something that is known and can be managed through existing processes.”
  2. Implement component lifecycle management: Develop a process to determine which components are acceptable for people in your organization to use. By testing components or doing static and dynamic analysis, you reduce the level of risk and can build more confidence with executives.
  3. Identify a champion: If someone in your organization is responsible for mitigating concerns with product owners and legal teams, it will speed up the process.
  4. Enlist help from the open source project: Many open source communities include people who can help you make the business case to your approvers. As Kelley said, “It’s also our job in the open source community to help have these conversations. We can’t just sit passively by and let the enterprise folks figure it out. We need to evangelize our own message. There are many open source projects with people like Lenny and me who will help you make the case.”

Microsoft believes that the only way we can solve our biggest security challenges is to work together. Open source is one way to do that. Next time you look for an open source solution, consider trying today’s tips to help you select the right tools and gain acceptance in your organization.

Learn more

Next month, I’ll follow up this post with more details on how to implement component lifecycle management at your organization.

In the meantime, explore some of Microsoft’s open source solutions, such as the Microsoft Graph Toolkit, DeepSpeed, msticpy, and Attack Surface Analyzer.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to me on LinkedIn or Twitter.


Success in security: reining in entropy

May 20th, 2020

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities each piece carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. Where incident response and recovery are concerned, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As responders and investigators, we want to discover what happened, what the cause was, and what the damage is, and then determine how best to fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies; things that stand out from the normal background noise of an operating network. In essence, let’s identify what’s truly unique in the system, and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data was discovered on the dark web, dated from two weeks ago. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically, we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment: unique users, system interaction times, protocols, binary files, data files, services, registry keys, and configurations such as rogue registry keys.

Our investigation reveals a unique service running on a member server with SQL Server. In fact, analysis shows that the service has an autostart entry in the registry and, every time the system is rebooted, starts from a file in the c:\windows\perflogs directory, an unusual location for an autostart. We haven’t seen this service before, so we investigate across all the systems on the network to locate other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three systems, one of which is a Domain Controller.
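
To make that kind of sweep concrete, here is a minimal sketch of such a hunt, assuming the systems are onboarded to a centralized telemetry tool such as Microsoft Defender ATP advanced hunting; the registry key and path filters are illustrative, not taken from a real case:

DeviceRegistryEvents
| where Timestamp > ago(30d)
// Run-key autostarts are a common persistence location.
| where RegistryKey contains @"\Microsoft\Windows\CurrentVersion\Run"
// Flag any autostart whose target lives in the unusual perflogs directory.
| where RegistryValueData contains @"c:\windows\perflogs"
| summarize FirstSeen = min(Timestamp), LastSeen = max(Timestamp) by DeviceName, RegistryKey, RegistryValueName, RegistryValueData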

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of attacks, what data may have been pilfered, and the external systems on the internet calling the shots and enabling access to the environment. Additionally, any recovery efforts will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks. In many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the costs required to return to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
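
As a sketch of how one of these standards becomes a detection, the following hypothetical advanced hunting query surfaces outgoing web traffic from Domain Controllers. It assumes Microsoft Defender ATP telemetry and an illustrative “DC-” naming convention; substitute your own device inventory:

// Domain Controllers should not create outgoing web traffic,
// so any successful HTTP/HTTPS connection from one deserves review.
DeviceNetworkEvents
| where Timestamp > ago(7d)
| where DeviceName startswith "DC-"  // assumed naming convention
| where ActionType == "ConnectionSuccess"
| where RemotePort in (80, 443)
| project Timestamp, DeviceName, RemoteIP, RemoteUrl, InitiatingProcessFileName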

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.


Cybersecurity best practices to implement highly secured devices

May 20th, 2020

Almost three years ago, we published The Seven Properties of Highly Secured Devices, which introduced a new standard for IoT security and argued, based on an analysis of best-in-class devices, that seven properties must be present on every standalone device that connects to the internet in order to be considered secured. Azure Sphere, now generally available, is Microsoft’s entry into the market: a seven-properties-compliant, end-to-end product offering for building and deploying highly secured IoT devices.

Every connected device should be highly secured, even devices that seem simplistic, like a cactus watering sensor. These details are captured in a new paper titled Nineteen cybersecurity best practices used to implement the seven properties of highly secured devices in Azure Sphere. It focuses on why the seven properties are always required and describes best practices used to implement them in Azure Sphere. The paper provides detailed information about the architecture and implementation of Azure Sphere and discusses design decisions and trade-offs. We hope that the new paper can assist organizations and individuals in evaluating the measures used within Azure Sphere to improve the security of IoT devices. Companies may also want to use this paper as a reference when assessing Azure Sphere or other IoT offerings. In this blog post, we discuss one issue covered in the paper: why are the seven properties always required?

Why are the seven properties applicable to every device that connects to the internet?

If an internet-connected device performs a non-critical function, why does it require all seven properties? Put differently, are the seven properties required only when a device might cause harm if it is hacked? Why would you still want to require an advanced CPU, a security subsystem, a hardware root of trust, and a set of services to secure a simple, innocuous device like a cactus water sensor?

Because any device can be the target of a hacker, and any hacked device can be weaponized.

Consider the Mirai botnet, a real-world example of IoT gone wrong. The Mirai botnet involved approximately 150,000 internet-enabled security cameras. The cameras were hacked and turned into a botnet that launched a distributed denial of service (DDoS) attack that took down internet access for a large portion of the eastern United States. For security experts analyzing this hack, the Mirai botnet was distressingly unsophisticated. It was also a relatively small-scale attack, considering that many IoT devices will sell more than 150,000 units.

Adding internet connectivity to a class of device means a single, remote attack can scale to hundreds of thousands or millions of devices. The ability to scale a single exploit to this degree is cause for reflection on the upheaval IoT brings to the marketplace. Once the decision is made to connect a device to the internet, that device has the potential to transform from a single-purpose device to a general-purpose computer capable of launching a DDoS attack against any target in the world. The Mirai botnet is also a demonstration that a manufacturer does not need to sell many devices to create the potential for a “weaponized” device.

IoT security is not only about “safety-critical” deployments. Any deployment of a connected device at scale requires the seven properties. In other words, the function, purpose, and cost of a device should not be the only considerations when deciding whether security is important.

The seven properties do not guarantee that a device will not be hacked. However, they greatly minimize certain classes of threats and make it possible to detect and respond when a hacker gains a toehold in a device ecosystem. If a device doesn’t have all seven, human practices must be implemented to compensate for the missing features. For example, without renewable security, a security incident will require disconnecting devices from the internet and then recalling those devices or dispatching people to manually patch every device that was attacked.

Implementation challenges

Some of the seven properties, such as a hardware-based root of trust and compartmentalization, require certain silicon features. Others, such as defense in depth, require a certain software architecture as well as silicon features like the hardware-based root of trust. Finally, other properties, including renewable security, certificate-based authentication, and failure reporting, require not only silicon features and certain software architecture choices within the operating system, but also deep integration with cloud services. Stitching these critical pieces of infrastructure together is difficult and prone to errors. Ensuring that a device incorporates these properties could therefore increase its cost.

These challenges led us to believe the seven properties also created an opportunity for security-minded organizations to implement these properties as a platform, which would free device manufacturers to focus on product features, rather than security. Azure Sphere represents such a platform: the seven properties are designed and built into the product from the silicon up.

Best practices for implementing the seven properties

Based on our decades of experience researching and implementing secured products, we identified 19 best practices that were put into place as part of the Azure Sphere product. These best practices provide insight into why Azure Sphere sets such a high standard for security. Read the full paper, Nineteen cybersecurity best practices used to implement the seven properties of highly secured devices in Azure Sphere, for the in-depth discussion of each of these best practices and how they—along with the seven properties themselves—guided our design decisions.

We hope that the discussion of these best practices sheds some additional light on the large number of features the Azure Sphere team implemented to protect IoT devices. We also hope that this provides a new set of questions to consider in evaluating your own IoT solution. Azure Sphere will continue to innovate and build upon this foundation with more features that raise the bar in IoT security.

To read previous blogs on IoT security, visit our blog series: https://www.microsoft.com/security/blog/iot-security/. Be sure to bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Microsoft Build brings new innovations and capabilities to keep developers and customers secure

May 19th, 2020

As both organizations and developers adapt to the new reality of working and collaborating in a remote environment, it’s more important than ever to ensure that their experiences are secure and trusted. As part of this week’s Build virtual event, we’re introducing new Identity innovation to help foster a secure and trustworthy app ecosystem, as well as announcing a number of new capabilities in Azure to help secure customers.

New Identity capabilities to help foster a secure apps ecosystem

As organizations continue to adapt to the new requirements of remote work, we’ve seen an increase in the deployment and usage of cloud applications. These cloud applications often need access to user or company data, which has increased the need to provide strong security not just for users but for the applications themselves. Today we are announcing several capabilities for developers, admins, and end-users that help foster a secure and trustworthy app ecosystem:

  1. Publisher Verification allows developers to demonstrate to customers, with a verified checkmark, that the application they’re using comes from a trusted and authentic source. An application marked as publisher verified means that the publisher has verified their identity through the verification process with the Microsoft Partner Network (MPN) and has associated their MPN account with their application registration.
  2. Application consent policies allow admins to configure policies that determine which applications users can consent to. Admins can allow users to consent to applications that have been Publisher Verified, helping developers unlock user-driven adoption of their apps.
  3. The Microsoft Authentication Library (MSAL) for Angular is generally available and our web library identity.web for ASP.NET Core is in public preview. MSAL makes it easy to implement the right authentication patterns, security features, and integration points that support any Microsoft identity—from Azure Active Directory (Azure AD) accounts to Microsoft accounts.

In addition, we’re making it easier for organizations and developers to secure, manage and build apps that connect with different types of users outside an organization with Azure AD External Identities now in preview. With Azure AD External Identities, developers can build flexible, user-centric experiences that enable self-service sign-up and sign-in and allow continuous customization without duplicating coding effort.

You can learn even more about our Identity-based solutions and additional announcements by heading over to the Azure Active Directory Tech Community blog and reading Alex Simons’ post.

Azure Security Center innovations

Azure Security Center is a unified infrastructure security management system for both Azure and hybrid cloud resources on-premises or in other clouds. We’re pleased to announce two new innovations for Azure Security Center, both of which will help secure our customers:

First, we’re announcing that the Azure Secure Score API is now available to customers, bringing even more innovation to Secure Score, which is a central component of security posture management in Azure Security Center. The recent enhancements to Secure Score (in preview) give customers an easier-to-understand and more effective way to assess risk in their environment and prioritize which actions to take first in order to reduce it. Secure Score also simplifies the long list of findings by grouping the recommendations into a set of security controls, each representing an attack surface and scored accordingly.

Second, we’re announcing that suppression rules for Azure Security Center alerts are now publicly available. Customers can use suppression rules to reduce alert fatigue and focus on the most relevant threats by hiding alerts that are known to be innocuous or related to normal activities in their organization. Suppressed alerts will be hidden in Azure Security Center and Azure Sentinel but will still be available in a “dismissed” state. You can learn more about suppression rules by visiting Suppressing alerts from Azure Security Center’s threat protection.
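
If you want to confirm that a new suppression rule is behaving as intended, one quick check is to query for the alerts it has dismissed. Here is a minimal sketch, assuming Security Center alerts flow into the SecurityAlert table of an Azure Sentinel workspace; verify the exact status and provider values in your environment:

SecurityAlert
| where TimeGenerated > ago(7d)
// "Dismissed" is the assumed status applied to suppressed alerts; confirm in your workspace.
| where Status == "Dismissed"
| project TimeGenerated, AlertName, AlertSeverity, ProviderName, CompromisedEntity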

Azure Disk Encryption and encryption & key management updates

We continue to invest in encryption options for our customers. Here are our most recent updates:

  1. Fifty more Azure services now support customer-managed keys for encryption at rest. This helps customers control their encryption keys to meet their compliance or regulatory requirements. The full list of services is here. We have now made this capability part of the Azure Security Benchmark, so that our customers can govern use of all their Azure services in a consistent manner.
  2. Azure Disk Encryption helps protect data on disks that are used with VMs and virtual machine scale sets, and we have now added the ability to use Azure Disk Encryption to secure Red Hat Enterprise Linux BYOS Gold Images. The subscription must be registered before Azure Disk Encryption can be enabled.

Azure Key Vault innovation

Azure Key Vault is a unified service for secret management, certificate management, and encryption key management, backed by FIPS-validated hardware security modules (HSMs). Here are some of the new capabilities we are bringing for our customers:

  1. Enhanced security with Private Link—This is an optional control that enables customers to access their Azure Key Vault over a private endpoint in their virtual network. Traffic between their virtual network and Azure Key Vault flows over the Microsoft backbone network, thus providing additional assurance.
  2. More choices for BYOK—Some of our customers generate encryption keys outside Azure and import them into Azure Key Vault, in order to meet their regulatory needs or to centralize where their keys are generated. Now, in addition to nCipher nShield HSMs, they can also use SafeNet Luna HSMs or Fortanix SDKMS to generate their keys. These additions are in preview.
  3. Make it easier to rotate secrets—Earlier we released a public preview of notifications for keys, secrets, and certificates. This allows customers to receive events at each point of the lifecycle of these objects and define custom actions. A common action is rotating secrets on a schedule so that they can limit the impact of credential exposure. You can see the new tutorial here.

Platform security innovation

Platform security for customers’ data recently took a big step forward with the General Availability of Azure Confidential Computing. Using the latest Intel SGX CPU hardware backed by attestation, Azure provides a new class of VMs that protects the confidentiality and integrity of customer data while in memory (or “in-use”), ensuring that cloud administrators and datacenter operators with physical access to the servers cannot access the customer’s data.

Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data during a support request. In addition to expanded coverage of services in Customer Lockbox for Microsoft Azure, this feature is now available in preview for our customers in Azure Government cloud.

You can learn more about our Azure security offerings by heading to the Azure Security Center Tech Community.


Operational resilience in a remote work world

May 18th, 2020

Microsoft CEO Satya Nadella recently said, “We have seen two years’ worth of digital transformation in two months.” This is a result of many organizations having to adapt to the new world of document sharing and video conferencing as they become distributed organizations overnight.

At Microsoft, we understand that while the current health crisis we face together has served as this forcing function, some organizations might not have been ready for this new world of remote work, financially or organizationally. Just last summer, a simple lightning strike caused the U.K.’s National Grid to suffer the biggest blackout in decades. It affected homes across the country, shut down traffic signals, and closed some of the busiest train stations in the middle of the Friday evening rush hour. Trains needed to be manually rebooted causing delays and disruptions. And, when malware shut down the cranes and security gates at Maersk shipping terminals, as well as most of the company’s IT network—from the booking site to systems handling cargo manifests, it took two months to rebuild all the software systems, and three months before all cargo in transit was tracked down—with recovery dependent on a single server having been accidentally offline during the attack due to the power being cut off.

Cybersecurity provides the underpinning for operational resilience as more organizations adapt to enabling secure remote work options, whether in the short or long term. Whether a disruption is natural or manmade, the difference between success and struggle comes down to a strategic combination of planning, response, and recovery. To maintain cyber resilience, organizations should regularly evaluate their risk threshold and their ability to operationally execute the processes through a combination of human effort and technology products and services.

While my advice is often a three-pronged approach—turning on multi-factor authentication (MFA) for 100 percent of your employees, 100 percent of the time; using Secure Score to increase an organization’s security posture; and having a mature patching program that includes containment and isolation of devices that cannot be patched—we must also understand that not every organization’s cybersecurity team is as mature as another’s.
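
To illustrate the patching point, the following is a minimal sketch, assuming devices are onboarded to Microsoft Defender ATP and its threat and vulnerability management tables are available in advanced hunting. It surfaces the devices carrying the most critical unpatched CVEs, the best candidates for patching or, failing that, containment and isolation:

// Rank devices by the number of distinct critical CVEs still present,
// so the riskiest machines get patched or isolated first.
DeviceTvmSoftwareVulnerabilities
| where VulnerabilitySeverityLevel == "Critical"
| summarize CriticalCveCount = dcount(CveId) by DeviceName
| order by CriticalCveCount desc
| take 20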

Organizations must now be able to provide their people with the right resources so they are able to securely access data, from anywhere, 100 percent of the time. Every person with corporate network access, including full-time employees, consultants, and contractors, should be regularly trained to develop a cyber-resilient mindset. They shouldn’t just adhere to a set of IT security policies around identity-based access control, but they should also be alerting IT to suspicious events and infections as soon as possible to help minimize time to remediation.

Our new normal means that risks are no longer limited to commonly recognized sources such as cybercriminals, malware, or even targeted attacks. Moving to a secure remote work environment without a resilience plan that includes cyber resilience increases an organization’s risk.

Before COVID, we knew that while a majority of firms have a disaster recovery plan on paper, nearly a quarter never test it, and only 42 percent of global executives are confident their organization could recover from a major cyber event without it affecting their business.

Operational resilience cannot be achieved without a true commitment to, and investment in, cyber resilience. We want to help empower every organization on the planet by continuing to share our learnings to help you reach the state where core operations and services won’t be disrupted by geopolitical or socioeconomic events, natural disasters, or even cyber events.

Learn more about our guidance related to COVID-19 here, and bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Open-sourcing new COVID-19 threat intelligence

May 14th, 2020

A global threat requires a global response. While the world faces the common threat of COVID-19, defenders are working overtime to protect users all over the globe from cybercriminals using COVID-19 as a lure to mount attacks. As a security intelligence community, we are stronger when we share information that offers a more complete view of attackers’ shifting techniques. This more complete view enables us all to be more proactive in protecting, detecting, and defending against attacks.

At Microsoft, our security products provide built-in protections against these and other threats, and we’ve published detailed guidance to help organizations combat current threats (Responding to COVID-19 together). Our threat experts are sharing examples of malicious lures and we have enabled guided hunting of COVID-themed threats using Azure Sentinel Notebooks. Microsoft processes trillions of signals each day across identities, endpoint, cloud, applications, and email, which provides visibility into a broad range of COVID-19-themed attacks, allowing us to detect, protect, and respond to them across our entire security stack. Today, we take our COVID-19 threat intelligence sharing a step further by making some of our own indicators available publicly for those that are not already protected by our solutions. Microsoft Threat Protection (MTP) customers are already protected against the threats identified by these indicators across endpoints with Microsoft Defender Advanced Threat Protection (ATP) and email with Office 365 ATP.

In addition, we are publishing these indicators for those not protected by Microsoft Threat Protection to raise awareness of attackers’ shift in techniques, how to spot them, and how to enable your own custom hunting. These indicators are now available in two ways: in the Azure Sentinel GitHub and through the Microsoft Graph Security API. For enterprise customers who use MISP for storing and sharing threat intelligence, these indicators can easily be consumed via a MISP feed.

This threat intelligence is provided for use by the wider security community, as well as customers who would like to perform additional hunting, as we all defend against malicious actors seeking to exploit the COVID crisis.

This COVID-specific threat intelligence feed represents a start at sharing some of Microsoft’s COVID-related IOCs. We will continue to explore ways to improve the data over the duration of the crisis. While some threats and actors are still best defended more discreetly, we are committed to greater transparency and taking community feedback on what types of information are most useful to defenders in protecting against COVID-related threats. This is a time-limited feed. We are maintaining this feed through the peak of the outbreak to help organizations focus on recovery.

Protection in Azure Sentinel and Microsoft Threat Protection

Today’s release includes file hash indicators related to email-based attachments identified as malicious and attempting to trick users with COVID-19 or Coronavirus-themed lures. The guidance below provides instructions on how to access and integrate this feed in your own environment.

For Azure Sentinel customers, these indicators can either be imported directly into Azure Sentinel using a Playbook or accessed directly from queries.

The Azure Sentinel Playbook that Microsoft has authored will continuously monitor and import these indicators directly into your Azure Sentinel ThreatIntelligenceIndicator table. This Playbook will match the indicators against your event data and generate security incidents when the built-in threat intelligence analytic templates detect activity associated with these indicators.
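
Once the Playbook has run, you can spot-check that the indicators arrived. A minimal sketch follows; the Description filter is an assumption, so adjust it to however the feed labels its entries in your workspace:

ThreatIntelligenceIndicator
| where TimeGenerated > ago(7d)
// Filter to the COVID-19 feed entries; the exact description text may differ.
| where Description has "COVID"
| summarize IndicatorCount = count() by FileHashType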

These indicators can also be accessed directly from Azure Sentinel queries as follows:

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"));
covidIndicators

Figure: Azure Sentinel logs.

A sample detection query is also provided in the Azure Sentinel GitHub. With the table definition above, it is as simple as:

  1. Join the indicators against the logs ingested into Azure Sentinel as follows:
covidIndicators
| join (
    CommonSecurityLog
    | where TimeGenerated >= ago(7d)
    | where isnotempty(FileHash)
) on $left.FileHashValue == $right.FileHash
  2. Then, select “New alert rule” to configure Azure Sentinel to raise incidents based on this query returning results.

Figure: Query results in Azure Sentinel logs.

You should begin to see Alerts in Azure Sentinel for any detections related to these COVID threat indicators.

Microsoft Threat Protection provides protection for the threats associated with these indicators. Attacks with these COVID-19-themed indicators are blocked by Office 365 ATP and Microsoft Defender ATP.

While MTP customers are already protected, they can also make use of these indicators for additional hunting scenarios using the MTP Advanced Hunting capabilities.

Here is a hunting query to see if any process created a file matching a hash on the list.

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"))
| where FileHashType == 'sha256' and TimeGenerated > ago(1d);
covidIndicators
| join (
    DeviceFileEvents
    | where Timestamp > ago(1d)
    | where ActionType == 'FileCreated'
    | take 100
) on $left.FileHashValue == $right.SHA256

Figure: Advanced hunting in Microsoft Defender Security Center.

This is an Advanced Hunting query in MTP that searches for any recipient of an attachment on the indicator list and sees if any recent anomalous log-ons happened on their machine. While COVID threats are blocked by MTP, users targeted by these threats may be at risk for non-COVID related attacks and MTP is able to join data across device and email to investigate them.

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"))
| where FileHashType == 'sha256' and TimeGenerated > ago(1d);
covidIndicators
| join (
    EmailAttachmentInfo
    | where Timestamp > ago(1d)
    | project NetworkMessageId, SHA256
) on $left.FileHashValue == $right.SHA256
| join (
    EmailEvents
    | where Timestamp > ago(1d)
) on NetworkMessageId
| project TimeEmail = Timestamp, Subject, SenderFromAddress, AccountName = tostring(split(RecipientEmailAddress, "@")[0])
| join (
    DeviceLogonEvents
    | project LogonTime = Timestamp, AccountName, DeviceName
) on AccountName
| where (LogonTime - TimeEmail) between (0min .. 90min)
| take 10

Figure: Advanced hunting in Microsoft 365 security.

Connecting an MISP instance to Azure Sentinel

The indicators published on the Azure Sentinel GitHub page can be consumed directly via MISP’s feed functionality. We have published details on doing this at this URL: https://aka.ms/msft-covid19-misp. Please refer to the Azure Sentinel documentation on connecting data from threat intelligence providers.

Using the indicators if you are not an Azure Sentinel or MTP customer

The Azure Sentinel GitHub is public, so anyone can consume the indicators from it: https://aka.ms/msft-covid19-Indicators

Examples of phishing campaigns in this threat intelligence

The following is a small sample set of the types of COVID-themed phishing lures using email attachments that will be represented in this feed. Beneath each screenshot are the relevant hashes and metadata.

Figure 1: Spoofing WHO branding with “cure” and “vaccine” messaging with a malicious .gz file.

Name: CURE FOR CORONAVIRUS_pdf.gz


Figure 2: Spoofing Red Cross Safety Tips with malicious .docm file.

Name: COVID-19 SAFETY TIPS.docm


Figure 3: South African banking lure promoting COVID-19 financial relief with malicious .html files.

Name: SBSA-COVID-19-Financial Relief.html


Figure 4: French language spoofed correspondence from the WHO with malicious XLS Macro file.

Name: ✉-Covid-19 Relief Plan5558-23636sd.htm


If you have questions or feedback on this COVID-19 feed, please email msft-covid19-ti@microsoft.com.


Solving Uninitialized Stack Memory on Windows

May 13th, 2020

This blog post outlines the work that Microsoft is doing to eliminate uninitialized stack memory vulnerabilities from Windows and why we’re on this path. This blog post will be broken down into a few parts that folks can jump to: Uninitialized Memory Background; Potential Solutions to Uninitialized Memory Vulnerabilities; InitAll – Automatic Initialization; Interesting Findings …



Secured-core PCs help customers stay ahead of advanced data theft

May 13th, 2020

Researchers at the Eindhoven University of Technology recently revealed information around “Thunderspy,” an attack that relies on leveraging direct memory access (DMA) functionality to compromise devices. An attacker with physical access to a system can use Thunderspy to read and copy data even from systems that have encryption with password protection enabled.

Secured-core PCs provide customers with Windows 10 systems that come configured from OEMs with a set of hardware, firmware, and OS features enabled by default, mitigating Thunderspy and any similar attacks that rely on malicious DMA.

How Thunderspy works

Like any other modern attack, “Thunderspy” relies on not one but multiple building blocks being chained together. Below is a summary of how Thunderspy can be used to access a system where the attacker does not have the password needed to sign in. A video from the Thunderspy research team showing the attack is available here.

Step 1: A serial peripheral interface (SPI) flash programmer called Bus Pirate is plugged into the SPI flash of the device being attacked. This gives access to the Thunderbolt controller firmware and allows an attacker to copy it over to the attacker’s device.

Step 2: The Thunderbolt Controller Firmware Patcher (tcfp), which is developed as part of Thunderspy, is used to disable the security mode enforced in the Thunderbolt firmware copied over using the Bus Pirate device in Step 1.

Step 3: The modified insecure Thunderbolt firmware is written back to the SPI flash of the device being attacked.

Step 4: A Thunderbolt-based attack device is connected to the device being attacked, leveraging the PCILeech tool to load a kernel module that bypasses the Windows sign-in screen.

Figure: How the Thunderspy attack works.

The result is that an attacker can access a device without knowing the sign-in password for the device. This means that even if a device was powered off or locked by the user, someone that could get physical access to the device in the time it takes to run the Thunderspy process could sign in and exfiltrate data from the system or install malicious software.

Secured-core PC protections

In order to counteract these targeted, modern attacks, Secured-core PCs use a defense-in-depth strategy that leverages features like Windows Defender System Guard and virtualization-based security (VBS) to mitigate risk across multiple areas, delivering comprehensive protection against attacks like Thunderspy.

Mitigating Steps 1 to 4 of the Thunderspy attack with Kernel DMA protection

Secured-core PCs ship with hardware and firmware that support Kernel DMA protection, which is enabled by default in the Windows OS. Kernel DMA protection relies on the Input/Output Memory Management Unit (IOMMU) to block external peripherals from starting and performing DMA unless an authorized user is signed in and the screen is unlocked. Watch this video from the 2019 Microsoft Ignite to see how Windows mitigates DMA attacks.

This means that even if an attacker was able to copy a malicious Thunderbolt firmware to a device, the Kernel DMA protection on a Secured-core PC would prevent any accesses over the Thunderbolt port unless the attacker gains the user’s password in addition to being in physical possession of the device, significantly raising the degree of difficulty for the attacker.

Hardening protection for Step 4 with Hypervisor-protected code integrity (HVCI)

Secured-core PCs ship with hypervisor protected code integrity (HVCI) enabled by default. HVCI utilizes the hypervisor to enable VBS and isolate the code integrity subsystem that verifies that all kernel code in Windows is signed from the normal kernel. In addition to isolating the checks, HVCI also ensures that kernel code cannot be both writable and executable, ensuring that unverified code does not execute.

HVCI helps to ensure that malicious kernel modules like the one used in Step 4 of the Thunderspy attack cannot execute easily as the kernel module would need to be validly signed, not revoked, and not rely on overwriting executable kernel code.

Modern hardware to combat modern threats

A growing portfolio of Secured-core PC devices from the Windows OEM ecosystem is available for customers. They provide a consistent guarantee against modern threats like Thunderspy, with the variety of choices that customers expect when acquiring Windows hardware. You can learn more here: https://www.microsoft.com/en-us/windowsforbusiness/windows10-secured-core-computers

 

Nazmus Sakib

Enterprise and OS Security 


Empowering your remote workforce with end-user security awareness

May 13th, 2020

COVID-19 has rapidly transformed how we all work. Organizations need quick and effective user security and awareness training to address the swiftly changing needs of the new normal for many of us. To help our customers deploy user training quickly, easily and effectively, we are announcing the availability of the Microsoft Cybersecurity Awareness Kit, delivered in partnership with Terranova Security. For those of you ready to deploy training right now, access your kit here. For more details, read on.

Work at home may happen on unmanaged and shared devices, over insecure networks, and in unauthorized or non-compliant apps. The new environment has put cybersecurity decision-making in the hands of remote employees. In addition to the rapid dissolution of corporate perimeters, the threat environment is evolving rapidly as malicious actors take advantage of the current situation to mount coronavirus-themed attacks. As security professionals, we can empower our colleagues to protect themselves and their companies. But choosing topics, producing engaging content, and managing delivery can be challenging, sucking up time and resources. Our customers need immediately deployable and context-specific security training.

CYBERSECURITY AWARENESS KIT

At RSA 2020 this year, we announced our partnership with Terranova Security, to deliver integrated phish simulation and user training in Office 365 Advanced Threat Protection later this year. Our partnership combines Microsoft’s leading-edge technology, expansive platform capabilities, and unparalleled threat insights with Terranova Security’s market-leading expertise, human-centric design and pedagogical rigor. Our intelligent solution will turbo-charge the effectiveness of phish simulation and training while simplifying administration and reporting. The solution will create and recommend context-specific and hyper-targeted simulations, enabling you to customize your simulations to mimic real threats seen in different business contexts and train users based on their risk level. It will automate simulation management from end to end, providing robust analytics to inform the next cycle of simulations and enable rich reporting.

Our Cybersecurity Awareness Kit now makes available a subset of this user-training material relevant to COVID-19 scenarios to aid security professionals tasked with training their newly remote workforces. The kit includes videos, interactive courses, posters, and infographics like the one below. You can use these materials to train your remote employees quickly and easily.

Figure: “Beware of COVID-19 Cyber Scams” infographic.

For Security Professionals, we have created a simple way to host and deliver the training material within your own environment or direct your users to the Microsoft 365 security portal, where the training is hosted as seen below. All authenticated Microsoft 365 users will be able to access the training on the portal. Admins will see the option to download the kit as well. Follow the simple steps, detailed in the README, to deploy the awareness kits to your remote workforce.


ACCESSING THE KIT

All Microsoft 365 customers can access the kit and directions on the Microsoft 365 Security and Compliance Center through this link. If you are not a Microsoft 365 customer or would like to share the training with family and friends who are not employees of your organization, Terranova Security is providing free training material for end-users.

Deploying quick and effective end-user training to empower your remote workforce is one of the ways Microsoft can help customers work productively and securely through COVID-19. For more resources to help you through these times, visit Microsoft’s Secure Remote Work page for the latest information.


CISO stress-busters: post #1 overcoming obstacles

May 11th, 2020

As part of the launch of the U.S. space program’s moon shot, President Kennedy famously said we do these things “not because they are easy, but because they are hard.” The same can be said for the people responsible for security at their organizations; it is not a job one takes because it is easy. But it is critically important to keep our digital lives and work safe. And for the CISOs and leaders of the world, it is a job that is more than worth the hardships.

Recent research from Nominet paints a concerning picture of a few of those hardships. Forty-eight percent of CISO respondents indicated work stress had negatively impacted their mental health; that is almost double the number from last year’s survey. Thirty-one percent reported job stress had negatively impacted their physical health, and 40 percent have seen their job stress impacting their personal lives. Add a fairly rapid churn rate (26 months on average) to all that and it’s clear CISOs are managing a tremendous amount of stress every day. And when crises hit, from incident response after a breach to a suddenly remote workforce after COVID-19, that stress only shoots higher.

Which is why we’re starting this new blog series called “CISO stress-busters.” In the words of CISOs from around the globe, we’ll be sharing insights, guidance, and support from peers on the front lines of the cyber workforce. Kicking us off—the main challenges that CISOs face and how they turn those obstacles into opportunity. The goal of the series is to be a bit of chicken (or chik’n for those vegans out there) soup for the CISO’s soul.

Today’s post features wisdom from three CISOs/Security Leaders:

  • TM Ching, Security CTO at DXC Technology
  • Jim Eckart, (former) CISO at Coca-Cola
  • Jason Golden, CISO at Mainstay Technologies

Clarifying contribution

Ask five different CEOs what their CISOs do and after the high level “manage security” answer you’ll probably get five very different explanations. This is partly because CISO responsibility can vary widely from company to company. So, it’s no surprise that many of the CISOs we interviewed touched on this point.

TM Ching summed it up this way, “Demonstrating my role to the organization can be a challenge—a role like mine may be perceived as symbolic” or that security is just here to “slow things down.” For Jason, “making sure that business leaders understand the difference between IT Operations, Cybersecurity, and InfoSec” can be difficult because execs “often think all of those disciplines are the same thing” and that since IT Ops has the products and solutions, they own security. Jim also bumped up against confusion about the security role with multiple stakeholders pushing and pulling in different directions like “a CIO who says ‘here is your budget,’ a CFO who says ‘why are you so expensive?’ and a general counsel who says ‘we could be leaking information everywhere.’”

What works:

  • Educate Execs—about the role of a CISO. Helping them “understand that it takes a program, that it’s a discipline.” One inflection point is after a breach: “you may be sitting there with an executive, the insurance company, their attorneys, maybe a forensics company and it always looks the same. The executive is looking down the table at the wide-eyed IT person saying ‘What happened?’” It’s an opportunity to educate, to help “make sure the execs understand the purpose of risk management.”—Jason Golden. To see how to do this, watch Microsoft CISO Series Episode 2 Part 1: Security is everyone’s Business.
  • Show Don’t Tell—“It is important to constantly demonstrate that I am here to help them succeed, and not to impose onerous compliance requirements that stall their projects.”—TM Ching
  • Accountability Awareness—CISOs do a lot, but one thing they shouldn’t do is to make risk decisions for the business in a vacuum. That’s why it’s critical to align “all stakeholders (IT, privacy, legal, financial, security, etc.) around the fact that cybersecurity and compliance are business risk issues and not IT issues. IT motions are (and should be) purely in response to the business’ decision around risk tolerance.”—Jim Eckart

Exerting influence

Fans of Boehm's curve know that the earlier security can be introduced into a process, the less expensive it is to fix defects and flaws. But it's not always easy for CISOs to get security a seat at the table, whether it's early in the ideation process for a new customer-facing application or during financial negotiations to move critical workloads to the cloud. As TM put it, "Exerting influence to ensure that projects are secured at Day 0. This is possibly the hardest thing to do." And because "some business owners do not take negative news very well," telling them their new app baby is "security ugly" the day before launch can be a gruesome task. And as Jason pointed out, "it's one thing to talk hypothetically about things like configuration management and change management and here are the things that you need to do to meet those controls so you can keep your contract. It's a different thing to get that embedded in operations so that IT and HR all the way through finance are following the rules for change management and configuration management."

What works:

  • Negotiate engagement—To avoid the last minute “gotchas” or bolting on security after a project has deployed, get into the conversation as early as possible. This isn’t easy, but as TM explains, it can be done. “It takes a lot of negotiations to convince stakeholders why it will be beneficial for them in the long run to take a pause and put the security controls in place, before continuing with their projects.”
  • Follow frameworks—Well-known frameworks like the NIST Cybersecurity Framework, NIST SP800-53, and SP800-37 can help CISOs “take things from strategy to operations” by providing baselines and best practices for building security into the entire organization and systems lifecycle. And that will pay off in the long run; “when the auditors come calling, they’re looking for evidence that you’re following your security model and embedding that throughout the organization.” —Jason

Cultivating culture

Wouldn't it be wonderful if every company had a security mindset and understood the benefits of having a mature, well-funded security and risk management program? If every employee understood what a phish looks like and why they should report it? Unfortunately, most companies aren't laser-focused on security, leaving that education work up to the CISO and their team. And having those conversations with stakeholders that sometimes have conflicting agendas requires technical depth and robust communication skills. That's not easy. As Jim points out, "it's a daunting scope of topics to be proficient in at all levels."

What works:

  • Human firewalls—All the tech controls in the world won't stop 100 percent of attacks; people need to be part of the solution too. "We can address administrative controls, technical controls, physical controls, but you also need to address the culture and human behavior, or the human firewalls. You know you're only going to be marginally successful if you don't engage employees too." —Jason
  • Know your audience—CISOs need to cultivate “depth and breadth. On any given day, I needed to move from board-level conversations (where participants barely understand security) all the way to the depths of zero day vulnerabilities, patching, security architecture.” —Jim

Did you find these insights helpful? What would you tell your fellow CISOs about overcoming obstacles? What works for you? Please reach out to me on LinkedIn and let me know what you thought of this article and if you’re interested in being interviewed for one of our upcoming posts.

The post CISO stress-busters: post #1 overcoming obstacles appeared first on Microsoft Security.

Microsoft researchers work with Intel Labs to explore new deep learning approaches for malware classification

May 8th, 2020 No comments

The opportunities for innovative approaches to threat detection through deep learning, a category of algorithms within the larger framework of machine learning, are vast. Microsoft Threat Protection today uses multiple deep learning-based classifiers that detect advanced threats, for example, evasive malicious PowerShell.

In continued exploration of novel detection techniques, researchers from the Microsoft Threat Protection Intelligence Team and Intel Labs are collaborating to study new applications of deep learning for malware classification, specifically:

  • Leveraging deep transfer learning techniques from computer vision for static malware classification
  • Optimizing deep learning techniques in terms of model size and leveraging platform hardware capabilities to improve execution of deep learning-based malware detection approaches

For the first part of the collaboration, the researchers built on Intel’s prior work on deep transfer learning for static malware classification and used a real-world dataset from Microsoft to ascertain the practical value of approaching the malware classification problem as a computer vision task. The basis for this study is the observation that if malware binaries are plotted as grayscale images, the textural and structural patterns can be used to effectively classify binaries as either benign or malicious, as well as cluster malicious binaries into respective threat families.

The researchers used an approach that they called static malware-as-image network analysis (STAMINA). Using the dataset from Microsoft, the study showed that the STAMINA approach achieves high accuracy in detecting malware with low false positives.

The results and further technical details of the research are listed in the paper STAMINA: Scalable deep learning approach for malware classification and set the stage for further collaborative exploration.

The role of static analysis in deep learning-based malware classification

While static analysis is typically associated with traditional detection methods, it remains an important building block for AI-driven detection of malware. It is especially useful for pre-execution detection engines: static analysis disassembles code without having to run applications or monitor runtime behavior.

Static analysis produces metadata about a file. Machine learning classifiers on the client and in the cloud then analyze the metadata and determine whether a file is malicious. Through static analysis, most threats are caught before they can even run.

For more complex threats, dynamic analysis and behavior analysis build on static analysis to provide more features and build more comprehensive detection. Finding ways to perform static analysis at scale and with high effectiveness benefits overall malware detection methodologies.
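As a toy illustration of such metadata (not Microsoft's production feature set), a few header fields can be pulled from a PE file with the open source pefile library:

import pefile

def extract_metadata(path):
    # Parse the PE headers without ever executing the file
    pe = pefile.PE(path)
    return {
        "num_sections": pe.FILE_HEADER.NumberOfSections,
        "timestamp": pe.FILE_HEADER.TimeDateStamp,
        "size_of_image": pe.OPTIONAL_HEADER.SizeOfImage,
        "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
    }

Features like these would then be fed to client- and cloud-side classifiers alongside many other signals.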

To this end, the research borrowed knowledge from the computer vision domain to build an enhanced static malware detection framework that leverages deep transfer learning to train directly on portable executable (PE) binaries represented as images.

Analyzing malware represented as images

To establish the practicality of the STAMINA approach, which posits that malware can be classified at scale by performing static analysis on malware code represented as images, the study covered three main steps: image conversion, transfer learning, and evaluation.

Diagram showing the steps for the STAMINA approach: pre-processing, transfer learning, and evaluation

First, the researchers prepared the binaries by converting them into two-dimensional images. This step involved pixel conversion, reshaping, and resizing. The binaries were converted into a one-dimensional pixel stream by assigning each byte a value between 0 and 255, corresponding to pixel intensity. Each pixel stream was then transformed into a two-dimensional image by using the file size to determine the width and height of the image.
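A minimal Python sketch of that conversion might look like the following; the fixed 256-pixel width and the 224×224 output size are illustrative assumptions, not the exact parameters used in the study.

import math
import numpy as np
from PIL import Image

def binary_to_image(path, width=256):
    # Each byte becomes one grayscale pixel with intensity 0-255
    data = np.fromfile(path, dtype=np.uint8)
    # Height follows from the file size once a width is chosen
    height = math.ceil(len(data) / width)
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[:len(data)] = data
    img = Image.fromarray(padded.reshape(height, width), mode="L")
    # Resize to the fixed input size expected by the CNN
    return img.resize((224, 224))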

The second step was to use transfer learning, a technique for overcoming the isolated learning paradigm and utilizing knowledge acquired for one task to solve related ones. Transfer learning has enjoyed tremendous success within several different computer vision applications. It accelerates training time by bypassing the need to search for optimized hyperparameters and different architectures—all this while maintaining high classification performance. For this study, the researchers used Inception-v1 as the base model.
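As a rough sketch of this kind of setup (not the researchers' actual training code), the snippet below loads torchvision's GoogLeNet, an Inception-v1 implementation, freezes the pre-trained layers, and attaches a new two-class head; the hyperparameters are illustrative assumptions.

import torch.nn as nn
from torch.optim import Adam
from torchvision import models

model = models.googlenet(pretrained=True)      # Inception-v1 base with pre-trained weights
for param in model.parameters():               # freeze the pre-trained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new benign/malicious classification head

optimizer = Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()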

The study was performed on a dataset of 2.2 million PE file hashes provided by Microsoft. This dataset was temporally split into 60:20:20 segments for training, validation, and test sets, respectively.
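In code, a temporal split simply orders samples by time before slicing, as in this sketch (the file and column names are hypothetical):

import pandas as pd

df = pd.read_csv("pe_hashes.csv").sort_values("first_seen")  # oldest samples first
n = len(df)
train = df.iloc[: int(0.6 * n)]               # oldest 60% for training
val   = df.iloc[int(0.6 * n) : int(0.8 * n)]  # next 20% for validation
test  = df.iloc[int(0.8 * n) :]               # newest 20% held out for testing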

Diagram showing a DNN with pre-trained weights on natural images, and the last portion fine-tuned with new data

Finally, the performance of the system was measured and reported on the holdout test set. The metrics captured include recall at specific false positive ranges, along with accuracy, F1 score, and area under the receiver operating characteristic (ROC) curve.
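For example, recall at a fixed false positive rate can be read off the ROC curve, as in this scikit-learn sketch with placeholder labels and scores:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1])                    # 1 = malicious
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.05, 0.90])  # model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
target_fpr = 0.001                                         # e.g., a 0.1% false positive rate
idx = np.searchsorted(fpr, target_fpr, side="right") - 1   # last threshold with fpr <= target
print("recall at target FPR:", tpr[idx])
print("ROC AUC:", roc_auc_score(y_true, y_score))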

Findings

The joint research showed that applying STAMINA to a real-world holdout test set achieved a recall of 87.05% at a 0.1% false positive rate, and 99.66% recall and 99.07% accuracy at a 2.58% false positive rate overall. These results certainly encourage the use of deep transfer learning for malware classification. It helps accelerate training by bypassing the search for optimal hyperparameters and architectures, saving time and compute resources in the process.

The study also highlights the pros and cons of sample-based methods like STAMINA and metadata-based classification methods. For example, STAMINA can go in-depth into samples and extract additional signals that might not be captured in the metadata. However, for larger applications, STAMINA becomes less effective due to limitations in converting billions of pixels into JPEG images and then resizing them. In such cases, metadata-based methods show their advantages.

Conclusion and future work

The use of deep learning methods for detecting threats drives a lot of innovation across Microsoft. The collaboration with Intel Labs researchers is just one of the ways in which Microsoft researchers and data scientists continue to explore novel ways to improve security overall.

This joint research is a good starting ground for more collaborative work. For example, the researchers plan to collaborate further on platform acceleration optimizations that can allow deep learning models to be deployed on client machines with minimal performance impact. Stay tuned.

 

Jugal Parikh, Marc Marino

Microsoft Threat Protection Intelligence Team

 

The post Microsoft researchers work with Intel Labs to explore new deep learning approaches for malware classification appeared first on Microsoft Security.

Protect your accounts with smarter ways to sign in on World Passwordless Day

May 7th, 2020 No comments

As the world continues to grapple with COVID-19, our lives have become increasingly dependent on digital interactions. Operating at home, we’ve had to rely on e-commerce, telehealth, and e-government to manage the everyday business of life. Our daily online usage has increased by over 20 percent. And if we’re fortunate enough to have a job that we can do from home, we’re accessing corporate apps from outside the company firewall.

Whether we're signing into social media, mobile banking, or our workplace, we're connecting via online accounts that require a username and password. The more we do online, the more accounts we have. It becomes a hassle to constantly create new passwords and remember them. So, we take shortcuts. According to a Ponemon Institute study, people reuse an average of five total passwords, both business and personal. This is one aspect of human nature that hackers bet on. If they get hold of one password, they know they can use it to pry open more of our digital lives. A single compromised password, then, can create a chain reaction of liability.

No matter how strong or complex a password is, it’s useless if a bad actor can socially engineer it away from us or find it on the dark web. Plus, passwords are inconvenient and a drain on productivity. People spend hours each year signing into applications and recovering or resetting forgotten usernames and passwords. This activity doesn’t make things more secure. It only drives up the costs of service desks.

People today are done with passwords

Users want something easier and more convenient. Administrators want something more secure. We don’t think anyone finds passwords a cause to celebrate. That’s why we’re helping organizations find smarter ways to sign in that users will love and hackers will hate. Our hope is that instead of World Password Day, we’ll start celebrating World Passwordless Day.

Passwordless Day Infographic

  • People reuse an average of five passwords across their accounts, both business and personal (Ponemon Institute survey/Yubico).
  • The average person has 90 accounts (Thycotic).
  • Fifty-five percent would prefer a method of protecting accounts that doesn't involve passwords (Ponemon Institute survey/Yubico).
  • Sixty-seven percent of American consumers surveyed by Visa have used biometric authentication and prefer it to passwords.
  • One hundred million to 150 million people use a passwordless method each month (Microsoft research, April 2020).

Since an average of one in every 250 corporate accounts is compromised each month, we know that relying on passwords isn’t a good enterprise defense strategy. As companies continue to add more business applications to their portfolios, the cost of passwords only goes up. In fact, companies are dedicating 30 to 60 percent of their support desk calls to password resets. Given how ineffective passwords can be, it’s surprising how many companies haven’t turned on multi-factor authentication (MFA) for their customers or employees.

Passwordless technology is here—and users are adopting it as the best experience for strong authentication. Last November at Microsoft Ignite, we shared that more than 100 million people were already signing in using passwordless methods each month. That number has now reached over 150 million people. And as the survey data above suggests, people who try passwordless methods such as biometrics tend to prefer them to passwords.

We now have the momentum to push forward initiatives that increase security and reduce cost. New passwordless technologies give users the benefits of MFA in one gesture. To sign in securely with Windows Hello, all you have to do is show your face or press your finger. Microsoft has built support for passwordless authentication into our products and services, including Office, Azure, Xbox, and GitHub. You don't even need to create a username anymore—you can use your phone number instead. Administrators can use single sign-on in Azure Active Directory (Azure AD) to enable passwordless authentication for an unlimited number of apps through native functionality in Windows Hello, the phone-as-a-token capabilities in the Microsoft Authenticator app, or security keys built using the FIDO2 open standards.

Of course, we would never advise our customers to try anything we haven't tried ourselves. We're always our own first customer. Microsoft's IT team switched to passwordless authentication, and now 90 percent of Microsoft employees sign in without entering a password. As a result, hard and soft costs of supporting passwords fell by 87 percent. We expect other customers will experience similar benefits: improved employee productivity, lower IT costs, and a stronger security posture. To learn more about our approach, watch the CISO Spotlight episode with Bret Arsenault (Microsoft CISO) and me. By taking this approach 18 months ago, we were better set up for seamless, secure remote work during COVID-19.

For many of us, working from home will be a new norm for the foreseeable future. We see many opportunities for using passwordless methods to better secure digital accounts that people rely on every day. Whether you’re protecting an organization or your own digital life, every step towards passwordless is a step towards improving your security posture. Now let’s embrace the world of passwordless!

Related articles

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Protect your accounts with smarter ways to sign in on World Passwordless Day appeared first on Microsoft Security.

How to gain 24/7 detection and response coverage with Microsoft Defender ATP

May 6th, 2020 No comments

This blog post is part of the Microsoft Intelligence Security Association guest blog series. To learn more about MISA, go here.

Whether you’re a security team of one or a dozen, detecting and stopping threats around the clock is a challenge. Security incidents don’t happen exclusively during business hours: attackers often wait until the late hours of the night to breach an environment.

At Red Canary, we work with security teams of all shapes and sizes to improve detection and response capabilities. Our Security Operations Team investigates threats in customer environments 24/7/365, removes false positives, and delivers confirmed threats with context. We’ve seen teams run into a wide range of issues when trying to establish after-hours coverage on their own, including:

  • For global enterprises, around-the-clock monitoring can significantly increase the pressure on a U.S.-based security team. If you have personnel around the world, a security team in a single time zone isn't sufficient to cover the times that computing assets are used in those environments.
  • In smaller companies that don’t have global operations, the security team is more likely to be understaffed and unable to handle 24/7 security monitoring without stressful on-call schedules.
  • For security teams of one, being "out of office" is a foreign concept. You're always on. And you need some way to monitor the enterprise while you're away.

Microsoft Defender Advanced Threat Protection (ATP) is an industry-leading endpoint security solution that's built into Windows, with capabilities extended to Mac and Linux servers. Red Canary unlocks the telemetry delivered from Microsoft Defender ATP and investigates every alert, enabling you to immediately increase your detection coverage and waste no time on false positives.

Here’s how those who haven’t started with Red Canary yet can answer the question, “How can I support my 24/7 security needs with Microsoft Defender ATP?”

No matter how big your security team is, the most important first step is notifying the right people based on an on-call schedule. In this post, we’ll describe two different ways of getting Microsoft Defender ATP alerts to your team 24×7 and how Red Canary has implemented this for our customers.

Basic 24/7 via email

Microsoft Defender Security Center allows you to send all Microsoft Defender ATP alerts to an email address. You can set up email alerts under Settings → Alert notifications.

Email notification settings in Microsoft Defender Security Center.

These emails will be sent to your team and should be monitored for high-severity situations after hours.

If sent to a ticketing system, these emails can trigger tickets or after-hours pages to be created for your security team. We recommend limiting the alerts to medium and high severity so that you won't be bothered by informational or low-severity alerts.

Setting up alert emails in Microsoft Defender ATP to be sent to a ticketing system.

Now any future alerts will create a new ticket in your ticketing system, where you can assign security team members to on-call rotations and notify on-call personnel of new alerts (if supported). Once on-call personnel receive the notification, they log into Microsoft Defender Security Center for further investigation and triage.

Enhanced 24/7 via APIs

What if you want to ingest alerts into a system that doesn't use email? You can do this by using the Microsoft Defender ATP APIs. First, you'll need an authentication token, which you can retrieve along the lines shown here:

API call to retrieve authentication token.
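Since the original screenshot is not reproduced here, the following Python sketch shows one way to request that token using the documented application-context flow; the tenant ID, application ID, and secret are placeholders from your own Azure AD app registration.

import requests

tenant_id  = "YOUR_TENANT_ID"      # placeholder values from your app registration
app_id     = "YOUR_APP_ID"
app_secret = "YOUR_APP_SECRET"

token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
body = {
    "resource": "https://api.securitycenter.microsoft.com",
    "client_id": app_id,
    "client_secret": app_secret,
    "grant_type": "client_credentials",
}
token = requests.post(token_url, data=body).json()["access_token"]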

Once you've stored the authentication token, you can use it to poll the Microsoft Defender ATP API and retrieve alerts. Here's an example of the code to pull new alerts.

API call to retrieve alerts from Microsoft Defender ATP.
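Here is an illustrative sketch of polling the documented alerts endpoint, reusing the token from the previous snippet; the one-hour lookback window is an arbitrary choice.

import requests
from datetime import datetime, timedelta

alerts_url = "https://api.securitycenter.microsoft.com/api/alerts"
since = (datetime.utcnow() - timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
params  = {"$filter": f"alertCreationTime gt {since}"}  # OData filter for recent alerts
headers = {"Authorization": f"Bearer {token}"}          # token from the previous snippet

alerts = requests.get(alerts_url, headers=headers, params=params).json()["value"]
for alert in alerts:
    print(alert["id"], alert["severity"], alert["title"])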

The API only returns a subset of the data associated with each alert. Here’s an example of what you might receive.

Example of a Microsoft Defender ATP alert returned from the API.

You can then take this data and ingest it into any of your internal tools. You can learn more about how to access Microsoft Defender ATP APIs in the documentation. Please note, the limited information included in an alert email or API response is not enough to triage the behavior. You will still need to log into the Microsoft Defender Security Center to find out what happened and take appropriate action.

24/7 with Red Canary

By enabling Red Canary, you supercharge your Microsoft Defender ATP deployment by adding a proven 24×7 security operations team who are masters at finding and stopping threats, and an automation platform to quickly remediate and get back to business.

Red Canary continuously ingests all of the raw telemetry generated from your instance of Microsoft Defender ATP as the foundation for our service. We also ingest and monitor Microsoft Defender ATP alerts. We then apply thousands of our own proprietary analytics to identify potential threats that are sent 24/7 to a Red Canary detection engineer for review.

Here's an overview of the process (to go behind the scenes of these operations, check out our detection engineering blog series):

Managed detection and response with Red Canary.

Red Canary is monitoring your Microsoft Defender ATP telemetry and alerts. If anything is a confirmed threat, our team creates a detection and sends it to you using a built-in automation framework that supports email, SMS, phone, Microsoft Teams/Slack, and more. Below is an example of what one of those detections might look like.

Red Canary confirms threats and prioritizes them so you know what to focus on.

At the top of the detection timeline you’ll receive a short description of what happened. The threat has already been examined by a team of detection engineers from Red Canary’s Cyber Incident Response Team (CIRT), so you don’t have to worry about triage or investigation. As you scroll down, you can quickly see the results of the investigation that Red Canary’s senior detection engineers have done on your behalf, including detailed notes that provide context to what’s happening in your environment:

Notes from Red Canary senior detection engineers (in light blue) provide valuable context.

You’re only notified of true threats and not false positives. This means you can focus on responding rather than digging through data to figure out what happened.

What if you don’t want to be woken up, you’re truly unavailable, or you just want bad stuff immediately dealt with? Use Red Canary’s automation to handle remediation on the fly. You and your team can create playbooks in your Red Canary portal to respond to threats immediately, even if you’re unavailable.

Red Canary automation playbook.

This playbook allows you to isolate the endpoint (using the Machine Action resource type in the Microsoft Defender ATP APIs) if Red Canary identifies suspicious activity. You also have the option to set up Automate playbooks that depend on an hourly schedule. For example, you may want to approve endpoint isolation during normal work hours, but use automatic isolation overnight:

Red Canary Automate playbook to automatically remediate a detection.
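For reference, an isolation call against the documented Machine Action API looks roughly like the sketch below; the machine ID and comment are placeholders, and the bearer token would be acquired the same way as in the earlier snippets.

import requests

machine_id = "MACHINE_ID"  # placeholder for the affected device
url = f"https://api.securitycenter.microsoft.com/api/machines/{machine_id}/isolate"
body = {
    "Comment": "Isolating endpoint pending investigation",
    "IsolationType": "Full",   # "Selective" is the less restrictive option
}
response = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
print(response.status_code)    # 201 indicates the machine action was created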

Getting started with Red Canary

Whether you've been using Microsoft Defender ATP since its preview releases or you're just getting started, Red Canary is the fastest way to accelerate your security operations program. Immediate onboarding, increased detection coverage, and a 24/7 CIRT team are all at your fingertips.

Terence Jackson, CISO at Thycotic and Microsoft Defender ATP user, describes what it’s like working with Red Canary:

“I have a small team that has to protect a pretty large footprint. I know the importance of detecting, preventing, and stopping problems at the entry point, which is typically the endpoint. We have our corporate users but then we also have SaaS customers we have to protect. Currently my team tackles both, so for me it’s simply having a trusted partner that can take the day-to-day hunting/triage/elimination of false positives and only provide actionable alerts/intel, which frees my team up to do other critical stuff.”

Red Canary is the fastest way to enhance your detection coverage from Microsoft Defender ATP so you know exactly when and where to respond.

Contact us to see a demo and learn more.

The post How to gain 24/7 detection and response coverage with Microsoft Defender ATP appeared first on Microsoft Security.

Azure Sphere Security Research Challenge Now Open

May 5th, 2020 No comments

The Azure Sphere Security Research Challenge is an expansion of Azure Security Lab, announced at Black Hat in August 2019. At that time, a select group of talented researchers was invited to come and do their worst, emulating criminal hackers in a customer-safe cloud environment. This new research challenge aims to spark new high impact …

Azure Sphere Security Research Challenge Now Open Read More »

The post Azure Sphere Security Research Challenge Now Open appeared first on Microsoft Security Response Center.

Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2

May 5th, 2020 No comments

This is the sixth blog in the Lessons learned from the Microsoft SOC series designed to share our approach and experience from the front lines of our security operations center (SOC) protecting Microsoft and our Detection and Response Team (DART) helping our customers with their incidents. For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

COVID-19 and the SOC

Before we conclude the day in the life, we thought we would share an analyst's eye view of the impact of COVID-19. Our analysts are mostly working from home now, and our cloud-based tooling approach enabled this transition to go pretty smoothly. The differences in attacks we have seen are mostly in the early stages of an attack, with phishing lures designed to exploit emotions related to the current pandemic and an increased focus on home firewalls and routers (using techniques like RDP brute-forcing attempts and DNS poisoning—more here). The attack techniques adversaries attempt to employ after that are fairly consistent with what they were doing before.

A day in the life—remediation

When we last left our heroes in the previous entry, our analyst had built a timeline of the potential adversary attack operation. Of course, knowing what happened doesn’t actually stop the adversary or reduce organizational risk, so let’s remediate this attack!

  3. Decide and act—As the analyst develops a high enough level of confidence that they understand the story and scope of the attack, they quickly shift to planning and executing cleanup actions. While this appears as a separate step in this particular description, our analysts often execute cleanup operations as they find them.

Big Bang or clean as you go?

Depending on the nature and scope of the attack, analysts may clean up attacker artifacts as they go (emails, hosts, identities) or they may build a list of compromised resources to clean up all at once (Big Bang).

  • Clean as you go—For most typical incidents that are detected early in the attack operation, analysts quickly clean up the artifacts as they find them. This rapidly puts the adversary at a disadvantage and prevents them from moving forward with the next stage of their attack.
  • Prepare for a Big Bang—This approach is appropriate for a scenario where an adversary has already "settled in" and established redundant access mechanisms to the environment (frequently seen in incidents investigated by our Detection and Response Team (DART) at customers). In this case, analysts should avoid tipping off the adversary until all attacker presence is fully discovered, as surprise can help fully disrupt their operation. We have learned that partial remediation often tips off an adversary, which gives them a chance to react and rapidly make the incident worse (spread further, change access methods to evade detection, inflict damage/destruction for revenge, cover their tracks, etc.). Note that cleaning up phishing and malicious emails can often be done without tipping off the adversary, but cleaning up host malware and reclaiming control of accounts has a high chance of doing so.

These are not easy decisions to make, and we have found no substitute for experience in making these judgment calls. The collaborative work environment and culture we have built in our SOC helps immensely, as our analysts can tap into each other's experience to help make these tough calls.

The specific response steps are very dependent on the nature of the attack, but the most common procedures used by our analysts include:

  • Client endpoints—SOC analysts can isolate a computer and contact the user directly (or IT operations/helpdesk) to have them initiate a reinstallation procedure.
  • Server or applications—SOC analysts typically work with IT operations and/or application owners to arrange rapid remediation of these resources.
  • User accounts—We typically reclaim control of these by disabling the account and resetting the password (though these procedures are evolving as a large portion of our users become passwordless, using Windows Hello or another form of MFA). Our analysts also explicitly expire all authentication tokens for the user with Microsoft Cloud App Security.
    Analysts also review the multi-factor phone number and device enrollment to ensure it hasn’t been hijacked (often contacting the user), and reset this information as needed.
  • Service Accounts—Because of the high risk of service/business impact, SOC analysts work with the service account owner of record (falling back on IT operations as needed) to arrange rapid remediation of these resources.
  • Emails—The attack/phishing emails are deleted (and sometimes cleared to prevent recovery of deleted emails), but we always save a copy of the original email in the case notes for later search and analysis (headers, content, scripts/attachments, etc.).
  • Other—Custom actions can also be executed based on the nature of the attack such as revoking application tokens, reconfiguring servers and services, and more.

Automation and integration for the win

It's hard to overstate the value of integrated tools and process automation, as these bring so many benefits—improving the analysts' daily experience and improving the SOC's ability to reduce organizational risk.

  • Analysts spend less time on each incident, reducing the attacker's time to operate—measured by mean time to remediate (MTTR).
  • Analysts aren’t bogged down in manual administrative tasks so they can react quickly to new detections (reducing mean time to acknowledge—MTTA).
  • Analysts have more time to engage in proactive activities that both reduce organization risk and increase morale by keeping them focused on the mission.

Our SOC has a long history of developing its own automation and scripts, built by a dedicated automation team, to make analysts' lives easier. Because custom automation requires ongoing maintenance and support, we are constantly looking for ways to shift automation and integration to capabilities provided by Microsoft engineering teams (which also benefits our customers). While still early in this journey, this approach typically improves the analyst experience and reduces maintenance effort and challenges.

This is a complex topic that could fill many blogs, but this takes two main forms:

  • Integrated toolsets save analysts manual effort during incidents by allowing them to easily navigate multiple tools and datasets. Our SOC relies heavily on the integration of Microsoft Threat Protection (MTP) tools for this experience, which also saves the automation team from writing and supporting custom integration for this.
  • Automation and orchestration capabilities reduce manual analyst work by automating repetitive tasks and orchestrating actions between different tools. Our SOC currently relies on an advanced custom SOAR platform and is actively working with our engineering teams (MTP’s AutoIR capability and Azure Sentinel SOAR) on how to shift our learnings and workload onto those capabilities.

After the attacker operation has been fully disrupted, the analyst marks the case as remediated, which is the timestamp signaling the end of MTTR measurement (which started when the analyst began the active investigation in step 2 of the previous blog).

While having a security incident is bad, having the same incident repeated multiple times is much worse.

  4. Post-incident cleanup—Because lessons aren't actually "learned" unless they change future actions, our analysts always integrate any useful information learned from an investigation back into our systems. Analysts capture these learnings so that we avoid repeating manual work in the future and can rapidly see connections between past and future incidents by the same threat actors. This can take a number of forms, but common procedures include:
    • Indicators of Compromise (IoCs)—Our analysts record any applicable IoCs such as file hashes, malicious IP addresses, and email attributes into our threat intelligence systems so that our SOC (and all customers) can benefit from these learnings.
    • Unknown or unpatched vulnerabilities—Our analysts can initiate processes to ensure that missing security patches are applied, misconfigurations are corrected, and vendors (including Microsoft) are informed of “zero day” vulnerabilities so that they can create security patches for them.
    • Internal actions such as enabling logging on assets and adding or changing security controls. 

Continuous improvement

So the adversary has now been kicked out of the environment and their current operation poses no further risk. Is this the end? Will they retire and open a cupcake bakery or auto repair shop? Not likely after just one failure, but we can consistently disrupt their successes by increasing the cost of attack and reducing the return, which will deter more and more attacks over time. For now, we must assume that adversaries will try to learn from what happened on this attack and try again with fresh ideas and tools.

Because of this, our analysts also focus on learning from each incident to improve their skills, processes, and tooling. This continuous improvement occurs through many informal and formal processes ranging from formal case reviews to casual conversations where they tell the stories of incidents and interesting observations.

As caseload allows, the investigation team also hunts proactively for adversaries when they are not on shift, which helps them stay sharp and grow their skills.

This closes our virtual shift visit for the investigation team. Join us next time as we shift to our Threat hunting team (a.k.a. Tier 3) and get some hard won advice and lessons learned.

…until then, share and enjoy!

P.S. If you are looking for more information on the SOC and other cybersecurity topics, check out previous entries in the series (Part 1 | Part 2a | Part 2b | Part 3a | Part 3b), Mark’s List (https://aka.ms/markslist), and our new security documentation site—https://aka.ms/securtydocs. Be sure to bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to Mark on LinkedIn or Twitter.

The post Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2 appeared first on Microsoft Security.

Mitigating vulnerabilities in endpoint network stacks

May 4th, 2020 No comments

The skyrocketing demand for tools that enable real-time collaboration, remote desktops for accessing company information, and other services that enable remote work underlines the tremendous importance of building and shipping secure products and services. While this is magnified as organizations are forced to adapt to the new environment created by the global crisis, it’s not a new imperative. Microsoft has been investing heavily in security, and over the years our commitment to building proactive security into products and services has only intensified.

To help deliver on this commitment, we continuously find ways to improve and secure Microsoft products. One aspect of our proactive security work is finding vulnerabilities and fixing them before they can be exploited. Our strategy is to take a holistic approach and drive security throughout the engineering lifecycle. We do this by:

  • Building security early into the design of features.
  • Developing tools and processes that proactively find vulnerabilities in code.
  • Introducing mitigations into Windows that make bugs significantly harder to exploit.
  • Having our world-class penetration testing team test the security boundaries of the product so we can fix issues before they can impact customers.

This proactive work ensures we are continuously making Windows safer and finding as many issues as possible before attackers can take advantage of them. In this blog post we will discuss a recent vulnerability that we proactively found and fixed and provide details on tools and techniques we used, including a new set of tools that we built internally at Microsoft. Our penetration testing team is constantly testing the security boundaries of the product to make it more secure, and we are always developing tools that help them scale and be more effective based on the evolving threat landscape. Our investment in fuzzing is the cornerstone of our work, and we are constantly innovating this tech to keep on breaking new ground.

Proactive security to prevent the next WannaCry

In the past few years, much of our team’s efforts have been focused on uncovering remote network vulnerabilities and preventing events like the WannaCry and NotPetya outbreaks. Some bugs we have recently found and fixed include critical vulnerabilities that could be leveraged to exploit common secure remote communication tools like RDP or create ransomware issues like WannaCry: CVE-2019-1181 and CVE-2019-1182 dubbed “DejaBlue“, CVE-2019-1226 (RCE in RDP Server), CVE-2020-0611 (RCE in RDP Client), and CVE-2019-0787 (RCE in RDP client), among others.

One of the biggest challenges we regularly face in these efforts is the sheer volume of code we analyze. Windows is enormous and continuously evolving: 5.7 million source code files, with more than 3,500 developers doing 1,100 pull requests per day in 440 official branches. This rapid cadence and evolution allows us to add new features as well as proactively drive security into Windows.

Like many security teams, we frequently turn to fuzzing to help us quickly explore and assess large codebases. Innovations we've made in our fuzzing technology have made it possible to get deeper coverage than ever before, resulting in the discovery of new bugs, faster. One such vulnerability is the remote code execution (RCE) vulnerability in Microsoft Server Message Block version 3 (SMBv3), tracked as CVE-2020-0796 and fixed on March 12, 2020.

In the following sections, we will share the tools and techniques we used to fuzz SMB, the root cause of the RCE vulnerability, and relevant mitigations to exploitation.

Fully deterministic person-in-the-middle fuzzing

We use a custom deterministic full system emulator tool we call "TKO" to fuzz and introspect Windows components. TKO provides the capability to perform full system emulation and memory snapshotting, as well as other innovations. As a result of its unique design, TKO provides several unique benefits for SMB network fuzzing:

  • The ability to snapshot and fuzz forward from any program state.
  • Efficiently restoring to the initial state for fast iteration.
  • Collecting complete code coverage across all processes.
  • Leveraging greater introspection into the system without too much perturbation.

While all of these actions are possible using other tools, our ability to seamlessly leverage them across both user and kernel mode drastically reduces the spin-up time for targets. To learn more, check out David Weston’s recent BlueHat IL presentation “Keeping Windows secure”, which touches on fuzzing, as well as the TKO tool and infrastructure.

Fuzzing SMB

Given the ubiquity of SMB and the impact demonstrated by SMB bugs in the past, assessing this network transfer protocol has been a priority for our team. While there have been past audits and fuzzers thrown against the SMB codebase, some of which postdate the current SMB version, TKO’s new capabilities and functionalities made it worthwhile to revisit the codebase. Additionally, even though the SMB version number has remained static, the code has not! These factors played into our decision to assess the SMB client/server stack.

After performing an initial audit pass of the code to understand its structure and dataflow, as well as to get a grasp of the size of the protocol’s state space, we had the information we needed to start fuzzing.

We used TKO to set up a fully deterministic feedback-based fuzzer with a combination of generated and mutated SMB protocol traffic. Our goal for generating or mutating across multiple packets was to dig deeper into the protocol’s state machine. Normally this would introduce difficulties in reproducing any issues found; however, our use of emulators made this a non-issue. New generated or mutated inputs that triggered new coverage were saved to the input corpus. Our team had a number of basic mutator libraries for different scenarios, but we needed to implement a generator. Additionally, we enabled some of the traditional Windows heap instrumentation using verifier, turning on page heap for SMB-related drivers.

We began work on the SMBv2 protocol generator and took a network capture of an SMB negotiation with the aim of replaying these packets with mutations against a Windows 10, version 1903 client. We added a mutator with basic mutations (e.g., bit flips, insertions, deletions, etc.) to our fuzzer and kicked off an initial run while we continued to improve and develop further.

Figure 1. TKO fuzzing workflow

A short time later, we came back to some compelling results. Replaying the first crashing input with TKO’s kdnet plugin revealed the following stack trace:

> tkofuzz.exe repro inputs\crash_6a492.txt -- kdnet:conn 127.0.0.1:50002

Figure 2. Windbg stack trace of crash

We found an access violation in srv2!Smb2CompressionDecompress.

Finding the root cause of the crash

While the stack trace suggested that a vulnerability exists in the decompression routine, it’s the parsing of length counters and offsets from the network that causes the crash. The last packet in the transaction needed to trigger the crash has ‘\xfcSMB’ set as the first bytes in its header, making it a COMPRESSION_TRANSFORM packet.

Figure 3. COMPRESSION_TRANSFORM packet details

The SMBv2 COMPRESSION_TRANSFORM packet starts with a COMPRESSION_TRANSFORM_HEADER, which defines where in the packet the compressed bytes begin and the length of the compressed buffer.

typedef struct _COMPRESSION_TRANSFORM_HEADER
{
    UCHAR  Protocol[4];   // Contains 0xFC, 'S', 'M', 'B'
    ULONG  OriginalMessageSize;
    USHORT AlgorithmId;
    USHORT Flags;
    ULONG  Length;
}

In srv2!Srv2DecompressData, shown in the graph below, we can find this COMPRESSION_TRANSFORM_HEADER struct being parsed out of the network packet and used to determine pointers being passed to srv2!SMBCompressionDecompress.

Figure 4. Srv2DecompressData graph

We can see that at 0x7e94, rax points to our network buffer, and the buffer is copied to the stack before the OriginalCompressedSegmentSize and Length are parsed out and added together at 0x7ED7 to determine the size of the resulting decompressed bytes buffer. Overflowing this value causes the decompression to write its results out of the bounds of the destination SrvNet buffer, resulting in an out-of-bounds write (OOBW).

Figure 5. Overflow condition

Looking further, we can see that the Length field is parsed into esi at 0x7F04, added to the network buffer pointer, and passed to CompressionDecompress as the source pointer. As Length is never checked against the actual number of received bytes, it can cause decompression to read off the end of the received network buffer. Setting this Length to be greater than the packet length also causes the computed source buffer length passed to SmbCompressionDecompress to underflow at 0x7F18, creating an out-of-bounds read (OOBR) vulnerability. Combining this OOBR vulnerability with the previous OOBW vulnerability creates the necessary conditions to leak addresses and create a complete remote code execution exploit.

Figure 6. Underflow condition
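The arithmetic behind both conditions is easy to reproduce. The Python sketch below mimics the 32-bit math with attacker-controlled header values; it illustrates the bug class and is not the actual srv2.sys code.

MASK32 = 0xFFFFFFFF  # the header fields are 32-bit values

# Overflow: the 32-bit sum of the two size fields wraps around, so a tiny
# destination buffer is allocated while decompression writes far more bytes.
original_size = 0xFFFFFFFF          # OriginalCompressedSegmentSize
length        = 0x41                # Length
alloc = (original_size + length) & MASK32
print(hex(alloc))                   # 0x40 instead of 0x100000040

# Underflow: a Length larger than the bytes actually received makes the
# computed source size wrap to a huge value, driving the out-of-bounds read.
received = 0x100
src_len = (received - 0x200) & MASK32
print(hex(src_len))                 # 0xffffff00

# A safe implementation validates both fields against the real packet size
# and rejects any sum or difference that leaves the 32-bit range.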

Windows 10 mitigations against remote network vulnerabilities

Our discovery of the SMBv3 vulnerability highlights the importance of revisiting protocol stacks regularly as our tools and techniques continue to improve over time. In addition to the proactive hunting for these types of issues, the investments we made in the last several years to harden Windows 10 through mitigations like address space layout randomization (ASLR), Control Flow Guard (CFG), InitAll, and hypervisor-enforced code integrity (HVCI) hinder trivial exploitation and buy defenders time to patch and protect their networks.

For example, turning vulnerabilities like the ones discovered in SMBv3 into working exploits requires finding writeable kernel pages at reliable addresses, a task that requires heap grooming and corruption, or a separate vulnerability in Windows kernel ASLR. Typical heap-based exploits taking advantage of a vulnerability like the one described here would also need to make use of other allocations, but Windows 10 pool hardening helps mitigate this technique. These mitigations work together and have a cumulative effect when combined, increasing the development time and cost of reliable exploitation.

Assuming attackers gain knowledge of our address space, indirect jumps are mitigated by kernel-mode CFG. This forces attackers to either use data-only corruption or bypass Control Flow Guard via stack corruption or yet another bug. If virtualization-based security (VBS) and HVCI are enabled, attackers are further constrained in their ability to map and modify memory permissions.

On Secured-core PCs, these mitigations are enabled by default. Secured-core PCs combine virtualization, operating system, and hardware and firmware protection. Along with Microsoft Defender Advanced Threat Protection, Secured-core PCs provide end-to-end protection against advanced threats.

While these mitigations collectively lower the chances of successful exploitation, we continue to deepen our investment in identifying and fixing vulnerabilities before they can get into the hands of adversaries.

 

The post Mitigating vulnerabilities in endpoint network stacks appeared first on Microsoft Security.

Microsoft Threat Protection leads in real-world detection in MITRE ATT&CK evaluation

May 1st, 2020 No comments

The latest round of MITRE ATT&CK evaluations proved yet again that Microsoft customers can trust they are fully protected even in the face of such an advanced attack as APT29. When looking at protection results out of the box, without configuration changes, Microsoft Threat Protection (MTP):

  • Provided nearly 100 percent coverage across the attack chain stages.
  • Delivered leading out-of-box visibility into attacker activities, dramatically reducing manual work for SOCs vs. vendor solutions that relied on specific configuration changes.
  • Had the fewest gaps in visibility, diminishing attacker ability to operate undetected.

Beyond just detection and visibility, automation, prioritization, and prevention are key to stopping this level of advanced attack. During testing, Microsoft:

  • Delivered automated real-time alerts without the need for configuration changes or custom detections; Microsoft is one of only three vendors who did not make configuration changes or rely on delayed detections.
  • Flagged more than 80 distinct alerts, and used built-in automation to correlate these alerts into only two incidents that mirrored the two MITRE ATT&CK simulations, improving SOC analyst efficiency and reducing attacker dwell time and ability to persist.
  • Identified seven distinct steps during the attack in which our protection features, which were disabled during testing, would have automatically intervened to stop the attack.

Microsoft Threat Experts provided further in-depth context and recommendations for further investigation through our comprehensive in-portal forensics. The evaluation also proved how Microsoft Threat Protection goes beyond just simple visibility into attacks, but also records all stages of the attack in which MTP would have stepped in to block the attack and automatically remediate any affected assets.

While the test focused on endpoint detection and response, MITRE’s simulated APT29 attack spans multiple attack domains, creating opportunities to empower defenders beyond just endpoint protection. Microsoft expanded defenders’ visibility beyond the endpoint with Microsoft Threat Protection (MTP). MTP has been recognized by both Gartner and Forrester as having extended detection and response capabilities. MTP takes protection to the next level by combining endpoint protection from Microsoft Defender ATP (EDR) with protection for email and productivity tools (Office 365 ATP), identity (Azure ATP), and cloud applications (Microsoft Cloud App Security [MCAS]). Below, we will share a deep-dive analysis and explanation of how MTP successfully demonstrated novel optic and detection advantages throughout the MITRE evaluation that only our solution can provide.

Incident-based approach enables real-time threat prioritization and remediation

Analyzing the MITRE evaluation results from the lens of breadth and coverage, as the diagrams below show, MTP provided exceptional coverage for all but one of the 19 tested attack stages. This means that in real life, the SOC would have received alerts and had full visibility into each stage of the two simulated attack scenarios across initial access, deployment of tools, discovery, persistence, credential access, lateral movement, and exfiltration. In Microsoft Threat Protection, alerts carry rich context—including a detailed process tree showing the recorded activities (telemetry) that led to the detection, the assets involved, all supporting evidence, and a description of what the alert means and recommendations for SOC action. Note that true alerts are attributed in the MITRE evaluation with the "Alert" modifier, and not all items marked as "Tactic" or "Technique" are actual alerts.

Figure 1: MTP detection coverage across the attack kill-chain stages, with block opportunities.

Note: Step 10, persistence execution, is registered as a miss due to a software bug discovered during the test that restricted visibility into that step. These evaluations are a valuable opportunity to continually improve our product, and this bug was fixed shortly after testing completed.

The MITRE APT29 evaluation focused solely on detection of an advanced attack; it did not measure whether participants were able to prevent an attack. However, we believe that real-world protection is more than just knowing that an attack occurred—prevention of the attack is a critical element. While protections were intentionally turned off to allow the complete simulation to run, using the audit-only prevention configuration, MTP also captured and documented where the attack would have been completely prevented—including, as shown in the diagram above, the very start of the breach—if protections had been left on.

Microsoft Threat Protection also demonstrated how it promotes SOC efficiency and reduces attacker dwell time and sprawl. SOC alert fatigue is a serious problem; raising a large volume of alerts to investigate does not help SOC analysts understand where to devote their limited time and resources. Detection and response products must prioritize the most important attacker actions with the right context in near real time.

In contrast to alert-only approaches, MTP’s incident-based approach automatically identifies complex links between attacker activities in different domains including endpoint, identity, and cloud applications at an altitude that only Microsoft can provide because we have optics into each of these areas. In this scenario, MTP connected seemingly unrelated alerts using supporting telemetry across domains into just two end-to-end incidents, dramatically simplifying prioritization, triage, and investigation. In real life, this also simplifies automated response and enables SOC teams to scale capacity and capabilities. MITRE addresses a similar problem with the “correlated” modifier on telemetry and alerts but does not reference incidents (just yet).

Figure 2: MTP portal showing 2nd day attack incident including correlated alerts and affected assets.

Figure 3: 2nd day incident with all correlated alerts for SOC efficiency, and the attack incident graph.

Microsoft is the leader in out-of-the-box performance

Simply looking at the number of simulation steps covered—or, alternatively, at the number of steps with no coverage, where less is more—the MITRE evaluation showed MTP provided the best protection with zero delays or configuration changes.

Microsoft believes protection must be durable without requiring a lot of SOC configuration changes (especially during an ongoing attack), and it should not create friction by delivering false positives.

The chart below shows Microsoft as the vendor with the fewest steps categorized as "None" (also referred to as "misses") out of the box. The chart also shows the number of detections marked with the "Configuration Change" modifier, which some vendors relied on quite considerably, as well as delayed detections (the "Delayed" modifier), which indicate in-flight modifications and latency in detections.

Microsoft is one of only three vendors that made no modifications or had any delays during the test.

Similarly, when looking at visibility and coverage for the 57 MITRE ATT&CK techniques replicated during this APT29 simulation, Microsoft’s coverage shows top performance at 95 percent of the techniques covered, as shown in the chart below.

A product’s coverage of techniques is an important consideration for customers when evaluating security solutions, often with specific attacker(s) in mind, which in turn determines the attacker techniques they are most concerned with and, consequently, the coverage they most care about.
Figure 5: Coverage across all attack techniques in the evaluation.

MTP provided unique detection and visibility across identity, cloud, and endpoints

The powerful capabilities of Microsoft Threat Protection originate from unique signals not just from endpoints but also from identity and cloud apps. This combination of capabilities provides coverage where other solutions may lack visibility. Below are three examples of sophisticated attacks simulated during the evaluation that span across domains (i.e., identity, cloud, endpoint) and showcase the unique visibility and unmatched detections provided by MTP:

  • Detecting the most dangerous lateral movement attack: Golden Ticket—Unlike other vendors, MTP’s unique approach for detecting Golden Ticket attacks does not solely rely on endpoint-based command-line sequences, PowerShell strings like “Invoke-Mimikatz”, or DLL-loading heuristics that can all be evaded by advanced attackers. MTP leverages direct optics into the Domain Controller via Azure ATP, the identity component of MTP. Azure ATP detects Golden Ticket attacks using a combination of machine learning and protocol heuristics by looking at anomalies such as encryption downgrade, forged authorization data, nonexistent account, ticket anomaly, and time anomaly. MTP is the only product that provided the SOC context of the encryption downgrade, together with the source and target machines, resources accessed, and the identities involved.
  • Exfiltration over alternative protocol: Catching and stopping attackers as they move from endpoint to cloud—MTP leverages exclusive signal from Microsoft Cloud App Security (MCAS), the cloud access security broker (CASB) component of MTP, which provides visibility and alerts for a large variety of cloud services, including OneDrive. Using the MCAS Conditional Access App Control mechanism, MTP was able to monitor cloud traffic for data exfiltration and raise an automatic alert when a ZIP archive with stolen files was exfiltrated to a remote OneDrive account controlled by the attacker. It is important to note that the OneDrive account used by the MITRE red team was unknown to the Microsoft team before it was automatically detected during the evaluation.
  • Uncovering Remote System Discovery attacks that abuse LDAP—Preceding lateral movement, attackers commonly abuse the Lightweight Directory Access Protocol (LDAP) to query user groups and user information. Microsoft introduced a powerful new sensor for unique visibility into LDAP queries, aiding security analyst investigation and allowing detection of suspicious patterns of LDAP activity. Through this sensor, Microsoft Defender ATP, the endpoint component of MTP, avoids reliance on PowerShell strings and snippets. Rather, Microsoft Defender ATP uses the structure and fields of each LDAP query originating from the endpoint to the Domain Controller (DC) to spot broad requests or suspicious queries for accounts and groups. Where possible, MTP also combines and correlates LDAP attacks detected on the endpoint by Microsoft Defender ATP with LDAP events seen on the DC by Azure ATP. (A minimal illustration of this style of query heuristic follows this list.)
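
To illustrate the last point, here is a minimal sketch, in Python, of a structure-aware heuristic in the spirit described above: scoring an LDAP query by its parsed filter and requested attributes rather than by command-line strings. The patterns and weights are invented for this example and are not Microsoft Defender ATP's actual detection logic.

```python
import re

# Hypothetical patterns for "broad" LDAP queries often seen before lateral
# movement (enumerating all users or privileged groups). Illustrative only.
SUSPICIOUS_FILTERS = [
    r"\(objectclass=user\)",      # enumerate user accounts
    r"admincount=1",              # accounts protected by AdminSDHolder
    r"samaccounttype=805306368",  # all user objects
    r"cn=domain admins",          # privileged group discovery
]

def score_ldap_query(ldap_filter, attributes):
    """Return a crude suspicion score for a parsed LDAP search request."""
    normalized = ldap_filter.lower()
    score = sum(1 for pattern in SUSPICIOUS_FILTERS
                if re.search(pattern, normalized))
    # Requesting service principal names (as in Kerberoasting) adds weight.
    if any(attr.lower() == "serviceprincipalname" for attr in attributes):
        score += 2
    return score

# A query enumerating user objects and requesting SPNs scores high.
print(score_ldap_query("(&(objectCategory=person)(objectClass=user))",
                       ["servicePrincipalName"]))  # -> 3
```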

Figure 6: Golden Ticket alert based on optics on Domain Controller activity.

Figure 7: Suspicious LDAP activity detected using deep native OS sensor.

Microsoft Threat Experts: Threat context and hunting skills when and where needed

In this edition of the MITRE ATT&CK evaluation, for the first time, Microsoft products were configured to take advantage of the managed threat hunting service Microsoft Threat Experts. Microsoft Threat Experts provides proactive hunting for the most important threats in the network, including human adversary intrusions, hands-on-keyboard attacks, and advanced attacks like cyberespionage. During the evaluation, the service operated with the same strategy normally used in real customer incidents: the goal is to send targeted attack notifications that provide real value to analysts through contextual analysis of the activities. Microsoft Threat Experts enriches security signals and raises the risk level appropriately so that the SOC can focus on what’s important and breaches don’t go unnoticed.

Microsoft Threat Experts notifications stand out among participating vendors because these notifications are fully integrated into the experience, incorporated into relevant incidents, and connected to relevant events, alerts, and other evidence. Microsoft Threat Experts enables SOC teams to seamlessly receive and merge additional data and recommendations in the context of the incident investigation.

Figure 8: Microsoft Threat Experts alert integrates into the portal and provides hyperlinked rich context.

Transparency in testing is key to threat detection and prevention

Microsoft Threat Protection delivers real-world detection, response, and, ultimately, protection from advanced attacks, as demonstrated in the latest MITRE evaluation. Core to MITRE’s testing approach is emulating real-world attacks to understand whether solutions can adequately detect and respond to them. We saw that Microsoft Threat Protection provided clear detection across all categories and delivered additional context that shows the full scope of impact across an entire environment. MTP empowers customers not only to detect attacks but also to easily return to a secured state with automated remediation. And, as is true in the real world, our human Threat Experts were available on demand to provide additional context and help with the investigation.

We thank MITRE for the opportunity to contribute to the test with unique threat intelligence, which only three participants stepped forward to share. Our unique intelligence and our breadth of signal and visibility across the entire environment are what enable us to continuously score top marks. We look forward to participating in the next evaluation, and we welcome your feedback and partnership throughout our journey.

Thanks,

Moti and the entire Microsoft Threat Protection team

The post Microsoft Threat Protection leads in real-world detection in MITRE ATT&CK evaluation appeared first on Microsoft Security.

Zero Trust Deployment Guide for Microsoft Azure Active Directory

April 30th, 2020 No comments

Microsoft is providing a series of deployment guides for customers who have engaged in a Zero Trust security strategy. In this guide, we cover how to deploy and configure Azure Active Directory (Azure AD) capabilities to support your Zero Trust security strategy.

For simplicity, this document will focus on ideal deployments and configuration. We will call out the integrations that need Microsoft products other than Azure AD and we will note the licensing needed within Azure AD (Premium P1 vs P2), but we will not describe multiple solutions (one with a lower license and one with a higher license).

Azure AD at the heart of your Zero Trust strategy

Azure AD provides critical functionality for your Zero Trust strategy. It delivers strong authentication, a point of integration for device security, and the core of your user-centric policies to guarantee least-privileged access. Azure AD’s Conditional Access capabilities are the policy decision point for access to resources based on user identity, environment, device health, and risk—verified explicitly at the point of access. In the following sections, we will showcase how you can implement your Zero Trust strategy with Azure AD.

Establish your identity foundation with Azure AD

A Zero Trust strategy requires that we verify explicitly, use least privileged access principles, and assume breach. Azure Active Directory can act as the policy decision point to enforce your access policies based on insights on the user, device, target resource, and environment. To do this, we need to put Azure Active Directory in the path of every access request—connecting every user and every app or resource through this identity control plane. In addition to productivity gains and improved user experiences from single sign-on (SSO) and consistent policy guardrails, connecting all users and apps provides Azure AD with the signal to make the best possible decisions about the authentication/authorization risk.

  • Connect your users, groups, and devices:
    Maintaining a healthy pipeline of your employees’ identities, as well as the necessary security artifacts (groups for authorization and devices for extra access policy controls), puts you in the best position to give your users consistent identities and controls on-premises and in the cloud:

    1. Start by choosing the right authentication option for your organization. While we strongly prefer to use an authentication method that primarily uses Azure AD (to provide you the best brute force, DDoS, and password spray protection), follow our guidance on making the decision that’s right for your organization and your compliance needs.
    2. Only bring the identities you absolutely need. For example, use going to the cloud as an opportunity to leave behind service accounts that only make sense on-premises; leave on-premises privileged roles behind (more on that under privileged access), etc.
    3. If your enterprise has more than 100,000 users, groups, and devices combined, we recommend you follow our guidance on building a high performance sync box that will keep your identity life cycle up to date.
  • Integrate all your applications with Azure AD:
    As mentioned earlier, SSO is not only a convenient feature for your users; it also improves your security posture, as it prevents users from leaving copies of their credentials in various apps and helps keep them from getting used to surrendering their credentials due to excessive prompting. Make sure you do not have multiple IAM engines in your environment. Multiple engines not only diminish the amount of signal that Azure AD sees and allow bad actors to live in the seams between them, they can also lead to poor user experience and to your business partners becoming the first doubters of your Zero Trust strategy. Azure AD supports a variety of ways you can bring apps to authenticate with it:

    1. Integrate modern enterprise applications that speak OAuth2.0 or SAML.
    2. For Kerberos and Form-based auth applications, you can integrate them using the Azure AD Application Proxy.
    3. If you publish your legacy applications using application delivery networks/controllers, Azure AD is able to integrate with most of the major ones (such as Citrix, Akamai, F5, etc.).
    4. To help migrate your apps off of existing/older IAM engines, we provide a number of resources—including tools to help you discover and migrate apps off of ADFS.
  • Automate provisioning to applications:
    Once you have your users’ identities in Azure AD, you can use Azure AD to push those user identities into your various cloud applications. This gives you a tighter identity lifecycle integration within those apps. Use this detailed guide to deploy provisioning into your SaaS applications.
  • Get your logging and reporting in order:
    As you build your estate in Azure AD with authentication, authorization, and provisioning, it’s important to have strong operational insights into what is happening in the directory. Follow this guide to learn how to persist and analyze the logs from Azure AD, either in Azure or in a SIEM system of your choice. (A minimal Microsoft Graph sketch for pulling sign-in logs follows this list.)
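
As a starting point for that operational insight, the sketch below pulls Azure AD sign-in events from Microsoft Graph in Python. It assumes you have registered an application with the AuditLog.Read.All permission and obtained a bearer token separately (the token below is a placeholder); in practice you would acquire the token with a library such as MSAL.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-from-msal>"  # placeholder, acquire via MSAL in practice

def fetch_signins(filter_expr):
    """Yield Azure AD sign-in events page by page, following @odata.nextLink."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    url = f"{GRAPH}/auditLogs/signIns"
    params = {"$filter": filter_expr, "$top": 100}
    while url:
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # absent on the last page
        params = None  # nextLink already carries the query parameters

# Example: surface risky sign-ins since April 2020 for review or SIEM forwarding.
for event in fetch_signins("createdDateTime ge 2020-04-01T00:00:00Z"):
    if event.get("riskLevelDuringSignIn") not in (None, "none"):
        print(event["userPrincipalName"], event.get("clientAppUsed"))
```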

Enacting the 1st principle: least privilege

Giving the right access at the right time to only those who need it is at the heart of a Zero Trust philosophy:

  • Plan your Conditional Access deployment:
    Planning your Conditional Access policies in advance, and having a set of active and fallback policies, is a foundational pillar of your Access Policy enforcement in a Zero Trust deployment. Take the time to configure your trusted IP locations in your environment. Even if you do not use them in a Conditional Access policy, configuring these IPs informs the risk assessment of Identity Protection (discussed under the third principle below). Check out our deployment guidance and best practices for resilient Conditional Access policies. (A sketch of configuring trusted IPs via Microsoft Graph follows this list.)
  • Secure privileged access with privileged identity management:
    With privileged access, you generally take a different track than meeting end users where they are. You typically want to control the devices, conditions, and credentials that users use to access privileged operations/roles. Check out our detailed guidance on how to take control of your privileged identities and secure them. Keep in mind that in a digitally transformed organization, privileged access is not only administrative access, but also application owner or developer access that can change the way your mission-critical apps run and handle data. Check out our detailed guide on how to use Privileged Identity Management (P2) to secure privileged identities.
  • Restrict user consent to applications:
    User consent to applications is a very common way for modern applications to get access to organizational resources. However, we recommend you restrict user consent and manage consent requests to ensure that no unnecessary exposure of your organization’s data to apps occurs. This also means that you need to review prior/existing consent in your organization for any excessive or malicious consent.
  • Manage entitlements (Azure AD Premium P2):
    With applications centrally authenticating and driven from Azure AD, you should streamline your access request, approval, and recertification process to make sure that the right people have the right access and that you have a trail of why users in your organization have the access they have. Using entitlement management, you can create access packages that users can request as they join different teams/projects and that assign them access to the associated resources (applications, SharePoint sites, group memberships). Check out how you can get started with access packages. If deploying entitlement management is not possible for your organization at this time, we recommend you at least enable self-service paradigms in your organization by deploying self-service group management and self-service application access.
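
As promised above, here is a minimal sketch of configuring a trusted IP named location through the Microsoft Graph conditional access API. It assumes an app registration with the Policy.ReadWrite.ConditionalAccess permission and an already-acquired bearer token (a placeholder below); the display name and CIDR range are examples only.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-from-msal>"  # placeholder

# Mark your corporate egress ranges as a trusted named location so that
# Conditional Access and Identity Protection can factor location into risk.
trusted_location = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate egress ranges",  # example name
    "isTrusted": True,
    "ipRanges": [
        {"@odata.type": "#microsoft.graph.iPv4CidrRange",
         "cidrAddress": "203.0.113.0/24"},  # example (documentation) range
    ],
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=trusted_location,
    timeout=30,
)
resp.raise_for_status()
print("Created named location:", resp.json()["id"])
```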

Enacting the 2nd principle: verify explicitly

Provide Azure AD with a rich set of credentials and controls that it can use to verify the user at all times.

  • Roll out Azure multi-factor authentication (MFA) (P1):
    This is a foundational piece of reducing user session risk. As users appear on new devices and from new locations, being able to respond to an MFA challenge is one of the most direct ways your users can signal to Azure AD that these are familiar devices/locations as they move around the world (without having administrators parse individual signals). Check out this deployment guide. (A report-only policy sketch follows this list.)
  • Enable Azure AD Hybrid Join or Azure AD Join:
    If you are managing the user’s laptop/computer, bring that information into Azure AD and use it to help make better decisions. For example, you may choose to allow rich client access to data (clients that have offline copies on the computer) if you know the user is coming from a machine that your organization controls and manages. If you do not bring this in, you will likely choose to block access from rich clients, which may result in your users working around your security or using Shadow IT. Check out our resources for Azure AD Hybrid Join or Azure AD Join.
  • Enable Microsoft Intune for managing your users’ mobile devices (EMS):
    What is true for laptops applies to users’ mobile devices as well. The more you know about them (patch level, jailbroken, rooted, etc.), the more you can trust or mistrust them and provide a rationale for why you block/allow access. Check out our Intune device enrollment guide to get started.
  • Start rolling out passwordless credentials:
    With Azure AD now supporting FIDO 2.0 and passwordless phone sign-in, you can move the needle on the credentials that your users (especially sensitive/privileged users) are using on a day-to-day basis. These credentials are strong authentication factors that can mitigate risk as well. Our passwordless authentication deployment guide walks you through how to roll out passwordless credentials in your organization.
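
Referring back to the MFA rollout step above, here is a minimal sketch of creating that policy in report-only mode through the Microsoft Graph conditional access API, so you can observe its impact before enforcing it. The same assumptions as the earlier Graph sketches apply: an app registration with the Policy.ReadWrite.ConditionalAccess permission and a placeholder bearer token.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-from-msal>"  # placeholder

# Require MFA for all users and apps, starting in report-only mode so the
# policy's impact can be measured before it is enforced.
mfa_policy = {
    "displayName": "Require MFA for all users (report-only)",
    "state": "enabledForReportingButNotEnforced",  # flip to "enabled" later
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=mfa_policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode is a deliberate design choice: it lets you review the sign-in impact (see the earlier logging sketch) before any user is blocked.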

Enacting the 3rd principle: assume breach

Provide Azure AD with the signal it needs to detect compromised users and sessions, and the controls it needs to contain a breach when it happens.

  • Deploy Azure AD Password Protection:
    While enabling other methods to verify users explicitly, you should not forget about weak passwords, password spray, and breach replay attacks. Read this blog to find out why classic complex password policies do not tackle the most prevalent password attacks. Then follow this guidance to enable Azure AD Password Protection for your users, in the cloud first and then on-premises as well.
  • Block legacy authentication:
    One of the most common attack vectors for malicious actors is to use stolen/replayed credentials against legacy protocols, such as SMTP, that cannot perform modern security challenges. We recommend you block legacy authentication in your organization.
  • Enable identity protection (Azure AD Premium 2):
    Enabling identity protection for your users will provide you with more granular session/user risk signal. You’ll be able to investigate risk and confirm compromise or dismiss the signal, which will help the engine better understand what risk looks like in your environment. (A Graph API triage sketch follows this list.)
  • Enable restricted session to use in access decisions:
    To illustrate, let’s take a look at controls in Exchange Online and SharePoint Online (P1): When a user’s risk is low but they are signing in from an unknown device, you may want to allow them access to critical resources, but not allow them to do things that leave your organization in a non-compliant state. Now you can configure Exchange Online and SharePoint Online to offer the user a restricted session that allows them to read emails or view files, but not download them and save them on an untrusted device. Check out our guides for enabling limited access with SharePoint Online and Exchange Online.
  • Enable Conditional Access integration with Microsoft Cloud App Security (MCAS) (E5):
    Using signals emitted after authentication, and with MCAS proxying requests to applications, you will be able to monitor sessions going to SaaS applications and enforce restrictions. Check out our MCAS and Conditional Access integration guidance and see how this can even be extended to on-premises apps.
  • Enable Microsoft Cloud App Security (MCAS) integration with identity protection (E5):
    Microsoft Cloud App Security is a UEBA product that monitors user behavior inside SaaS and modern applications. This gives Azure AD signal and awareness about what happened to the user after they authenticated and received a token. If the user pattern starts to look suspicious (for example, the user starts to download gigabytes of data from OneDrive or starts to send spam emails in Exchange Online), then a signal can be fed to Azure AD notifying it that the user seems to be compromised or high risk, and on the next access request from this user, Azure AD can take the correct action to verify the user or block them. Just enabling MCAS monitoring will enrich the identity protection signal. Check out our integration guidance to get started.
  • Integrate Azure Advanced Threat Protection (ATP) with Microsoft Cloud App Security:
    Once you’ve successfully deployed and configured Azure ATP, enable the integration with Microsoft Cloud App Security to bring on-premises signal into the risk signal we know about the user. This enables Azure AD to know that a user is engaging in risky behavior while accessing on-premises, non-modern resources (like file shares), which can then be factored into overall user risk to block further access in the cloud. You will be able to see a combined Priority Score for each user at risk to give a holistic view of which ones your SOC should focus on.
  • Enable Microsoft Defender ATP (E5):
    Microsoft Defender ATP allows you to attest to the health of Windows machines and whether they are undergoing a compromise, and to feed that into mitigating risk at runtime. Whereas domain join gives you a sense of control, Defender ATP allows you to react to a malware attack in near real time by detecting patterns where multiple user devices are hitting untrustworthy sites and raising their device/user risk at runtime. See our guidance on configuring Conditional Access in Defender ATP.
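
As referenced in the identity protection step above, here is a hedged sketch of triaging user risk through the Microsoft Graph riskyUsers API. It assumes an app with the IdentityRiskyUser.ReadWrite.All permission and, as before, a placeholder bearer token; in a real SOC you would confirm compromise only for users you have actually investigated.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token-from-msal>"}  # placeholder

# 1. Pull users currently flagged as high risk by identity protection.
resp = requests.get(
    f"{GRAPH}/identityProtection/riskyUsers",
    headers=HEADERS,
    params={"$filter": "riskLevel eq 'high'"},
    timeout=30,
)
resp.raise_for_status()
high_risk = resp.json().get("value", [])
for user in high_risk:
    print(user["userPrincipalName"], user["riskLevel"], user["riskState"])

# 2. After investigating, confirm compromise so the engine learns what risk
#    looks like in your environment and your user risk policy can respond
#    (for example, by forcing a secure password reset).
confirmed = [user["id"] for user in high_risk]  # in practice, verified users only
if confirmed:
    requests.post(
        f"{GRAPH}/identityProtection/riskyUsers/confirmCompromised",
        headers=HEADERS,
        json={"userIds": confirmed},
        timeout=30,
    ).raise_for_status()
```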

Conclusion

We hope the above guides help you deploy the identity pieces central to a successful Zero Trust strategy. Make sure to check out the other deployment guides in the series by following the Microsoft Security blog.

The post Zero Trust Deployment Guide for Microsoft Azure Active Directory appeared first on Microsoft Security.

Data governance matters now more than ever

April 30th, 2020 No comments

Knowing, protecting, and governing your organizational data is critical to adhere to regulations and meet security and privacy needs. Arguably, that’s never been truer than it is today as we face these unprecedented health and economic circumstances. To help organizations to navigate privacy during this challenging time, Microsoft Chief Privacy Officer Julie Brill shared seven privacy principles to consider as we all collectively move forward in addressing the pandemic.

Organizations are also evaluating security and data governance more than ever before as they try to maintain business continuity amid the crisis. According to a new Harvard Business Review (HBR) research report released today commissioned by Microsoft, 61 percent of organizations struggle to effectively develop strong data security, privacy, and risk capabilities. Together with HBR, we surveyed close to 500 global business leaders across industries, including financial services, tech, healthcare, and manufacturing. The study found that 77 percent of organizations say an effective security, risk, and compliance strategy is essential for business success. However, 82 percent say that securing and governing data is becoming more difficult because of new risks and data management complexities brought on by digital transformation.

In a world in which remote work is the new normal, securing and governing your company’s most critical data becomes more important than ever before. The increased volume of information and multiple collaboration systems create complexity for managing business records, with serious cost and risk implications. As organizations across a variety of industries face ever-increasing regulations, many companies move data to different systems of record to manage it and comply with regulations. However, moving content to a different system, instead of managing it in place, can increase the risk of missing records or not declaring them properly.

General availability of Microsoft 365 Records Management

Today, we are excited to announce the general availability of Microsoft 365 Records Management to provide you with significantly greater depth in protecting and governing critical data. With Records Management, you can:

  • Classify, retain, review, dispose, and manage content without compromising productivity or data security.
  • Leverage machine learning capabilities to identify and classify regulatory, legal, and business critical records at scale.
  • Help demonstrate compliance with regulations through defensible audit trails and proof of destruction.

You can now access Records Management in the compliance center in Microsoft 365.

Striking the right balance between data governance and productivity: Records Management is built into the Microsoft 365 productivity stack and existing customer workflows, easing the friction that often occurs between enforcing governance controls and user productivity. For example, say your team is working on a contract. Thanks to built-in retention policies embedded in the tools people use every day, they can continue to be productive while collaborating on a contract that has been declared a record—such as sharing, coauthoring, and accessing the record through mobile devices. We have also integrated our disposition process natively into the tools you use every day, including SharePoint and Outlook. Records versioning also makes collaboration on record-declared documents better, so you can track when edits are made to the contract. It allows users to unlock a document with a record label to make edits to it with all records safely retained and audit trails maintained. With Records Management, you can balance rigorous enforcement of data controls with allowing your organization to be fully productive.

Building trust, transparency, and defensibility: Building trust and providing transparency is crucial to managing records. In addition to continuing to audit all events surrounding a record in our audit log, we’re excited to announce the ability to obtain proof of disposal and see all items automatically disposed of as part of a record label. Proof of disposal helps provide you with the defensibility you need, particularly to meet legal and regulatory requirements. Learn more on this Microsoft Docs page.

Leveraging machine learning for scale: Records Management leverages our broader investments in machine learning across information protection and governance, such as trainable classifiers. With trainable classifiers, you can train the classification engine to recognize data that is unique to your organization. Once you define a record or retention label, you can apply the label to all content that matches a previously defined trainable classifier. So, for example, any document that appears to be a contract or has contract-related content will be marked accordingly and automatically classified as a record. For more information on creating trainable classifiers, please see this documentation. Apart from using trainable classifiers, you can also choose to auto-apply retention labels by matching keywords in the content, its metadata, or the sensitive information it contains, or as the default for a particular location or folder. These different auto-classification methods provide the flexibility you need to manage the constantly increasing volume of data.
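
As a toy illustration of the keyword and location matching described above (not the Microsoft 365 classification engine itself, just the general shape of the logic), consider the sketch below; the label name, keywords, and location are all invented for the example.

```python
# Hypothetical auto-labeling rules: a label is applied when content matches a
# keyword or when the document lives in a location with a default label.
LABEL_RULES = {
    "Contract-Record": {
        "keywords": {"contract", "statement of work", "master agreement"},
        "default_locations": {"sites/legal/contracts"},
    },
}

def auto_label(doc):
    """Return the retention label to apply to a document, or None."""
    text = doc.get("content", "").lower()
    for label, rule in LABEL_RULES.items():
        if any(keyword in text for keyword in rule["keywords"]):
            return label  # matched on content keywords
        if doc.get("location") in rule["default_locations"]:
            return label  # applied as the default for this location
    return None

doc = {"content": "This Master Agreement is made between...",
       "location": "sites/legal/contracts"}
print(auto_label(doc))  # -> "Contract-Record"
```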

Please visit this portal to learn more about Records Management.

Importance of information protection and governance

There’s never been a more important time to ensure your data, especially your most critical data, is protected and governed efficiently and effectively. Records Management is generally available worldwide today, and you can learn even more in our post on Tech Community. Eligible Microsoft 365 E5 customers can start using Records Management in the Compliance Center or learn how to try or buy a Microsoft 365 subscription.

Lastly, as you navigate this challenging time, we have additional resources to help. For more information about securing your organization in this time of crisis, you can visit our Remote Work site. We’re here to help in any way we can.

The post Data governance matters now more than ever appeared first on Microsoft Security.