Managing cybersecurity like a business risk: Part 1—Modeling opportunities and threats

May 28th, 2020

In recent years, cybersecurity has been elevated to a C-suite and board-level concern. This is appropriate given the stakes: data breaches can have a significant impact on a company’s reputation and profits. But although businesses now consider cyberattacks a business risk, management of cyber risks is still siloed in technology and often not assessed in terms of other business drivers. To properly manage cybersecurity as a business risk, we need to rethink how we define and report on cyber risks.

The blog series, “Managing cybersecurity like a business risk,” will dig into how to update cybersecurity risk definition, reporting, and management to align with business drivers. In today’s post, I’ll talk about why we need to model both opportunities and threats when we evaluate cyber risks. In future posts, I’ll dig into some reporting tools that businesses can use to keep business leaders informed.

Digital transformation brings both opportunities and threats

Technology innovations such as artificial intelligence (AI), the cloud, and the internet of things (IoT) have disrupted many industries. Much of this disruption has been positive for businesses and consumers alike. Organizations can better tailor products and services to targeted segments of the population, and businesses have seized on these opportunities to create new business categories or reinvent old ones.

These same technologies have also introduced new threats. Legacy companies risk alienating loyal customers as they chase new markets. Digital transformation can result in financial loss if big bets don’t pay off. And of course, as those of us in cybersecurity know well, cybercriminals and other adversaries have exploited the expanded attack surface and the mountains of data we collect.

The threats and opportunities of technology decisions are intertwined, and increasingly they impact not just operations but the core business. Too often decisions about digital transformation are made without evaluating cyber risks. Security is brought in at the very end to protect assets that are exposed. Cyber risks are typically managed from a standpoint of loss aversion without accounting for the possible gains of new opportunities. This approach can result in companies being either too cautious or not cautious enough. To maximize digital transformation opportunities, companies need good information that helps them take calculated risks.

It starts with a SWOT analysis

Threats and opportunities are external forces that affect a company and all of its competitors. One way to determine how your company should respond is by also understanding your strengths and weaknesses, which are internal factors.

  • Strengths: Characteristics or aspects of the organization or product that give it a competitive edge.
  • Weaknesses: Characteristics or aspects of the organization or product that put it at a disadvantage compared to the competition.
  • Opportunities: Market conditions that could be exploited for benefit.
  • Threats: Market conditions that could cause damage or harm.

To crystallize these concepts, let’s consider a hypothetical brick-and-mortar retailer in the U.K. that sells stylish maternity clothes at an affordable price. In Europe, online retail is big business, and companies like ASOS and Zalando are disrupting traditional fashion. If we apply a SWOT analysis to our retailer, it might look something like this.

  • Strengths: Stylish maternity clothes sold at an affordable price; a loyal, referral-based clientele.
  • Weaknesses: Only available through brick-and-mortar stores; lacks the technology infrastructure to quickly go online; lacks security controls.
  • Opportunities: There is a market for these clothes beyond the U.K.
  • Threats: Retailers are a target for cyberattacks; customer trends indicate shoppers will visit brick-and-mortar stores less frequently in the future.

For this company, there isn’t an obvious choice. The retailer needs to figure out a way to maintain the loyalty of its current customers while preparing for a world where in-person shopping decreases. Ideally, the company can use its strengths to overcome its weaknesses and confront threats. For example, loyal clients who already refer a lot of business could be offered incentives to refer new customers through online channels. The company may also recognize that building security controls into an online business from the ground up is critical, and take advantage of its steady customer base to buy some time and do it right.

Threat modeling and opportunity modeling paired together can help better define the potential gains and losses of different approaches.

Opportunity and threat modeling

Many cybersecurity professionals are familiar with threat modeling, which essentially poses the following questions, as recommended by the Electronic Frontier Foundation:

  • What do you want to protect?
  • Who do you want to protect it from?
  • How likely is it that you will need to protect it?
  • How bad are the consequences if you fail?
  • How much trouble are you willing to go through in order to try to prevent those?

But once we’ve begun to consider not just the threats but also the opportunities in each business decision, it becomes clear that this approach misses half the equation. Missed opportunity is a risk that isn’t captured in threat modeling. This is where opportunity modeling becomes valuable. Some of my thinking around opportunity modeling was inspired by a talk by John Sherwood of SABSA, who suggested the following questions to effectively model opportunity:

  • What is the value of the asset you want to protect?
  • What is the potential gain of the opportunity?
  • How likely is it that the opportunity will be realized?
  • How likely is it that a strength can be exploited?

This gives us a framework to consider risk from both a threat and an opportunity standpoint. Our hypothetical retailer knows it wants to protect the revenue generated by current customers and the referral model, which answers the first question of each model. The other questions help quantify the potential loss if threats materialize and the potential gain if opportunities are realized. The company can use this information to better understand the ratio of risk to reward.
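
To make that ratio concrete, here is a minimal sketch in Python of how answers to these questions can be combined into a simple expected-value comparison. Every figure below is hypothetical, invented for the maternity retailer example; a real model would use your own estimates of likelihood and impact:

annual_referral_revenue = 2_000_000  # asset to protect: yearly referral revenue (hypothetical)
breach_probability = 0.15            # threat: estimated chance of a damaging attack this year
breach_impact = 0.40                 # fraction of the asset's value lost if the threat materializes

online_expansion_gain = 1_200_000    # opportunity: new revenue if the online launch succeeds
success_probability = 0.60           # estimated likelihood the opportunity is realized

expected_loss = annual_referral_revenue * breach_probability * breach_impact
expected_gain = online_expansion_gain * success_probability

print(f"Expected loss from the threat: {expected_loss:,.0f}")
print(f"Expected gain from the opportunity: {expected_gain:,.0f}")
print(f"Reward-to-risk ratio: {expected_gain / expected_loss:.1f}")

Numbers like these won’t make the decision for you, but they make the trade-off explicit and give business leaders something tangible to debate.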

It’s never easy to make big decisions in light of potential risks, but when decisions are informed by considering both the potential gains and potential losses, you can also better define a risk management strategy, including the types of controls you will need to mitigate your risk.

In my next post in the “Managing cybersecurity like a business risk” series, I’ll review some qualitative and quantitative tools you can use to manage risk.

Read more about risk management from SABSA. To learn more about Microsoft security solutions, visit our website. In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

4 identity partnerships to help drive better security

May 28th, 2020

At Microsoft, we are committed to driving innovation for our partnerships within the identity ecosystem. Together, we are enabling our customers, who live and work in a heterogeneous world, to get secure remote access to the apps and resources they need. In this blog, we’d like to highlight how partners can help enable secure remote access to any app, including on-premises and legacy apps, as well as how to enable seamless, secure access with passwordless authentication. We will also touch on how you can increase security visibility and insights by leveraging the Azure Active Directory (Azure AD) Identity Protection APIs.

Secure remote access to cloud apps

As organizations adopt remote work strategies in today’s environment, it’s important their workforce has access to all the applications they need. With the Azure AD app gallery, we work closely with independent software vendors (ISVs) to make it easy for organizations and their employees and customers to connect to and protect the applications they use. The Azure AD app gallery consists of thousands of applications that make it easy for admins to set up single sign-on (SSO) or user provisioning for their employees and customers. You can find popular collaboration applications for remote work, such as Cisco Webex, Zoom, and Workplace from Facebook, or security-focused applications such as Mimecast and Jamf. And if you don’t find the application your organization needs, you can always make a nomination here.

The Azure AD app gallery.

Secure hybrid access to your on-premises and legacy apps

As organizations enable their employees to work from home, maintaining remote access to all company apps, including those on-premises and legacy, from any location and any device is key to safeguarding the productivity of their workforce. Azure AD offers several integrations for securing on-premises applications like SAP NetWeaver, SAP Fiori, Oracle PeopleSoft and E-Business Suite, and Atlassian JIRA and Confluence through the Azure AD app gallery. For customers who are using Akamai Enterprise Application Access (EAA), Citrix Application Delivery Controller (ADC), F5 BIG-IP Access Policy Manager (APM), or Zscaler Private Access (ZPA), Microsoft has partnerships to provide secure remote access and help extend policies and controls that allow businesses to manage and govern on-premises legacy apps from Azure AD without having to change how the apps work.

Our integration with Zscaler allows a company’s business partners, such as suppliers and vendors, to securely access legacy, on-premises applications through the Zscaler B2B portal.

Go passwordless with FIDO2 security keys

Passwordless methods of authentication should be part of everyone’s future. Microsoft currently has over 100 million active passwordless end users across consumer and enterprise customers. These passwordless options include Windows Hello for Business, the Microsoft Authenticator app, and FIDO2 security keys. Why are passwords falling out of favor? To be effective, passwords must have several characteristics, including being unique to every site. Trying to remember them all can frustrate end users and lead to poor password hygiene.

Since Microsoft announced the public preview of Azure AD support for FIDO2 security keys in hybrid environments earlier this year, I’ve seen more organizations, especially those with regulatory requirements, start to adopt FIDO2 security keys. This is another important area where we’ve worked with many FIDO2 security key partners who are helping our customers go passwordless smoothly.

Increase security visibility and insights by leveraging Azure AD Identity Protection APIs

We know from our partners that they would like to leverage insights from Azure AD Identity Protection in their security tools, such as security information and event management (SIEM) or network security products. The end goal is to help them use all the security tools they have in an integrated way. Currently, the Azure AD Identity Protection API that our ISVs leverage is in preview. For example, RSA announced at its 2020 conference that it is now leveraging our signals to better defend its customers.
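
For ISVs that want to consume these signals directly, here is a minimal sketch of calling the API from Python. It assumes an Azure AD app registration with the IdentityRiskEvent.Read.All permission and an access token from the OAuth2 client credentials flow; because the API was in preview at the time of writing, the endpoint and property names are worth verifying against the current Microsoft Graph reference:

# Minimal sketch: pull recent high-risk detections from Azure AD Identity
# Protection via Microsoft Graph for correlation in your own SIEM.
# Paging and error handling are omitted for brevity.
import requests

ACCESS_TOKEN = "<token acquired via OAuth2 client credentials flow>"

response = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskDetections",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$filter": "riskLevel eq 'high'", "$top": "50"},
)
response.raise_for_status()

for detection in response.json().get("value", []):
    print(detection["detectedDateTime"], detection["riskEventType"],
          detection["userPrincipalName"])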

We’re looking forward to working with many partners to complete these integrations.

If you haven’t taken advantage of any of these types of solutions, I recommend you try them out today and let us know what you think. If you have product partnership ideas with Azure AD, feel free to connect with me via LinkedIn or Twitter.

Zero Trust Deployment Guide for devices

May 26th, 2020

The modern enterprise has an incredible diversity of endpoints accessing their data. This creates a massive attack surface, and as a result, endpoints can easily become the weakest link in your Zero Trust security strategy.

Whether a device is personally owned (BYOD) or corporate-owned and fully managed, we want to have visibility into the endpoints accessing our network and ensure we’re only allowing healthy and compliant devices to access corporate resources. Likewise, we are concerned about the health and trustworthiness of the mobile and desktop apps that run on those endpoints. We want to ensure those apps are also healthy and compliant, and that they prevent corporate data from leaking to consumer apps or services through malicious intent or accidental means.

Get visibility into device health and compliance

Gaining visibility into the endpoints accessing your corporate resources is the first step in your Zero Trust device strategy. Typically, companies are proactive in protecting PCs from vulnerabilities and attacks, while mobile devices often go unmonitored and without protections. To help limit risk exposure, we need to monitor every endpoint to ensure it has a trusted identity, has security policies applied, and that the risk level for things like malware or data exfiltration has been measured, remediated, or deemed acceptable. For example, if a personal device is jailbroken, we can block access to ensure that enterprise applications are not exposed to known vulnerabilities.

  1. To ensure you have a trusted identity for an endpoint, register your devices with Azure Active Directory (Azure AD). Devices registered in Azure AD can be managed using tools like Microsoft Endpoint Manager, Microsoft Intune, System Center Configuration Manager, Group Policy (hybrid Azure AD join), or other supported third-party tools (using the Intune Compliance API + Intune license). Once you’ve configured your policy, share the following guidance to help users get their devices registered—new Windows 10 devices, existing Windows 10 devices, and personal devices.
  2. Once we have identities for all the devices accessing corporate resources, we want to ensure that they meet the minimum security requirements set by your organization before access is granted. With Microsoft Intune, we can set compliance rules for devices before granting access to corporate resources. We also recommend setting remediation actions for noncompliant devices, such as blocking a noncompliant device or offering the user a grace period to get compliant.
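
As one illustration of the visibility this provides, here is a minimal sketch of listing noncompliant devices through the Microsoft Graph API. It assumes an app registration granted the DeviceManagementManagedDevices.Read.All application permission and a valid access token; paging and error handling are omitted for brevity:

# Minimal sketch: list Intune-managed devices that are currently noncompliant.
import requests

ACCESS_TOKEN = "<token acquired via OAuth2 client credentials flow>"

response = requests.get(
    "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$filter": "complianceState eq 'noncompliant'",
            "$select": "deviceName,operatingSystem,complianceState"},
)
response.raise_for_status()

for device in response.json().get("value", []):
    print(device["deviceName"], device["operatingSystem"], device["complianceState"])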

Restricting access from vulnerable and compromised devices

Once we know the health and compliance status of an endpoint through Intune enrollment, we can use Azure AD Conditional Access to enforce more granular, risk-based access policies. For example, we can ensure that no vulnerable devices (like devices with malware) are allowed access until remediated, or ensure logins from unmanaged devices only receive limited access to corporate resources, and so on.

  1. To get started, we recommend only allowing access to your cloud apps from Intune-managed, domain-joined, and/or compliant devices. These are baseline security requirements that every device will have to meet before access is granted.
  2. Next, we can configure device-based Conditional Access policies in Intune to enforce restrictions based on device health and compliance. This will allow us to enforce more granular access decisions and fine-tune the Conditional Access policies based on your organization’s risk appetite. For example, we might want to exclude certain device platforms from accessing specific apps. A sketch of creating such a policy programmatically follows this list.
  3. Finally, we want to ensure that your endpoints and apps are protected from malicious threats. This will help ensure your data is better protected and users are at less risk of being denied access due to device health and/or compliance issues. We can integrate data from Microsoft Defender Advanced Threat Protection (ATP), or other Mobile Threat Defense (MTD) vendors, as an information source for device compliance policies and device Conditional Access rules.
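
Here is a minimal sketch of the policy mentioned in step 2, created through the Microsoft Graph Conditional Access API. It assumes the Policy.ReadWrite.ConditionalAccess permission, and it creates the policy in report-only mode so you can observe its impact before enforcing it:

# Minimal sketch: require an Intune-compliant or hybrid Azure AD-joined device
# for all users and cloud apps, starting in report-only mode.
import requests

ACCESS_TOKEN = "<token acquired via OAuth2 client credentials flow>"

policy = {
    "displayName": "Require compliant or hybrid-joined device (sketch)",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
)
response.raise_for_status()
print("Created policy", response.json()["id"])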

Enforcing security policies on mobile devices and apps

We have two options for enforcing security policies on mobile devices: Intune Mobile Device Management (MDM) and Intune Mobile Application Management (MAM). In both cases, once data access is granted, we want to control what the user does with the data. For example, if a user accesses a document with a corporate identity, we want to prevent that document from being saved in an unprotected consumer storage location or from being shared with a consumer communication or chat app. With Intune MAM policies in place, users can only transfer or copy data within trusted apps such as Office 365 or Adobe Acrobat Reader, and can only save it to trusted locations such as OneDrive or SharePoint.

Intune ensures that the device configuration aspects of the endpoint are centrally managed and controlled. Device management through Intune enables endpoint provisioning, configuration, automatic updates, device wipe, or other remote actions. Device management requires the endpoint to be enrolled with an organizational account and allows for greater control over things like disk encryption, camera usage, network connectivity, certificate deployment, and so on.

Mobile Device Management (MDM)

  1. First, using Intune, let’s apply Microsoft’s recommended security settings to Windows 10 devices to protect corporate data (Windows 10 1809 or later required).
  2. Ensure your devices are patched and up to date using Intune—check out our guidance for Windows 10 and iOS.
  3. Finally, we recommend ensuring your devices are encrypted to protect data at rest. Intune can manage a device’s built-in disk encryption across both macOS and Windows 10.

Meanwhile, Intune MAM is concerned with management of the mobile and desktop apps that run on endpoints. Where user privacy is a higher priority, or the device is not owned by the company, app management makes it possible to apply security controls (such as Intune app protection policies) at the app level on non-enrolled devices. The organization can ensure that only apps that comply with its security controls, running on approved devices, can be used to access email or files or browse the web.

With Intune, MAM is possible for both managed and unmanaged devices. For example, a user’s personal phone (which is not MDM-enrolled) may have apps that receive Intune app protection policies to contain and protect corporate data after it has been accessed. Those same app protection policies can be applied to apps on a corporate-owned and enrolled tablet. In that case, the app-level protections complement the device-level protections. If the device is also managed and enrolled with Intune MDM, you can choose not to require a separate app-level PIN if a device-level PIN is set, as part of the Intune MAM policy configuration.

Mobile Application Management (MAM)

  1. To protect your corporate data at the application level, configure Intune MAM policies for corporate apps (a programmatic sketch follows this list). MAM policies offer several ways to control access to your organizational data from within apps:
    • Configure data relocation policies like save-as restrictions for saving organization data or restrict actions like cut, copy, and paste outside of organizational apps.
    • Configure access policy settings like requiring simple PIN for access or blocking managed apps from running on jailbroken or rooted devices.
    • Configure automatic selective wipe of corporate data for noncompliant devices using MAM conditional launch actions.
    • If needed, create exceptions to the MAM data transfer policy to and from approved third-party apps.
  2. Next, we want to set up app-based Conditional Access policies to ensure only approved corporate apps access corporate data.
  3. Finally, using app configuration (appconfig) policies, Intune can help eliminate app setup complexity or issues, make it easier for end users to get going, and ensure better consistency in your security policies. Check out our guidance on assigning configuration settings.
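
To make item 1 concrete, here is a minimal sketch of creating an iOS app protection policy through Microsoft Graph. It assumes the DeviceManagementApps.ReadWrite.All permission; the property names shown are illustrative and worth verifying against the current Graph reference before use:

# Minimal sketch: an Intune app protection (MAM) policy for iOS that requires
# a PIN and keeps corporate data inside managed apps.
import requests

ACCESS_TOKEN = "<token acquired via OAuth2 client credentials flow>"

policy = {
    "displayName": "iOS app protection baseline (sketch)",
    "pinRequired": True,                       # simple PIN for access
    "dataBackupBlocked": True,                 # keep corporate data out of backups
    "deviceComplianceRequired": True,          # block jailbroken/rooted devices
    "allowedOutboundDataTransferDestinations": "managedApps",
    "allowedOutboundClipboardSharingLevel": "managedAppsWithPasteIn",
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/deviceAppManagement/iosManagedAppProtections",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
)
response.raise_for_status()
print("Created app protection policy", response.json()["id"])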

Conclusion

We hope the above helps you deploy and successfully incorporate devices into your Zero Trust strategy. Make sure to check out the other deployment guides in the series by following the Microsoft Security blog. For more information on Microsoft security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Zero Trust and its role in securing the new normal

May 26th, 2020

As the global crisis around COVID-19 continues, security teams have been forced to adapt to a rapidly evolving security landscape. Schools, businesses, and healthcare organizations are all getting work done from home on a variety of devices and locations, extending the potential security attack surface.

While we continue to help our customers enable secure access to apps in this “new normal,” we’re also thinking about the road ahead, and about the many organizations that will still need to adapt their security model to support remote work. This is especially important given that bad actors are using network access solutions like VPN as a trojan horse to deploy ransomware, and the number of COVID-19-themed attacks has increased and evolved.

Microsoft and Zscaler have partnered to provide a glimpse into how security will change in a post-COVID-19 world.

Accelerating to Zero Trust

“We’ve seen two years’ worth of digital transformation in two months.”
—Satya Nadella, CEO, Microsoft

With the bulk of end users now working remotely, organizations were forced to consider alternate ways of achieving modern security controls. Legacy network architectures that route all remote traffic through a central corporate datacenter are suddenly under enormous strain due to massive demand for remote work and rigid appliance capacity limitations. This creates latency for users, hurting productivity, and requires additional appliances that can take 30, 60, or even 90 days just to ship.

To avoid these challenges, many organizations were able to enable work from home by transitioning their existing network infrastructure and capabilities to a Zero Trust security framework instead.

The Zero Trust framework empowers organizations to limit access to specific apps and resources only to the authorized users who are allowed to access them. The integrations between Microsoft Azure Active Directory (Azure AD) and Zscaler Private Access embody this framework.

For the companies who already had a proof of concept underway for their Zero Trust journey, COVID-19 served as an accelerator, moving up the timelines for adoption. The ability to separate application access from network access, and to secure application access based on identity and user context, such as date/time, geolocation, and device posture, was critical for IT’s ability to enable remote work. Cloud-delivered technologies such as Azure AD and Zscaler Private Access (ZPA) have helped ensure fast deployment, scalability, and seamless experiences for remote users.

Both Microsoft and Zscaler anticipate that if not already moving toward a Zero Trust model, organizations will accelerate this transition and start to adopt one.

Securing flexible work going forward

While some organizations have had to support remote workers in the past, many are now forced to make the shift from a technical and cultural standpoint. As social distancing restrictions start to loosen, instead of remote everything we’ll begin to see organizations adopt more flexible work arrangements for their employees. Regardless of where employees are, they’ll need to be able to securely access any application, including the mission-critical “crown jewel” apps that may run on-premises and still rely on legacy protocols like HTTP or LDAP. To simplify protecting access to apps in this flexible working style, there should be a single policy per user that provides access to an application whether the user is remote or at headquarters.

Zscaler Private Access and Azure AD help organizations enable single sign-on and enforce Conditional Access policies to ensure authorized users can securely access exactly the apps they need. This includes mission-critical applications that run on-premises and may have SOC 2 and ISO 27001 compliance needs.

Today, the combination of ZPA and Azure AD is already helping organizations adopt flexible work arrangements and ensure seamless and secure access to their applications.

Remote onboarding or offboarding for a distributed workforce

With remote and flexible work arrangements becoming the norm, organizations will need to consider how to best onboard or offboard a distributed workforce and ensure the right access can be granted when employees join, change, or leave roles. To minimize disruption, organizations will need to enable and secure bring-your-own devices (BYOD) or leverage solutions like Windows Autopilot that can help users set up new devices without any IT involvement.

To ensure employees can access applications on day one, automating the provisioning of user accounts will be critical for productivity. The SCIM 2.0 standard, adopted by both Microsoft and Zscaler, helps automate simple actions, such as creating or updating users, adding users to groups, or deprovisioning users from applications. Azure AD user provisioning can help manage the end-to-end identity lifecycle and automate policy-based provisioning and deprovisioning of user accounts for applications. The ZPA + Azure AD SCIM 2.0 configuration guide shows how this works.
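
For a sense of what SCIM 2.0 looks like on the wire, here is a minimal sketch of the kind of “create user” request a provisioning service sends to an application’s SCIM endpoint. The endpoint URL, token, and user details are placeholders; the payload shape follows the SCIM core schema (RFC 7643):

# Minimal sketch: a SCIM 2.0 "create user" call against a hypothetical
# application endpoint, similar to what Azure AD provisioning sends.
import requests

SCIM_BASE = "https://app.example.com/scim/v2"  # hypothetical SCIM endpoint
TOKEN = "<long-lived bearer token configured for the provisioning connector>"

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "adele.vance@contoso.com",
    "name": {"givenName": "Adele", "familyName": "Vance"},
    "active": True,
    "emails": [{"value": "adele.vance@contoso.com", "type": "work", "primary": True}],
}

response = requests.post(
    f"{SCIM_BASE}/Users",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/scim+json"},
    json=new_user,
)
response.raise_for_status()
print("Provisioned user with SCIM id", response.json()["id"])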

Powering security going forward

Security and IT teams are already under strain in this new environment, and adding an impending economic downturn into the equation means they’ll need to do more with less. The responsibility of selecting the right technology falls to security leaders. Together, Microsoft and Zscaler can help deliver secure access to applications and data on all the devices accessing your network, while empowering employees with simpler, more productive experiences. This is the power of the cloud and some of the industry’s deepest integrations. We look forward to working with you on what your security might look like after COVID-19.

Stay safe.

For more information on Microsoft Zero Trust, visit our website: Zero Trust security framework. Learn more about our guidance related to COVID-19 here and bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Build support for open source in your organization

May 21st, 2020

Have you ever stared at the same lines of code for hours only to have a coworker identify a bug after just a quick glance? That’s the power of community! Open source software development is guided by the philosophy that a diverse community will produce higher quality code by allowing anyone to review and contribute. Individuals and large enterprises, like Microsoft, have embraced open source to engage people who can help make solutions better. However, not all open source projects are equivalent in quality or support. And, when it comes to security tools, many organizations hesitate to adopt open source. So how should you approach selecting and onboarding the right open source solutions for your organization? Why don’t we ask the community!

Earlier this year at the RSA 2020 Conference, I had the pleasure of sitting on the panel, Open Source: Promise, Perils, and the Path Ahead. Joining me were Inigo Merino, CEO of Cienaga Systems; Dr. Kelley Misata, CEO, Sightline Security; and Lenny Zeltser, CISO, Axonius. In addition to her role at Sightline Security, Kelley also serves as the President and Executive Director of the Open Information Security Foundation (OISF), which builds Suricata, an open source threat detection engine. Lenny created and maintains a Linux distribution called REMnux that organizations use for malware analysis. Ed Moyle, a Partner at SecurityCurve, served as the moderator. Today I’ll share our collective advice for selecting open source components and persuading organizations to approve them.

Which open source solutions are right for your project?

Nobody wants to reinvent the wheel—or should I say, Python—during a big project. You’ve got enough to do already! Often it makes sense to turn to pre-built open source components and libraries. They can save you countless hours, freeing up time to focus on the features that differentiate your product. But how should you decide when to opt for open source? When presented with numerous choices, how do you select the best open source solutions for your company and project? Here are some of the recommendations we discussed during the panel discussion.

  1. Do you have the right staff? A new environment can add complexity to your project. It helps if people on the team have familiarity with the tool or have time to learn it. If your team understands the code, you don’t have to wait for a fix from the community to address bugs. As Lenny said at the conference, “The advantage of open source is that you can get in there and see what’s going on. But if you are learning as you go, it may slow you down. It helps to have the knowledge and capability to support the environment.”
  2. Is the component widely adopted? If lots of developers are using the software, it’s more likely the code is stable. With more eyes on the code, problems get surfaced and resolved faster.
  3. How active is the community? Ideally, the library and components that you use will be maintained and enhanced for years after you deploy them, but there’s no guarantee—that’s also true for commercial options, by the way. An active community makes it more likely that the project will be supported. Check to see when the code base was last updated, and confirm that members answer questions from users.
  4. Is there a long-term vision for the technology? Look for a published roadmap for the project. A roadmap will give you confidence that people are committed to supporting and enhancing the project. It will also help you decide if the open source project aligns with your product roadmap. “For us, a big signal is the roadmap. Does the community have a vision? Do they have a path to get there?” said Kelley.
  5. Is there a commercial organization associated with the project? Another way to identify a project that is here for the long term is if there is a commercial interest associated with it. If a commercial company is providing funding or support to an open source project, it’s more likely that the support will continue even as community members change. Lenny told the audience, “If there is a commercial funding arm, that gives me peace of mind that the tool is less likely to just disappear.”

Getting legal (or executives or product owners) on board

Choosing the perfect open source solution for your project won’t help if you can’t persuade product owners, legal departments, or executives to approve it. Many organizations and individuals worry about the risks associated with using open source. They may wonder if legal issues will arise if they don’t use the software properly. If the software lacks support or includes security bugs, will the component put the company at risk? The following tips can help you mitigate these concerns:

  1. Adopt change management methodologies: Organizational change is hard because the unknown feels riskier than the known. Leverage existing risk management structures to help your organization evaluate and adopt open source. Familiar processes will help others become more comfortable with new tools. As Inigo said, “Recent research shows that in order to get change through, you need to reduce the perceived risk of adopting said change. To lower those barriers, leverage what the organization already has in terms of governance to transform this visceral fear of the unknown into something that is known and can be managed through existing processes.”
  2. Implement component lifecycle management: Develop a process to determine which components are acceptable for people in your organization to use. By testing components or doing static and dynamic analysis, you reduce the level of risk and can build more confidence with executives.
  3. Identify a champion: If someone in your organization is responsible for mitigating concerns with product owners and legal teams, it will speed up the process.
  4. Enlist help from the open source project: Many open source communities include people who can help you make the business case to your approvers. As Kelley said, “It’s also our job in the open source community to help have these conversations. We can’t just sit passively by and let the enterprise folks figure it out. We need to evangelize our own message. There are many open source projects with people like Lenny and me who will help you make the case.”

Microsoft believes that the only way we can solve our biggest security challenges is to work together. Open source is one way to do that. Next time you look for an open source solution, consider trying today’s tips to help you select the right tools and gain acceptance in your organization.

Learn more

Next month, I’ll follow up this post with more details on how to implement component lifecycle management at your organization.

In the meantime, explore some of Microsoft’s open source solutions, such as the Microsoft Graph Toolkit, DeepSpeed, msticpy, and Attack Surface Analyzer.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to me on LinkedIn or Twitter.

Success in security: reining in entropy

May 20th, 2020

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities it carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. Where incident response and recovery are concerned, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of the cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As responders and investigators, we want to discover what happened, what the cause was, what the damage is, and how best we can fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies: things that stand out from the normal background noise of an operating network. In essence, we identify what’s truly unique in the system and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data was discovered on the dark web, dated from two weeks ago. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically, we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment: unique users, system interaction times, protocols, binary files, data files, services, registry keys, and configurations (such as rogue registry keys).

Our investigation reveals a unique service running on a member server with SQL Server. Analysis shows the service has an autostart entry in the registry that, every time the system is rebooted, launches it from a file in the c:\windows\perflogs directory, an unusual location for an autostart. We haven’t seen this service before, so we investigate all the systems on the network to locate other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three, one of which is a Domain Controller.
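
As a toy illustration of that hunt on a single host, the Python sketch below enumerates the common Run keys and flags autostart entries launching from unusual paths like perflogs. It is a deliberate simplification: service persistence like the example above lives under HKLM\SYSTEM\CurrentControlSet\Services and can be swept the same way, and a real investigation would query every host at once through an EDR platform rather than running a script per machine:

# Minimal sketch: flag Run-key autostart entries that launch from unusual
# locations on the local Windows host (run with Python on Windows).
import winreg

SUSPICIOUS_MARKERS = (r"\perflogs", r"\temp")  # rarely used by legitimate autostarts

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        with winreg.OpenKey(hive, path) as key:
            index = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                if any(m in str(value).lower() for m in SUSPICIOUS_MARKERS):
                    print(f"Suspicious autostart: {path}\\{name} -> {value}")
                index += 1
    except FileNotFoundError:
        continue  # key not present in this hive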

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of attacks, what data may have been pilfered, and foreign Internet computers calling the shots and allowing access to the environment. Additionally, any recovery efforts will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks; in many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the cost required to return to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.

Operational resilience in a remote work world

May 18th, 2020

Microsoft CEO Satya Nadella recently said, “We have seen two years’ worth of digital transformation in two months.” This is a result of many organizations having to adapt to the new world of document sharing and video conferencing as they become distributed organizations overnight.

At Microsoft, we understand that while the current health crisis we face together has served as this forcing function, some organizations might not have been ready for this new world of remote work, financially or organizationally. Just last summer, a simple lightning strike caused the U.K.’s National Grid to suffer the biggest blackout in decades. It affected homes across the country, shut down traffic signals, and closed some of the busiest train stations in the middle of the Friday evening rush hour; trains needed to be manually rebooted, causing delays and disruptions. And when malware shut down the cranes and security gates at Maersk shipping terminals, as well as most of the company’s IT network, from the booking site to the systems handling cargo manifests, it took two months to rebuild all the software systems and three months before all cargo in transit was tracked down. Recovery depended on a single server that happened to be offline during the attack because its power had been cut.

Cybersecurity underpins operational resilience as more organizations adapt to enabling secure remote work options, whether in the short or long term. Whether a disruption is natural or manmade, the difference between success and struggle comes down to a strategic combination of planning, response, and recovery. To maintain cyber resilience, organizations should regularly evaluate their risk threshold and their ability to operationally execute the processes through a combination of human efforts and technology products and services.

My advice is often a three-pronged approach: turn on multi-factor authentication (MFA) for 100 percent of your employees, 100 percent of the time; use Secure Score to increase your organization’s security posture; and run a mature patching program that includes containment and isolation of devices that cannot be patched. But we must also understand that not every organization’s cybersecurity team is as mature as another’s.

Organizations must now be able to provide their people with the right resources so they are able to securely access data, from anywhere, 100 percent of the time. Every person with corporate network access, including full-time employees, consultants, and contractors, should be regularly trained to develop a cyber-resilient mindset. They shouldn’t just adhere to a set of IT security policies around identity-based access control, but they should also be alerting IT to suspicious events and infections as soon as possible to help minimize time to remediation.

Our new normal means that risks are no longer limited to commonly recognized sources such as cybercriminals, malware, or even targeted attacks. Moving to a secure remote work environment without a resilience plan that includes cyber resilience increases an organization’s risk.

Before COVID-19, we knew that while a majority of firms have a disaster recovery plan on paper, nearly a quarter never test it, and only 42 percent of global executives are confident their organization could recover from a major cyber event without it affecting their business.

Operational resilience cannot be achieved without a true commitment to, and investment in, cyber resilience. We want to help empower every organization on the planet by continuing to share our learnings to help you reach the state where core operations and services won’t be disrupted by geopolitical or socioeconomic events, natural disasters, or even cyber events.

Learn more about our guidance related to COVID-19 here, and bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Open-sourcing new COVID-19 threat intelligence

May 14th, 2020

A global threat requires a global response. While the world faces the common threat of COVID-19, defenders are working overtime to protect users all over the globe from cybercriminals using COVID-19 as a lure to mount attacks. As a security intelligence community, we are stronger when we share information that offers a more complete view of attackers’ shifting techniques. This more complete view enables us all to be more proactive in protecting, detecting, and defending against attacks.

At Microsoft, our security products provide built-in protections against these and other threats, and we’ve published detailed guidance to help organizations combat current threats (Responding to COVID-19 together). Our threat experts are sharing examples of malicious lures and we have enabled guided hunting of COVID-themed threats using Azure Sentinel Notebooks. Microsoft processes trillions of signals each day across identities, endpoint, cloud, applications, and email, which provides visibility into a broad range of COVID-19-themed attacks, allowing us to detect, protect, and respond to them across our entire security stack. Today, we take our COVID-19 threat intelligence sharing a step further by making some of our own indicators available publicly for those that are not already protected by our solutions. Microsoft Threat Protection (MTP) customers are already protected against the threats identified by these indicators across endpoints with Microsoft Defender Advanced Threat Protection (ATP) and email with Office 365 ATP.

In addition, we are publishing these indicators for those not protected by Microsoft Threat Protection to raise awareness of attackers’ shift in techniques, how to spot them, and how to enable your own custom hunting. The indicators are available in two ways: in the Azure Sentinel GitHub repo and through the Microsoft Graph Security API. For enterprise customers who use MISP for storing and sharing threat intelligence, these indicators can easily be consumed via a MISP feed.
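
As a sketch of the Microsoft Graph Security API route, the Python below reads indicators from the tiIndicators endpoint, which was in beta at the time of writing. It assumes an app registration with the ThreatIndicators.ReadWrite.OwnedBy permission and a valid access token; endpoint and property names may change as the API matures:

# Minimal sketch: read threat intelligence indicators from the Microsoft
# Graph Security API (beta). Paging and error handling are omitted.
import requests

ACCESS_TOKEN = "<token acquired via OAuth2 client credentials flow>"

response = requests.get(
    "https://graph.microsoft.com/beta/security/tiIndicators",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$top": "50"},
)
response.raise_for_status()

for indicator in response.json().get("value", []):
    print(indicator.get("fileHashValue"), indicator.get("description"))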

This threat intelligence is provided for use by the wider security community, as well as customers who would like to perform additional hunting, as we all defend against malicious actors seeking to exploit the COVID crisis.

This COVID-specific threat intelligence feed represents a start at sharing some of Microsoft’s COVID-related IOCs. We will continue to explore ways to improve the data over the duration of the crisis. While some threats and actors are still best defended more discreetly, we are committed to greater transparency and to taking community feedback on what types of information are most useful to defenders in protecting against COVID-related threats. This is a time-limited feed. We are maintaining it through the peak of the outbreak to help organizations focus on recovery.

Protection in Azure Sentinel and Microsoft Threat Protection

Today’s release includes file hash indicators related to email-based attachments identified as malicious and attempting to trick users with COVID-19 or Coronavirus-themed lures. The guidance below provides instructions on how to access and integrate this feed in your own environment.

For Azure Sentinel customers, these indicators can either be imported directly into Azure Sentinel using a Playbook or accessed directly from queries.

The Azure Sentinel Playbook that Microsoft has authored will continuously monitor and import these indicators directly into your Azure Sentinel ThreatIntelligenceIndicator table. This Playbook will match the indicators against your event data and generate security incidents when the built-in threat intelligence analytic templates detect activity associated with these indicators.

These indicators can also be accessed directly from Azure Sentinel queries as follows:

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"));
covidIndicators

A sample detection query is also provided in the Azure Sentinel GitHub. With the table definition above, it is as simple as:

  1. Join the indicators against the logs ingested into Azure Sentinel as follows:
covidIndicators
| join (
CommonSecurityLog
| where TimeGenerated >= ago(7d)
| where isnotempty(FileHash)
) on $left.FileHashValue == $right.FileHash
  2. Then, select “New alert rule” to configure Azure Sentinel to raise incidents based on this query returning results.

You should begin to see Alerts in Azure Sentinel for any detections related to these COVID threat indicators.

Microsoft Threat Protection provides protection for the threats associated with these indicators. Attacks with these COVID-19-themed indicators are blocked by Office 365 ATP and Microsoft Defender ATP.

While MTP customers are already protected, they can also make use of these indicators for additional hunting scenarios using the MTP Advanced Hunting capabilities.

Here is a hunting query to see if any process created a file matching a hash on the list.

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"))
| where FileHashType == 'sha256' and TimeGenerated > ago(1d);
covidIndicators
| join (
DeviceFileEvents
| where Timestamp > ago(1d)
| where ActionType == 'FileCreated'
| take 100
) on $left.FileHashValue == $right.SHA256

Advanced hunting in Microsoft Defender Security Center.

This is an Advanced Hunting query in MTP that searches for any recipient of an attachment on the indicator list and checks whether any recent anomalous log-ons happened on their machine. While COVID threats are blocked by MTP, users targeted by these threats may be at risk for non-COVID-related attacks, and MTP is able to join data across device and email to investigate them.

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"))
| where FileHashType == 'sha256' and TimeGenerated > ago(1d);
covidIndicators
| join (
EmailAttachmentInfo
| where Timestamp > ago(1d)
| project NetworkMessageId, SHA256
) on $left.FileHashValue == $right.SHA256
| join (
EmailEvents
| where Timestamp > ago(1d)
) on NetworkMessageId
| project TimeEmail = Timestamp, Subject, SenderFromAddress, AccountName = tostring(split(RecipientEmailAddress, "@")[0])
| join (
DeviceLogonEvents
| project LogonTime = Timestamp, AccountName, DeviceName
) on AccountName
| where (LogonTime - TimeEmail) between (0min .. 90min)
| take 10

Advanced hunting in Microsoft 365 security.

Connecting a MISP instance to Azure Sentinel

The indicators published on the Azure Sentinel GitHub page can be consumed directly via MISP’s feed functionality. We have published details on doing this at this URL: https://aka.ms/msft-covid19-misp. Please refer to the Azure Sentinel documentation on connecting data from threat intelligence providers.
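
As a rough sketch of what the MISP side can look like, the PyMISP snippet below registers the GitHub-hosted CSV as a feed. Your MISP URL and API key are placeholders, and the exact feed settings should be taken from the guidance at the aka.ms link above:

# Minimal sketch: register the Microsoft COVID-19 indicator CSV as a MISP
# feed using PyMISP. URL and API key are placeholders for your instance.
from pymisp import PyMISP, MISPFeed

misp = PyMISP("https://misp.example.com", "<your MISP API key>", ssl=True)

feed = MISPFeed()
feed.name = "Microsoft COVID-19 threat indicators"
feed.provider = "Microsoft"
feed.url = "https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"
feed.source_format = "csv"
feed.enabled = True

result = misp.add_feed(feed)
print("Feed registered:", result)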

Using the indicators if you are not an Azure Sentinel or MTP customer

The Azure Sentinel GitHub repo is public and available to everyone: https://aka.ms/msft-covid19-Indicators

Examples of phishing campaigns in this threat intelligence

The following is a small sample of the types of COVID-themed phishing lures using email attachments that are represented in this feed. Beneath each figure description is the file name of the malicious attachment.

Figure 1: Spoofing WHO branding with “cure” and “vaccine” messaging and a malicious .gz file.

Name: CURE FOR CORONAVIRUS_pdf.gz

Figure 2: Spoofing Red Cross Safety Tips with a malicious .docm file.

Name: COVID-19 SAFETY TIPS.docm

Figure 3: South African banking lure promoting COVID-19 financial relief with malicious .html files.

Name: SBSA-COVID-19-Financial Relief.html

Figure 4: French-language spoofed correspondence from the WHO with a malicious XLS macro file.

Name: ✉-Covid-19 Relief Plan5558-23636sd.htm

If you have questions or feedback on this COVID-19 feed, please email msft-covid19-ti@microsoft.com.

CISO stress-busters: post #1 overcoming obstacles

May 11th, 2020

As part of the launch of the U.S. space program’s moon shot, President Kennedy famously said we do these things “not because they are easy, but because they are hard.” The same can be said for the people responsible for security at their organizations; it is not a job one takes because it is easy. But it is critically important to keep our digital lives and work safe. And for the CISOs and leaders of the world, it is a job that is more than worth the hardships.

Recent research from Nominet paints a concerning picture of a few of those hardships. Forty-eight percent of CISO respondents indicated work stress had negatively impacted their mental health; that is almost double the number from last year’s survey. Thirty-one percent reported job stress had negatively impacted their physical health, and 40 percent have seen job stress impact their personal lives. Add a fairly rapid churn rate (26 months on average) to all that stress, and it’s clear CISOs are managing a tremendous amount of stress every day. And when crises hit, from incident response after a breach to a suddenly remote workforce after COVID-19, that stress only shoots higher.

Which is why we’re starting this new blog series called “CISO stress-busters.” In the words of CISOs from around the globe, we’ll be sharing insights, guidance, and support from peers on the front lines of the cyber workforce. Kicking us off—the main challenges that CISOs face and how they turn those obstacles into opportunity. The goal of the series is to be a bit of chicken (or chik’n for those vegans out there) soup for the CISO’s soul.

Today’s post features wisdom from three CISOs/Security Leaders:

  • TM Ching, Security CTO at DXC Technology
  • Jim Eckart, (former) CISO at Coca-Cola
  • Jason Golden, CISO at Mainstay Technologies

Clarifying contribution

Ask five different CEOs what their CISOs do, and after the high-level “manage security” answer, you’ll probably get five very different explanations. This is partly because CISO responsibility can vary widely from company to company. So, it’s no surprise that many of the CISOs we interviewed touched on this point.

TM Ching summed it up this way, “Demonstrating my role to the organization can be a challenge—a role like mine may be perceived as symbolic” or that security is just here to “slow things down.” For Jason, “making sure that business leaders understand the difference between IT Operations, Cybersecurity, and InfoSec” can be difficult because execs “often think all of those disciplines are the same thing” and that since IT Ops has the products and solutions, they own security. Jim also bumped up against confusion about the security role with multiple stakeholders pushing and pulling in different directions like “a CIO who says ‘here is your budget,’ a CFO who says ‘why are you so expensive?’ and a general counsel who says ‘we could be leaking information everywhere.’”

What works:

  • Educate Execs—about the role of a CISO. Helping them “understand that it takes a program, that it’s a discipline.” One inflection point is after a breach: “you may be sitting there with an executive, the insurance company, their attorneys, maybe a forensics company and it always looks the same. The executive is looking down the table at the wide-eyed IT person saying ‘What happened?’” It’s an opportunity to educate, to help “make sure the execs understand the purpose of risk management.”—Jason Golden. To see how to do this, watch Microsoft CISO Series Episode 2 Part 1: Security is everyone’s Business
  • Show Don’t Tell—“It is important to constantly demonstrate that I am here to help them succeed, and not to impose onerous compliance requirements that stall their projects.”—TM Ching
  • Accountability Awareness—CISOs do a lot, but one thing they shouldn’t do is to make risk decisions for the business in a vacuum. That’s why it’s critical to align “all stakeholders (IT, privacy, legal, financial, security, etc.) around the fact that cybersecurity and compliance are business risk issues and not IT issues. IT motions are (and should be) purely in response to the business’ decision around risk tolerance.”—Jim Eckart

Exerting influence

Fans of Boehm’s curve know that the earlier security can be introduced into a process, the less expensive it is to fix defects and flaws. But it’s not always easy for CISOs to get security a seat at the table whether it’s early in the ideation process for a new customer facing application or during financial negotiations to move critical workloads to the cloud. As TM put it, “Exerting influence to ensure that projects are secured at Day 0. This is possibly the hardest thing to do.” And because “some business owners do not take negative news very well” telling them their new app baby is “security ugly” the day before launch can be a gruesome task. And as Jason pointed out, “it’s one thing to talk hypothetically about things like configuration management and change management and here are the things that you need to do to meet those controls so you can keep your contract. It’s a different thing to get that embedded in operations so that IT and HR all the way through finance are following the rules for change management and configuration management.”

What Works:

  • Negotiate engagement—To avoid the last minute “gotchas” or bolting on security after a project has deployed, get into the conversation as early as possible. This isn’t easy, but as TM explains, it can be done. “It takes a lot of negotiations to convince stakeholders why it will be beneficial for them in the long run to take a pause and put the security controls in place, before continuing with their projects.”
  • Follow frameworks—Well-known frameworks like the NIST Cybersecurity Framework, NIST SP800-53, and SP800-37 can help CISOs “take things from strategy to operations” by providing baselines and best practices for building security into the entire organization and systems lifecycle. And that will pay off in the long run; “when the auditors come calling, they’re looking for evidence that you’re following your security model and embedding that throughout the organization.” —Jason

Cultivating culture

Wouldn’t it be wonderful if every company had a security mindset and understood the benefits of having a mature, well-funded security and risk management program? If every employee understood what a phish looks like and why they should report it? Unfortunately, most companies aren’t laser focused on security, leaving that education work up to the CISO and their team. And having those conversations with stakeholders that sometimes have conflicting agendas requires technical depth and robust communication skills. That’s not easy. As Jim points out, “it’s a daunting scope of topics to be proficient in at all levels.”

What works:

  • Human firewalls—All the tech controls in the world won’t stop 100 percent of attacks, people need to be part of the solution too. “We can address administrative controls, technical controls, physical controls, but you also need to address the culture and human behavior, or the human firewalls. You know you’re only going to be marginally successful if you don’t engage employees too.” —Jason
  • Know your audience—CISOs need to cultivate “depth and breadth. On any given day, I needed to move from board-level conversations (where participants barely understand security) all the way to the depths of zero day vulnerabilities, patching, security architecture.” —Jim

Did you find these insights helpful? What would you tell your fellow CISOs about overcoming obstacles? What works for you? Please reach out to me on LinkedIn and let me know what you thought of this article and if you’re interested in being interviewed for one of our upcoming posts.

The post CISO stress-busters: post #1 overcoming obstacles appeared first on Microsoft Security.

How to gain 24/7 detection and response coverage with Microsoft Defender ATP

May 6th, 2020 No comments

This blog post is part of the Microsoft Intelligence Security Association guest blog series. To learn more about MISA, go here.

Whether you’re a security team of one or a dozen, detecting and stopping threats around the clock is a challenge. Security incidents don’t happen exclusively during business hours: attackers often wait until the late hours of the night to breach an environment.

At Red Canary, we work with security teams of all shapes and sizes to improve detection and response capabilities. Our Security Operations Team investigates threats in customer environments 24/7/365, removes false positives, and delivers confirmed threats with context. We’ve seen teams run into a wide range of issues when trying to establish after-hours coverage on their own, including:

  • For global enterprises, around-the-clock monitoring can significantly increase the pressure on a U.S.–based security team. If you have personnel around the world, a security team in a single time zone isn’t sufficient to cover the times that computing assets are used in those environments.
  • In smaller companies that don’t have global operations, the security team is more likely to be understaffed and unable to handle 24/7 security monitoring without stressful on-call schedules.
  • For the security teams of one, being “out of office” is a foreign concept. You’re always on. And you need to set up some way to monitor the enterprise while you’re away.

Microsoft Defender Advanced Threat Protection (ATP) is an industry-leading endpoint security solution that’s built into Windows, with capabilities extended to Mac and Linux servers. Red Canary unlocks the telemetry delivered from Microsoft Defender ATP and investigates every alert, enabling you to immediately increase your detection coverage and waste no time on false positives.

Here’s how those who haven’t started with Red Canary yet can answer the question, “How can I support my 24/7 security needs with Microsoft Defender ATP?”

No matter how big your security team is, the most important first step is notifying the right people based on an on-call schedule. In this post, we’ll describe two different ways of getting Microsoft Defender ATP alerts to your team 24×7 and how Red Canary has implemented this for our customers.

Basic 24/7 via email

Microsoft Defender Security Center allows you to send all Microsoft Defender ATP alerts to an email address. You can set up email alerts under Settings → Alert notifications.

MISA1

Email notification settings in Microsoft Defender Security Center.

These emails will be sent to your team and should be monitored for high-severity situations after hours.

If sent to a ticketing system, these emails can trigger tickets or after-hours pages to be created for your security team. We recommend limiting the alerts to medium and high severity so that you won’t be bothered for informational or low alerts.

MISA2

Setting up alert emails in Microsoft Defender ATP to be sent to a ticketing system.

Now any future alerts will create a new ticket in your ticketing system, where you can assign security team members to on-call rotations and notify on-call personnel of new alerts (if supported). Once on-call personnel receive the notification, they can log into Microsoft Defender Security Center for further investigation and triage.

Enhanced 24/7 via APIs

What if you want to ingest alerts into a system that doesn’t use email? You can do this by using the Microsoft Defender ATP APIs. First, you’ll need an authentication token. You can get the token like we do here:

MISA3

API call to retrieve authentication token.
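
For illustration, here is a minimal Python sketch of that token request, following the documented client-credentials flow for the Microsoft Defender ATP API (the tenant ID, app ID, and secret are placeholders for your own Azure AD app registration):

import requests

TENANT_ID = "your-tenant-id"     # placeholder
APP_ID = "your-app-id"           # placeholder
APP_SECRET = "your-app-secret"   # placeholder

def get_token() -> str:
    # Client-credentials flow: request a token scoped to the
    # Microsoft Defender ATP API resource.
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token"
    body = {
        "resource": "https://api.securitycenter.windows.com",
        "client_id": APP_ID,
        "client_secret": APP_SECRET,
        "grant_type": "client_credentials",
    }
    resp = requests.post(url, data=body)
    resp.raise_for_status()
    return resp.json()["access_token"]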

Once you’ve stored the authentication token you can use it to poll the Microsoft Defender ATP API and retrieve alerts from Microsoft Defender ATP. Here’s an example of the code to pull new alerts.

MISA4

API call to retrieve alerts from Microsoft Defender ATP.
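
A rough Python equivalent of that polling code might look like this (the 10-minute lookback is an arbitrary example; alertCreationTime is the documented OData filter property):

from datetime import datetime, timedelta, timezone
import requests

def get_new_alerts(token: str, lookback_minutes: int = 10) -> list:
    # Fetch alerts created since the last poll using an OData filter.
    since = datetime.now(timezone.utc) - timedelta(minutes=lookback_minutes)
    url = "https://api.securitycenter.windows.com/api/alerts"
    params = {"$filter": f"alertCreationTime gt {since.strftime('%Y-%m-%dT%H:%M:%SZ')}"}
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(url, headers=headers, params=params)
    resp.raise_for_status()
    # Each returned alert carries fields such as id, title, severity, and status.
    return resp.json()["value"]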

The API only returns a subset of the data associated with each alert. Here’s an example of what you might receive.

MISA5

Example of a Microsoft Defender ATP alert returned from the API.

You can then take this data and ingest it into any of your internal tools. You can learn more about how to access Microsoft Defender ATP APIs in the documentation. Please note, the limited information included in an alert email or API response is not enough to triage the behavior. You will still need to log into the Microsoft Defender Security Center to find out what happened and take appropriate action.

24/7 with Red Canary

By enabling Red Canary, you supercharge your Microsoft Defender ATP deployment by adding a proven 24×7 security operations team who are masters at finding and stopping threats, and an automation platform to quickly remediate and get back to business.

Red Canary continuously ingests all of the raw telemetry generated from your instance of Microsoft Defender ATP as the foundation for our service. We also ingest and monitor Microsoft Defender ATP alerts. We then apply thousands of our own proprietary analytics to identify potential threats that are sent 24/7 to a Red Canary detection engineer for review.

Here’s an overview of the process (to go behind the scenes of these operations check out our detection engineering blog series):

MISA6

Managed detection and response with Red Canary.

Red Canary monitors your Microsoft Defender ATP telemetry and alerts. When a threat is confirmed, our team creates a detection and sends it to you using a built-in automation framework that supports email, SMS, phone, Microsoft Teams/Slack, and more. Below is an example of what one of those detections might look like.

MISA7

Red Canary confirms threats and prioritizes them so you know what to focus on.

At the top of the detection timeline you’ll receive a short description of what happened. The threat has already been examined by a team of detection engineers from Red Canary’s Cyber Incident Response Team (CIRT), so you don’t have to worry about triage or investigation. As you scroll down, you can quickly see the results of the investigation that Red Canary’s senior detection engineers have done on your behalf, including detailed notes that provide context to what’s happening in your environment:

MISA8

Notes from Red Canary senior detection engineers (in light blue) provide valuable context.

You’re only notified of true threats and not false positives. This means you can focus on responding rather than digging through data to figure out what happened.

What if you don’t want to be woken up, you’re truly unavailable, or you just want bad stuff immediately dealt with? Use Red Canary’s automation to handle remediation on the fly. You and your team can create playbooks in your Red Canary portal to respond to threats immediately, even if you’re unavailable.

MISA9

Red Canary automation playbook.

This playbook allows you to isolate the endpoint (using the Machine Action resource type in the Microsoft Defender ATP APIs) if Red Canary identifies suspicious activity. You also have the option to set up Automate playbooks that depend on an hourly schedule. For example, you may want to approve endpoint isolation during normal work hours, but use automatic isolation overnight:

MISA10

Red Canary Automate playbook to automatically remediate a detection.
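
For reference, a minimal sketch of the isolation call itself, using the documented Machine Action endpoint (the machine ID and comment are placeholders; Red Canary’s playbooks wrap this in their own automation):

import requests

def isolate_machine(token: str, machine_id: str, comment: str) -> dict:
    # Network-isolate a device. IsolationType may be "Full" or
    # "Selective" (Selective preserves Outlook/Teams/Skype traffic).
    url = f"https://api.securitycenter.windows.com/api/machines/{machine_id}/isolate"
    headers = {"Authorization": f"Bearer {token}"}
    body = {"Comment": comment, "IsolationType": "Full"}
    resp = requests.post(url, headers=headers, json=body)
    resp.raise_for_status()
    return resp.json()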

Getting started with Red Canary

Whether you’ve been using Microsoft Defender ATP since its preview releases or you’re just getting started, Red Canary is the fastest way to accelerate your security operations program. Immediate onboarding, increased detection coverage, and a 24/7 CIRT team are all at your fingertips.

Terence Jackson, CISO at Thycotic and Microsoft Defender ATP user, describes what it’s like working with Red Canary:

“I have a small team that has to protect a pretty large footprint. I know the importance of detecting, preventing, and stopping problems at the entry point, which is typically the endpoint. We have our corporate users but then we also have SaaS customers we have to protect. Currently my team tackles both, so for me it’s simply having a trusted partner that can take the day-to-day hunting/triage/elimination of false positives and only provide actionable alerts/intel, which frees my team up to do other critical stuff.”

Red Canary is the fastest way to enhance your detection coverage from Microsoft Defender ATP so you know exactly when and where to respond.

Contact us to see a demo and learn more.

The post How to gain 24/7 detection and response coverage with Microsoft Defender ATP appeared first on Microsoft Security.

Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2

May 5th, 2020 No comments

This is the sixth blog in the Lessons learned from the Microsoft SOC series designed to share our approach and experience from the front lines of our security operations center (SOC) protecting Microsoft and our Detection and Response Team (DART) helping our customers with their incidents. For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

COVID-19 and the SOC

Before we conclude the day in the life, we thought we would share an analyst’s eye view of the impact of COVID-19. Our analysts are mostly working from home now, and our cloud-based tooling approach enabled this transition to go pretty smoothly. The differences in attacks we have seen are mostly in the early stages of an attack, with phishing lures designed to exploit emotions related to the current pandemic and an increased focus on home firewalls and routers (using techniques like RDP brute-forcing attempts and DNS poisoning—more here). The attack techniques adversaries attempt to employ after that are fairly consistent with what they were doing before.

A day in the life—remediation

When we last left our heroes in the previous entry, our analyst had built a timeline of the potential adversary attack operation. Of course, knowing what happened doesn’t actually stop the adversary or reduce organizational risk, so let’s remediate this attack!

  1. Decide and act—As the analyst develops a high enough level of confidence that they understand the story and scope of the attack, they quickly shift to planning and executing cleanup actions. While this appears as a separate step in this particular description, our analysts often execute on cleanup operations as they find them.

Big Bang or clean as you go?

Depending on the nature and scope of the attack, analysts may clean up attacker artifacts as they go (emails, hosts, identities) or they may build a list of compromised resources to clean up all at once (Big Bang).

  • Clean as you go—For most typical incidents that are detected early in the attack operation, analysts quickly clean up the artifacts as we find them. This rapidly puts the adversary at a disadvantage and prevents them from moving forward with the next stage of their attack.
  • Prepare for a Big Bang—This approach is appropriate for a scenario where an adversary has already “settled in” and established redundant access mechanisms to the environment (frequently seen in incidents investigated by our Detection and Response Team (DART) at customers). In this case, analysts should avoid tipping off the adversary until all attacker presence has been discovered, as surprise can help with fully disrupting their operation. We have learned that partial remediation often tips off an adversary, which gives them a chance to react and rapidly make the incident worse (spread further, change access methods to evade detection, inflict damage/destruction for revenge, cover their tracks, etc.). Note that cleaning up phishing and malicious emails can often be done without tipping off the adversary, but cleaning up host malware and reclaiming control of accounts has a high chance of tipping them off.

These are not easy decisions to make, and we have found no substitute for experience in making these judgment calls. The collaborative work environment and culture we have built in our SOC helps immensely, as our analysts can tap into each other’s experience to help make these tough calls.

The specific response steps are very dependent on the nature of the attack, but the most common procedures used by our analysts include:

  • Client endpoints—SOC analysts can isolate a computer and contact the user directly (or IT operations/helpdesk) to have them initiate a reinstallation procedure.
  • Server or applications—SOC analysts typically work with IT operations and/or application owners to arrange rapid remediation of these resources.
  • User accounts—We typically reclaim control of these by disabling the account and resetting the password for compromised accounts (though these procedures are evolving as many of our users are now passwordless, using Windows Hello or another form of MFA). Our analysts also explicitly expire all authentication tokens for the user with Microsoft Cloud App Security (see the sketch after this list for one programmatic way to do this).
    Analysts also review the multi-factor phone number and device enrollment to ensure it hasn’t been hijacked (often contacting the user), and reset this information as needed.
  • Service Accounts—Because of the high risk of service/business impact, SOC analysts work with the service account owner of record (falling back on IT operations as needed) to arrange rapid remediation of these resources.
  • Emails—The attack/phishing emails are deleted (and sometimes cleared to prevent recovery of deleted emails), but we always save a copy of the original email in the case notes for later search and analysis (headers, content, scripts/attachments, etc.).
  • Other—Custom actions can also be executed based on the nature of the attack such as revoking application tokens, reconfiguring servers and services, and more.
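
As an illustration of the token-expiry step above: our SOC uses Microsoft Cloud App Security for this, but a minimal sketch of the same idea via the Microsoft Graph revokeSignInSessions action might look like the following (the Graph token and user ID are placeholders; this is not the SOC’s actual tooling):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def revoke_sessions(graph_token: str, user_id: str) -> None:
    # Invalidates the user's refresh tokens and session cookies,
    # forcing re-authentication on every device and app.
    url = f"{GRAPH}/users/{user_id}/revokeSignInSessions"
    resp = requests.post(url, headers={"Authorization": f"Bearer {graph_token}"})
    resp.raise_for_status()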

Automation and integration for the win

It’s hard to overstate the value of integrated tools and process automation, as these bring so many benefits—improving the analysts’ daily experience and improving the SOC’s ability to reduce organizational risk.

  • Analysts spend less time on each incident, reducing the attacker’s time to operate—as measured by mean time to remediate (MTTR).
  • Analysts aren’t bogged down in manual administrative tasks so they can react quickly to new detections (reducing mean time to acknowledge—MTTA).
  • Analysts have more time to engage in proactive activities that both reduce organization risk and increase morale by keeping them focused on the mission.

Our SOC has a long history of developing its own automation and scripts, built by a dedicated automation team, to make analysts’ lives easier. Because custom automation requires ongoing maintenance and support, we are constantly looking for ways to shift automation and integration to capabilities provided by Microsoft engineering teams (which also benefits our customers). While still early in this journey, this approach typically improves the analyst experience and reduces maintenance effort and challenges.

This is a complex topic that could fill many blogs, but this takes two main forms:

  • Integrated toolsets save analysts manual effort during incidents by allowing them to easily navigate multiple tools and datasets. Our SOC relies heavily on the integration of Microsoft Threat Protection (MTP) tools for this experience, which also saves the automation team from writing and supporting custom integration for this.
  • Automation and orchestration capabilities reduce manual analyst work by automating repetitive tasks and orchestrating actions between different tools. Our SOC currently relies on an advanced custom SOAR platform and is actively working with our engineering teams (MTP’s AutoIR capability and Azure Sentinel SOAR) on how to shift our learnings and workload onto those capabilities.

After the attacker operation has been fully disrupted, the analyst marks the case as remediated, which is the timestamp signaling the end of MTTR measurement (which started when the analyst began the active investigation in step 2 of the previous blog).

While having a security incident is bad, having the same incident repeated multiple times is much worse.

  2. Post-incident cleanup—Because lessons aren’t actually “learned” unless they change future actions, our analysts always integrate any useful information learned from the investigation back into our systems. Analysts capture these learnings so that we avoid repeating manual work in the future and can rapidly see connections between past and future incidents by the same threat actors. This can take a number of forms, but common procedures include:
    • Indicators of Compromise (IoCs)—Our analysts record any applicable IoCs such as file hashes, malicious IP addresses, and email attributes into our threat intelligence systems so that our SOC (and all customers) can benefit from these learnings.
    • Unknown or unpatched vulnerabilities—Our analysts can initiate processes to ensure that missing security patches are applied, misconfigurations are corrected, and vendors (including Microsoft) are informed of “zero day” vulnerabilities so that they can create security patches for them.
    • Internal actions such as enabling logging on assets and adding or changing security controls. 

Continuous improvement

So the adversary has now been kicked out of the environment and their current operation poses no further risk. Is this the end? Will they retire and open a cupcake bakery or auto repair shop? Not likely after just one failure, but we can consistently disrupt their successes by increasing the cost of attack and reducing the return, which will deter more and more attacks over time. For now, we must assume that adversaries will try to learn from what happened on this attack and try again with fresh ideas and tools.

Because of this, our analysts also focus on learning from each incident to improve their skills, processes, and tooling. This continuous improvement occurs through many informal and formal processes ranging from formal case reviews to casual conversations where they tell the stories of incidents and interesting observations.

As caseload allows, the investigation team also hunts proactively for adversaries when they are not on shift, which helps them stay sharp and grow their skills.

This closes our virtual shift visit for the investigation team. Join us next time as we shift to our Threat hunting team (a.k.a. Tier 3) and get some hard won advice and lessons learned.

…until then, share and enjoy!

P.S. If you are looking for more information on the SOC and other cybersecurity topics, check out previous entries in the series (Part 1 | Part 2a | Part 2b | Part 3a | Part 3b), Mark’s List (https://aka.ms/markslist), and our new security documentation site—https://aka.ms/securtydocs. Be sure to bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to Mark on LinkedIn or Twitter.

The post Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2 appeared first on Microsoft Security.

Mitigating vulnerabilities in endpoint network stacks

May 4th, 2020 No comments

The skyrocketing demand for tools that enable real-time collaboration, remote desktops for accessing company information, and other services that enable remote work underlines the tremendous importance of building and shipping secure products and services. While this is magnified as organizations are forced to adapt to the new environment created by the global crisis, it’s not a new imperative. Microsoft has been investing heavily in security, and over the years our commitment to building proactive security into products and services has only intensified.

To help deliver on this commitment, we continuously find ways to improve and secure Microsoft products. One aspect of our proactive security work is finding vulnerabilities and fixing them before they can be exploited. Our strategy is to take a holistic approach and drive security throughout the engineering lifecycle. We do this by:

  • Building security early into the design of features.
  • Developing tools and processes that proactively find vulnerabilities in code.
  • Introducing mitigations into Windows that make bugs significantly harder to exploit.
  • Having our world-class penetration testing team test the security boundaries of the product so we can fix issues before they can impact customers.

This proactive work ensures we are continuously making Windows safer and finding as many issues as possible before attackers can take advantage of them. In this blog post we will discuss a recent vulnerability that we proactively found and fixed and provide details on tools and techniques we used, including a new set of tools that we built internally at Microsoft. Our penetration testing team is constantly testing the security boundaries of the product to make it more secure, and we are always developing tools that help them scale and be more effective based on the evolving threat landscape. Our investment in fuzzing is the cornerstone of our work, and we are constantly innovating this tech to keep on breaking new ground.

Proactive security to prevent the next WannaCry

In the past few years, much of our team’s efforts have been focused on uncovering remote network vulnerabilities and preventing events like the WannaCry and NotPetya outbreaks. Some bugs we have recently found and fixed include critical vulnerabilities that could be leveraged to exploit common secure remote communication tools like RDP or create ransomware issues like WannaCry: CVE-2019-1181 and CVE-2019-1182 dubbed “DejaBlue“, CVE-2019-1226 (RCE in RDP Server), CVE-2020-0611 (RCE in RDP Client), and CVE-2019-0787 (RCE in RDP client), among others.

One of the biggest challenges we regularly face in these efforts is the sheer volume of code we analyze. Windows is enormous and continuously evolving: 5.7 million source code files, more than 3,500 developers, and 1,100 pull requests per day across 440 official branches. This rapid cadence and evolution allows us to add new features as well as proactively drive security into Windows.

Like many security teams, we frequently turn to fuzzing to help us quickly explore and assess large codebases. Innovations we’ve made in our fuzzing technology have made it possible to get deeper coverage than ever before, resulting in the discovery of new bugs, faster. One such bug is the remote code execution (RCE) vulnerability in Microsoft Server Message Block version 3 (SMBv3) tracked as CVE-2020-0796 and fixed on March 12, 2020.

In the following sections, we will share the tools and techniques we used to fuzz SMB, the root cause of the RCE vulnerability, and relevant mitigations to exploitation.

Fully deterministic person-in-the-middle fuzzing

We use a custom deterministic full system emulator tool we call “TKO” to fuzz and introspect Windows components. TKO provides the capability to perform full system emulation and memory snapshotting, as well as other innovations. As a result of its unique design, TKO provides several unique benefits for SMB network fuzzing:

  • The ability to snapshot and fuzz forward from any program state.
  • Efficiently restoring to the initial state for fast iteration.
  • Collecting complete code coverage across all processes.
  • Leveraging greater introspection into the system without too much perturbation.

While all of these actions are possible using other tools, our ability to seamlessly leverage them across both user and kernel mode drastically reduces the spin-up time for targets. To learn more, check out David Weston’s recent BlueHat IL presentation “Keeping Windows secure”, which touches on fuzzing, as well as the TKO tool and infrastructure.

Fuzzing SMB

Given the ubiquity of SMB and the impact demonstrated by SMB bugs in the past, assessing this network transfer protocol has been a priority for our team. While there have been past audits and fuzzers thrown against the SMB codebase, some of which postdate the current SMB version, TKO’s new capabilities and functionalities made it worthwhile to revisit the codebase. Additionally, even though the SMB version number has remained static, the code has not! These factors played into our decision to assess the SMB client/server stack.

After performing an initial audit pass of the code to understand its structure and dataflow, as well as to get a grasp of the size of the protocol’s state space, we had the information we needed to start fuzzing.

We used TKO to set up a fully deterministic feedback-based fuzzer with a combination of generated and mutated SMB protocol traffic. Our goal for generating or mutating across multiple packets was to dig deeper into the protocol’s state machine. Normally this would introduce difficulties in reproducing any issues found; however, our use of emulators made this a non-issue. New generated or mutated inputs that triggered new coverage were saved to the input corpus. Our team had a number of basic mutator libraries for different scenarios, but we needed to implement a generator. Additionally, we enabled some of the traditional Windows heap instrumentation using verifier, turning on page heap for SMB-related drivers.

We began work on the SMBv2 protocol generator and took a network capture of an SMB negotiation with the aim of replaying these packets with mutations against a Windows 10, version 1903 client. We added a mutator with basic mutations (e.g., bit flips, insertions, deletions, etc.) to our fuzzer and kicked off an initial run while we continued to improve and develop further.
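
As a rough illustration (not the TKO mutator itself), a basic bit-flip/insert/delete pass over a captured packet might look like this:

import random

def mutate(packet: bytes, rng: random.Random) -> bytes:
    # One round of the basic mutations described above: flip a bit,
    # insert a random byte, or delete a byte at a random position.
    if not packet:
        return packet
    data = bytearray(packet)
    op = rng.choice(("flip", "insert", "delete"))
    pos = rng.randrange(len(data))
    if op == "flip":
        data[pos] ^= 1 << rng.randrange(8)
    elif op == "insert":
        data.insert(pos, rng.randrange(256))
    elif len(data) > 1:
        del data[pos]
    return bytes(data)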

Figure 1. TKO fuzzing workflow

A short time later, we came back to some compelling results. Replaying the first crashing input with TKO’s kdnet plugin revealed the following stack trace:

> tkofuzz.exe repro inputs\crash_6a492.txt -- kdnet:conn 127.0.0.1:50002

Figure 2. Windbg stack trace of crash

We found an access violation in srv2!Smb2CompressionDecompress.

Finding the root cause of the crash

While the stack trace suggested that a vulnerability exists in the decompression routine, it’s the parsing of length counters and offsets from the network that causes the crash. The last packet in the transaction needed to trigger the crash has ‘\xfcSMB’ set as the first bytes in its header, making it a COMPRESSION_TRANSFORM packet.

Figure 3. COMPRESSION_TRANSFORM packet details

The SMBv2 COMPRESSION_TRANSFORM packet starts with a COMPRESSION_TRANSFORM_HEADER, which defines where in the packet the compressed bytes begin and the length of the compressed buffer.

typedef struct _COMPRESSION_TRANSFORM_HEADER
{
    UCHAR  Protocol[4];   // Contains 0xFC, 'S', 'M', 'B'
    ULONG  OriginalMessageSize;
    USHORT AlgorithmId;
    USHORT Flags;
    ULONG  Length;
} COMPRESSION_TRANSFORM_HEADER;

In srv2!Srv2DecompressData, shown in the graph below, we can see this COMPRESSION_TRANSFORM_HEADER struct being parsed out of the network packet and used to determine the pointers passed to srv2!SMBCompressionDecompress.

Figure 4. Srv2DecompressData graph

We can see that at 0x7e94, rax points to our network buffer, and the buffer is copied to the stack before the OriginalCompressedSegmentSize and Length are parsed out and added together at 0x7ED7 to determine the size of the resulting decompressed bytes buffer. Overflowing this value causes the decompression to write its results outside the bounds of the destination SrvNet buffer, resulting in an out-of-bounds write (OOBW).

Figure 5. Overflow condition

Looking further, we can see that the Length field is parsed into esi at 0x7F04, added to the network buffer pointer, and passed to CompressionDecompress as the source pointer. As Length is never checked against the actual number of received bytes, it can cause decompression to read off the end of the received network buffer. Setting this Length to be greater than the packet length also causes the computed source buffer length passed to SmbCompressionDecompress to underflow at 0x7F18, creating an out-of-bounds read (OOBR) vulnerability. Combining this OOBR vulnerability with the previous OOBW vulnerability creates the necessary conditions to leak addresses and create a complete remote code execution exploit.

Figure 6. Underflow condition
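
To make the arithmetic concrete, here is a simplified sketch of both conditions using Python integers masked to 32 bits to mimic the kernel's ULONG arithmetic (field names follow the description above; the values are illustrative, not taken from srv2.sys):

MASK32 = 0xFFFFFFFF

def alloc_size(original_size: int, length: int) -> int:
    # The two attacker-controlled header fields are added to size the
    # destination buffer; 32-bit wraparound yields a tiny allocation,
    # so decompression writes past its end (the OOBW).
    return (original_size + length) & MASK32

def source_len(packet_len: int, length: int) -> int:
    # Length is never checked against the bytes actually received, so
    # packet_len - Length underflows when Length > packet_len, giving
    # an enormous source size (the OOBR).
    return (packet_len - length) & MASK32

print(hex(alloc_size(0xFFFFFFF0, 0x20)))  # 0x10: 16-byte buffer for a huge payload
print(hex(source_len(0x100, 0x200)))      # 0xffffff00: reads far past the packet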

Windows 10 mitigations against remote network vulnerabilities

Our discovery of the SMBv3 vulnerability highlights the importance of revisiting protocol stacks regularly as our tools and techniques continue to improve over time. In addition to the proactive hunting for these types of issues, the investments we made in the last several years to harden Windows 10 through mitigations like address space layout randomization (ASLR), Control Flow Guard (CFG), InitAll, and hypervisor-enforced code integrity (HVCI) hinder trivial exploitation and buy defenders time to patch and protect their networks.

For example, turning vulnerabilities like the ones discovered in SMBv3 into working exploits requires finding writeable kernel pages at reliable addresses, a task that requires heap grooming and corruption, or a separate vulnerability in Windows kernel address space layout randomization (ASLR). Typical heap-based exploits taking advantage of a vulnerability like the one described here would also need to make use of other allocations, but Windows 10 pool hardening helps mitigate this technique. These mitigations work together and have a cumulative effect when combined, increasing the development time and cost of reliable exploitation.

Assuming attackers gain knowledge of our address space, indirect jumps are mitigated by kernel-mode CFG. This forces attackers to either use data-only corruption or bypass Control Flow Guard via stack corruption or yet another bug. If virtualization-based security (VBS) and HVCI are enabled, attackers are further constrained in their ability to map and modify memory permissions.

On Secured-core PCs these mitigations are enabled by default.  Secured-core PCs combine virtualization, operating system, and hardware and firmware protection. Along with Microsoft Defender Advanced Threat Protection, Secured-core PCs provide end-to-end protection against advanced threats.

While these mitigations collectively lower the chances of successful exploitation, we continue to deepen our investment in identifying and fixing vulnerabilities before they can get into the hands of adversaries.

 

The post Mitigating vulnerabilities in endpoint network stacks appeared first on Microsoft Security.

Microsoft Threat Protection leads in real-world detection in MITRE ATT&CK evaluation

May 1st, 2020 No comments

The latest round of MITRE ATT&CK evaluations proved yet again that Microsoft customers can trust they are fully protected, even in the face of an attack as advanced as APT29. When looking at protection results out of the box, without configuration changes, Microsoft Threat Protection (MTP):

  • Provided nearly 100 percent coverage across the attack chain stages.
  • Delivered leading out-of-box visibility into attacker activities, dramatically reducing manual work for SOCs vs. vendor solutions that relied on specific configuration changes.
  • Had the fewest gaps in visibility, diminishing attacker ability to operate undetected.

Beyond just detection and visibility, automation, prioritization, and prevention are key to stopping this level of advanced attack. During testing, Microsoft:

  • Delivered automated real-time alerts without the need for configuration changes or custom detections; Microsoft is one of only three vendors who did not make configuration changes or rely on delayed detections.
  • Flagged more than 80 distinct alerts, and used built-in automation to correlate these alerts into only two incidents that mirrored the two MITRE ATT&CK simulations, improving SOC analyst efficiency and reducing attacker dwell time and ability to persist.
  • Identified seven distinct steps during the attack in which our protection features, which were disabled during testing, would have automatically intervened to stop the attack.

Microsoft Threat Experts provided further in-depth context and recommendations for further investigation through our comprehensive in-portal forensics. The evaluation also showed that Microsoft Threat Protection goes beyond simple visibility into attacks: it records all stages of the attack in which MTP would have stepped in to block the attack and automatically remediate any affected assets.

While the test focused on endpoint detection and response, MITRE’s simulated APT29 attack spans multiple attack domains, creating opportunities to empower defenders beyond just endpoint protection. Microsoft expanded defenders’ visibility beyond the endpoint with Microsoft Threat Protection (MTP). MTP has been recognized by both Gartner and Forrester as having extended detection and response capabilities. MTP takes protection to the next level by combining endpoint protection from Microsoft Defender ATP (EDR) with protection for email and productivity tools (Office 365 ATP), identity (Azure ATP), and cloud applications (Microsoft Cloud App Security [MCAS]). Below, we share a deep-dive analysis of how MTP demonstrated visibility and detection advantages throughout the MITRE evaluation that only our solution can provide.

Incident-based approach enables real-time threat prioritization and remediation

Analyzing the MITRE evaluation results through the lens of breadth and coverage, as the diagrams below show, MTP provided exceptional coverage for all but one of the 19 tested attack stages. This means that in real life, the SOC would have received alerts and been given full visibility into each stage of the two simulated attack scenarios across initial access, deployment of tools, discovery, persistence, credential access, lateral movement, and exfiltration. In Microsoft Threat Protection, alerts carry with them rich context—including a detailed process tree showing the recorded activities (telemetry) that led to the detection, the assets involved, all supporting evidence, as well as a description of what the alert means and recommendations for SOC action. Note that true alerts are attributed in the MITRE evaluation with the “Alert” modifier, and not all items marked as “Tactic” or “Technique” are actual alerts.

Figure 1: MTP detection coverage across the attack kill-chain stages, with block opportunities.

Note: Step 10, persistence execution, is registered as a miss due to a software bug, discovered during the test, that restricted visibility on this step. These evaluations are a valuable opportunity to continually improve our product, and this bug was fixed shortly after testing completed.

The MITRE APT29 evaluation focused solely on detection of an advanced attack; it did not measure whether or not participants were able to also prevent an attack. However, we believe that real-world protection is more than just knowing that an attack occurred—prevention of the attack is a critical element. While protections were intentionally turned off to allow the complete simulation to run, using the audit-only prevention configuration, MTP also captured and documented where the attack would have been completely prevented, including, as shown in the diagram above, the very start of the breach, if protections had been left on.

Microsoft Threat Protection also demonstrated how it promotes SOC efficiency and reduces attacker dwell time and sprawl. SOC alert fatigue is a serious problem; raising a large volume of alerts to investigate does not help SOC analysts understand where to devote their limited time and resources. Detection and response products must prioritize the most important attacker actions with the right context in near real time.

In contrast to alert-only approaches, MTP’s incident-based approach automatically identifies complex links between attacker activities in different domains including endpoint, identity, and cloud applications at an altitude that only Microsoft can provide because we have optics into each of these areas. In this scenario, MTP connected seemingly unrelated alerts using supporting telemetry across domains into just two end-to-end incidents, dramatically simplifying prioritization, triage, and investigation. In real life, this also simplifies automated response and enables SOC teams to scale capacity and capabilities. MITRE addresses a similar problem with the “correlated” modifier on telemetry and alerts but does not reference incidents (just yet).

Figure 2: MTP portal showing 2nd day attack incident including correlated alerts and affected assets.

Figure 3: 2nd day incident with all correlated alerts for SOC efficiency, and the attack incident graph.

Microsoft is the leader in out-of-the-box performance

Looking simply at the number of simulation steps covered—or, alternatively, at the number of steps with no coverage, where fewer is better—the MITRE evaluation showed that MTP provided the best protection with zero delays or configuration changes.

Microsoft believes protection must be durable without requiring a lot of SOC configuration changes (especially during an ongoing attack), and it should not create friction by delivering false positives.

The chart below shows Microsoft as the vendor with the fewest steps categorized as “None” (also referred to as “misses”) out of the box. The chart also shows the number of detections marked with the “Configuration Change” modifier, which some vendors relied on quite considerably, as well as delayed detections (the “Delayed” modifier), which indicate in-flight modifications and latency in detections.

Microsoft is one of only three vendors that made no modifications or had any delays during the test.

Similarly, when looking at visibility and coverage for the 57 MITRE ATT&CK techniques replicated during this APT29 simulation, Microsoft’s coverage shows top performance at 95 percent of the techniques covered, as shown in the chart below.

A product’s coverage of techniques is an important consideration for customers when evaluating security solutions, often with specific attacker(s) in mind, which in turn determines the attacker techniques they are most concerned with and, consequently, the coverage they most care about.

Figure 5: Coverage across all attack techniques in the evaluation.

MTP provided unique detection and visibility across identity, cloud, and endpoints

The powerful capabilities of Microsoft Threat Protection originate from unique signals not just from endpoints but also from identity and cloud apps. This combination of capabilities provides coverage where other solutions may lack visibility. Below are three examples of sophisticated attacks simulated during the evaluation that span across domains (i.e., identity, cloud, endpoint) and showcase the unique visibility and unmatched detections provided by MTP:

  • Detecting the most dangerous lateral movement attack: Golden Ticket—Unlike other vendors, MTP’s unique approach for detecting Golden Ticket attacks does not solely rely on endpoint-based command-line sequences, PowerShell strings like “Invoke-Mimikatz”, or DLL-loading heuristics that can all be evaded by advanced attackers. MTP leverages direct optics into the Domain Controller via Azure ATP, the identity component of MTP. Azure ATP detects Golden Ticket attacks using a combination of machine learning and protocol heuristics by looking at anomalies such as encryption downgrade, forged authorization data, nonexistent account, ticket anomaly, and time anomaly. MTP is the only product that provided the SOC context of the encryption downgrade, together with the source and target machines, resources accessed, and the identities involved.
  • Exfiltration over alternative protocol: Catching and stopping attackers as they move from endpoint to cloud—MTP leverages exclusive signal from Microsoft Cloud App Security (MCAS), the cloud access security broker (CASB) component of MTP, which provides visibility and alerts for a large variety of cloud services, including OneDrive. Using the MCAS Conditional Access App Control mechanism, MTP was able to monitor cloud traffic for data exfiltration and raise an automatic alert when a ZIP archive with stolen files was exfiltrated to a remote OneDrive account controlled by the attacker. It is important to note the OneDrive account used by MITRE Redteam was unknown to the Microsoft team prior to being automatically detected during the evaluation.
  • Uncovering Remote System Discovery attacks that abuse LDAP—Preceding lateral movement, attackers commonly abuse the Lightweight Directory Access Protocol (LDAP) protocol to query user groups and user information. Microsoft introduced a powerful new sensor for unique visibility of LDAP queries, aiding security analyst investigation and allowing detection of suspicious patterns of LDAP activity. Through this sensor, Microsoft Defender ATP, the endpoint component of MTP, avoids reliance on PowerShell strings and snippets. Rather, Microsoft Defender ATP uses the structure and fields of each LDAP query originating from the endpoint to the Domain Controller (DC) to spot broad requests or suspicious queries for accounts and groups. Where possible, MTP also combines and correlates LDAP attacks detected on the endpoint by Microsoft Defender ATP with LDAP events seen on the DC by Azure ATP.

Figure 6: Golden Ticket alert based on optics on Domain Controller activity.

Figure 7: Suspicious LDAP activity detected using deep native OS sensor.

Microsoft Threat Experts: Threat context and hunting skills when and where needed

In this edition of MITRE ATT&CK evaluation, for the first time, Microsoft products were configured to take advantage of the managed threat hunting service Microsoft Threat Experts. Microsoft Threat Experts provides proactive hunting for the most important threats in the network, including human adversary intrusions, hands-on-keyboard attacks, or advanced attacks like cyberespionage. During the evaluation, the service operated with the same strategy normally used in real customer incidents: the goal is to send targeted attack notifications that provide real value to analysts with contextual analysis of the activities. Microsoft Threat Experts enriches security signals and raises the risk level appropriately so that the SOC can focus on what’s important, and breaches don’t go unnoticed.

Microsoft Threat Experts notifications stand out among participating vendors because they are fully integrated into the experience, incorporated into relevant incidents, and connected to relevant events, alerts, and other evidence. Microsoft Threat Experts enables SOC teams to seamlessly receive and merge additional data and recommendations in the context of an incident investigation.

Figure 8: Microsoft Threat Experts alert integrates into the portal and provides hyperlinked rich context.

Transparency in testing is key to threat detection, prevention

Microsoft Threat Protection delivers real-world detection, response, and, ultimately, protection from advanced attacks, as demonstrated in the latest MITRE evaluation. Core to MITRE’s testing approach is emulating real-world attacks to understand whether solutions are able to adequately detect and respond to them. We saw that Microsoft Threat Protection provided clear detection across all categories and delivered additional context that shows the full scope of impact across an entire environment. MTP empowers customers not only to detect attacks but also to easily return to a secured state with automated remediation. As is true in the real world, our human Threat Experts were available on demand to provide additional context and help.

We thank MITRE for the opportunity to contribute to the test with unique threat intelligence that only three participants stepped forward to share. Our unique intelligence and breadth of signal and visibility across the entire environment is what enables us to continuously score top marks. We look forward to participating in the next evaluation, and we welcome your feedback and partnership throughout our journey.

Thanks,

Moti and the entire Microsoft Threat Protection team


The post Microsoft Threat Protection leads in real-world detection in MITRE ATT&CK evaluation appeared first on Microsoft Security.

Zero Trust Deployment Guide for Microsoft Azure Active Directory

April 30th, 2020 No comments

Microsoft is providing a series of deployment guides for customers who have engaged in a Zero Trust security strategy. In this guide, we cover how to deploy and configure Azure Active Directory (Azure AD) capabilities to support your Zero Trust security strategy.

For simplicity, this document will focus on ideal deployments and configuration. We will call out the integrations that need Microsoft products other than Azure AD and we will note the licensing needed within Azure AD (Premium P1 vs P2), but we will not describe multiple solutions (one with a lower license and one with a higher license).

Azure AD at the heart of your Zero Trust strategy

Azure AD provides critical functionality for your Zero Trust strategy. It enables strong authentication, a point of integration for device security, and the core of your user-centric policies to guarantee least-privileged access. Azure AD’s Conditional Access capabilities are the policy decision point for access to resources based on user identity, environment, device health, and risk—verified explicitly at the point of access. In the following sections, we will showcase how you can implement your Zero Trust strategy with Azure AD.

Establish your identity foundation with Azure AD

A Zero Trust strategy requires that we verify explicitly, use least privileged access principles, and assume breach. Azure Active Directory can act as the policy decision point to enforce your access policies based on insights on the user, device, target resource, and environment. To do this, we need to put Azure Active Directory in the path of every access request—connecting every user and every app or resource through this identity control plane. In addition to productivity gains and improved user experiences from single sign-on (SSO) and consistent policy guardrails, connecting all users and apps provides Azure AD with the signal to make the best possible decisions about the authentication/authorization risk.

  • Connect your users, groups, and devices:
    Maintaining a healthy pipeline of your employees’ identities as well as the necessary security artifacts (groups for authorization and devices for extra access policy controls) puts you in the best place to use consistent identities and controls, which your users already benefit from on-premises and in the cloud:

    1. Start by choosing the right authentication option for your organization. While we strongly prefer to use an authentication method that primarily uses Azure AD (to provide you the best brute force, DDoS, and password spray protection), follow our guidance on making the decision that’s right for your organization and your compliance needs.
    2. Only bring the identities you absolutely need. For example, use going to the cloud as an opportunity to leave behind service accounts that only make sense on-premises; leave on-premises privileged roles behind (more on that under privileged access), etc.
    3. If your enterprise has more than 100,000 users, groups, and devices combined, we recommend you follow our guidance on building a high-performance sync box that will keep your life cycle up to date.
  • Integrate all your applications with Azure AD:
    As mentioned earlier, SSO is not only a convenient feature for your users; it’s also a security measure, as it prevents users from leaving copies of their credentials in various apps and helps them avoid getting used to surrendering their credentials due to excessive prompting. Make sure you do not have multiple IAM engines in your environment. Not only does this diminish the amount of signal that Azure AD sees and allow bad actors to live in the seams between the two IAM engines, it can also lead to poor user experience and your business partners becoming the first doubters of your Zero Trust strategy. Azure AD supports a variety of ways you can bring apps to authenticate with it:

    1. Integrate modern enterprise applications that speak OAuth2.0 or SAML.
    2. For Kerberos and Form-based auth applications, you can integrate them using the Azure AD Application Proxy.
    3. If you publish your legacy applications using application delivery networks/controllers, Azure AD is able to integrate with most of the major ones (such as Citrix, Akamai, F5, etc.).
    4. To help migrate your apps off of existing/older IAM engines, we provide a number of resources—including tools to help you discover and migrate apps off of ADFS.
  • Automate provisioning to applications:
    Once you have your users’ identities in Azure AD, you can now use Azure AD to power pushing those user identities into your various cloud applications. This gives you a tighter identity lifecycle integration within those apps. Use this detailed guide to deploy provisioning into your SaaS applications.
  • Get your logging and reporting in order:
    As you build your estate in Azure AD with authentication, authorization, and provisioning, it’s important to have strong operational insights into what is happening in the directory. Follow this guide to learn how to persist and analyze the logs from Azure AD either in Azure or using a SIEM system of your choice. A minimal sketch of pulling these logs programmatically follows this list.
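
To make the logging and reporting step concrete, here is a minimal sketch of pulling recent Azure AD sign-in events from the Microsoft Graph reporting API (the auditLogs/signIns endpoint) for offline analysis. It assumes an app registration granted the AuditLog.Read.All permission; get_token() is a placeholder for however you acquire a Graph access token (for example, via MSAL), not a real library call.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def fetch_recent_sign_ins(token: str, page_size: int = 50) -> list:
        """Pull Azure AD sign-in events, following Graph's server-side paging."""
        url = f"{GRAPH}/auditLogs/signIns?$top={page_size}"
        sign_ins = []
        while url:
            resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
            resp.raise_for_status()
            body = resp.json()
            sign_ins.extend(body.get("value", []))
            url = body.get("@odata.nextLink")  # absent on the last page; bound this in practice
        return sign_ins

    # token = get_token()  # placeholder: acquire via MSAL or your secret store
    # for s in fetch_recent_sign_ins(token):
    #     print(s["createdDateTime"], s["userPrincipalName"], s["ipAddress"])

Persisting these records into a SIEM of your choice gives your SOC the raw material for the detections discussed in the rest of this guide.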

Enacting the 1st principle: least privilege

Giving the right access at the right time to only those who need it is at the heart of a Zero Trust philosophy:

  • Plan your Conditional Access deployment:
    Planning your Conditional Access policies in advance—and having a set of active and fallback policies—is a foundational pillar of Access Policy enforcement in a Zero Trust deployment. Take the time to configure your trusted IP locations in your environment. Even if you do not use them in a Conditional Access policy, configuring these IPs informs the risk assessments of Azure AD Identity Protection. Check out our deployment guidance and best practices for resilient Conditional Access policies. (A minimal policy-creation sketch follows this list.)
  • Secure privileged access with privileged identity management:
    Privileged access takes a different track from meeting end users where they are: you typically want to control the devices, conditions, and credentials that users employ to access privileged operations/roles. Keep in mind that in a digitally transformed organization, privileged access is not only administrative access; it is also application owner or developer access that can change the way your mission-critical apps run and handle data. Check out our detailed guide on how to use Privileged Identity Management (P2) to take control of your privileged identities and secure them.
  • Restrict user consent to applications:
    User consent to applications is a very common way for modern applications to get access to organizational resources. However, we recommend you restrict user consent and manage consent requests so that your organization’s data is not unnecessarily exposed to apps. This also means you need to review prior/existing consent in your organization for any excessive or malicious grants.
  • Manage entitlements (Azure AD Premium P2):
    With applications centrally authenticating and driven from Azure AD, you should streamline your access request, approval, and recertification process to make sure that the right people have the right access and that you have a trail of why users in your organization have the access they have. Using entitlement management, you can create access packages that users can request as they join different teams/projects and that assign them access to the associated resources (applications, SharePoint sites, group memberships). Check out how you can create your first access package. If deploying entitlement management is not possible for your organization at this time, we recommend you at least enable self-service paradigms by deploying self-service group management and self-service application access.
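
As referenced in the Conditional Access item above, policies can also be managed programmatically through the Microsoft Graph conditional access API. The sketch below creates a policy in report-only mode that requires MFA for all users; it is a minimal illustration under stated assumptions (a token carrying the Policy.ReadWrite.ConditionalAccess permission, with get_token() as a placeholder), not production guidance.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def create_report_only_mfa_policy(token: str) -> dict:
        """Create a Conditional Access policy requiring MFA, in report-only mode."""
        policy = {
            "displayName": "Require MFA for all users (report-only)",
            # Report-only logs what the policy WOULD do without enforcing it.
            "state": "enabledForReportingButNotEnforced",
            "conditions": {
                "clientAppTypes": ["all"],
                "applications": {"includeApplications": ["All"]},
                "users": {"includeUsers": ["All"]},
            },
            "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
        }
        resp = requests.post(
            f"{GRAPH}/identity/conditionalAccess/policies",
            headers={"Authorization": f"Bearer {token}"},
            json=policy,
        )
        resp.raise_for_status()
        return resp.json()

    # policy = create_report_only_mfa_policy(get_token())  # get_token() is a placeholder

Starting in report-only mode lets you observe a policy’s impact on real sign-ins before enforcing it, which pairs naturally with the active/fallback policy planning described above.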

Enacting the 2nd principle: verify explicitly

Provide Azure AD with a rich set of credentials and controls that it can use to verify the user at all times.

  • Roll out Azure multi-factor authentication (MFA) (P1):
    This is a foundational piece of reducing user session risk. As users appear on new devices and from new locations, being able to respond to an MFA challenge is one of the most direct ways your users can prove that these are familiar devices/locations as they move around the world (without administrators having to parse individual signals). Check out this deployment guide.
  • Enable Azure AD Hybrid Join or Azure AD Join:
    If you manage the user’s laptop/computer, bring that information into Azure AD and use it to make better decisions. For example, you may choose to allow rich client access to data (clients that keep offline copies on the computer) if you know the user is coming from a machine that your organization controls and manages. If you do not bring this information in, you will likely have to block access from rich clients, which may result in your users working around your security or using shadow IT. Check out our resources for Azure AD Hybrid Join or Azure AD Join.
  • Enable Microsoft Intune for managing your users’ mobile devices (EMS):
    The same can be said for users’ mobile devices as for their laptops: the more you know about them (patch level, jailbroken, rooted, etc.), the more you are able to trust or mistrust them and provide a rationale for blocking or allowing access. Check out our Intune device enrollment guide to get started.
  • Start rolling out passwordless credentials:
    With Azure AD now supporting FIDO2 and passwordless phone sign-in, you can move the needle on the credentials that your users (especially sensitive/privileged users) rely on day to day. These credentials are strong authentication factors that can mitigate risk as well. Our passwordless authentication deployment guide walks you through how to roll out passwordless credentials in your organization.

Enacting the 3rd principle: assume breach

Assume that attackers can and will get through your defenses; the controls below help you detect compromise quickly and limit the damage a breached account or device can do.

  • Deploy Azure AD Password Protection:
    While enabling other methods to verify users explicitly, you should not forget about weak passwords, password spray, and breach replay attacks. Read this blog to find out why classic complex password policies fail to tackle the most prevalent password attacks. Then follow this guidance to enable Azure AD Password Protection for your users, in the cloud first and then on-premises as well.
  • Block legacy authentication:
    One of the most common attack vectors for malicious actors is to use stolen/replayed credentials against legacy protocols, such as SMTP, that cannot respond to modern security challenges. We recommend you block legacy authentication in your organization.
  • Enable identity protection (Azure AD Premium P2):
    Enabling identity protection for your users provides more granular session/user risk signal. You’ll be able to investigate risk and confirm compromise or dismiss the signal, which helps the engine better understand what risk looks like in your environment. (A minimal sketch of querying risky users programmatically follows this list.)
  • Enable restricted session to use in access decisions:
    To illustrate, let’s take a look at controls in Exchange Online and SharePoint Online (P1): When a user’s risk is low but they are signing in from an unknown device, you may want to allow them access to critical resources, but not allow them to do things that leave your organization in a non-compliant state. Now you can configure Exchange Online and SharePoint Online to offer the user a restricted session that allows them to read emails or view files, but not download them and save them on an untrusted device. Check out our guides for enabling limited access with SharePoint Online and Exchange Online.
  • Enable Conditional Access integration with Microsoft Cloud App Security (MCAS) (E5):
    Using signals emitted after authentication, and with MCAS proxying requests to applications, you will be able to monitor sessions going to SaaS applications and enforce restrictions. Check out our MCAS and Conditional Access integration guidance and see how this can even be extended to on-premises apps.
  • Enable Microsoft Cloud App Security (MCAS) integration with identity protection (E5):
    Microsoft Cloud App Security is a UEBA product that monitors user behavior inside SaaS and modern applications. This gives Azure AD signal and awareness about what happened to the user after they authenticated and received a token. If the user’s pattern starts to look suspicious (say, the user starts to download gigabytes of data from OneDrive or to send spam in Exchange Online), a signal can be fed to Azure AD flagging the user as compromised or high risk; on the next access request from that user, Azure AD can take the appropriate action to verify the user or block them. Just enabling MCAS monitoring will enrich the identity protection signal. Check out our integration guidance to get started.
  • Integrate Azure Advanced Threat Protection (ATP) with Microsoft Cloud App Security:
    Once you’ve successfully deployed and configured Azure ATP, enable the integration with Microsoft Cloud App Security to bring on-premises signal into the risk picture we maintain for the user. This lets Azure AD know when a user is engaging in risky behavior while accessing on-premises, non-modern resources (like file shares), which can then be factored into overall user risk to block further access in the cloud. You will be able to see a combined priority score for each user at risk, giving a holistic view of which ones your SOC should focus on.
  • Enable Microsoft Defender ATP (E5):
    Microsoft Defender ATP allows you to attest to the health of Windows machines—and whether they are undergoing a compromise—and feed that into risk mitigation at runtime. Whereas domain join gives you a sense of control, Defender ATP lets you react to malware attacks in near real time by detecting patterns where multiple user devices are hitting untrustworthy sites, and raising device/user risk in response. See our guidance on configuring Conditional Access in Defender ATP.
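
As referenced in the identity protection item above, here is a minimal sketch of pulling the users that Identity Protection currently scores as high risk, via the Microsoft Graph identityProtection/riskyUsers endpoint, so your SOC can triage them. It assumes Azure AD Premium P2 and a token carrying the IdentityRiskyUser.Read.All permission; get_token() is a placeholder.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def list_high_risk_users(token: str) -> list:
        """Return users whose Identity Protection risk level is currently high."""
        url = f"{GRAPH}/identityProtection/riskyUsers?$filter=riskLevel eq 'high'"
        risky = []
        while url:
            resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
            resp.raise_for_status()
            body = resp.json()
            risky.extend(body.get("value", []))
            url = body.get("@odata.nextLink")  # follow server-side paging
        return risky

    # for user in list_high_risk_users(get_token()):  # get_token() is a placeholder
    #     print(user["userPrincipalName"], user["riskLevel"], user["riskState"])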

Conclusion

We hope the above guides help you deploy the identity pieces central to a successful Zero Trust strategy. Make sure to check out the other deployment guides in the series by following the Microsoft Security blog.

The post Zero Trust Deployment Guide for Microsoft Azure Active Directory appeared first on Microsoft Security.

Managing risk in today’s IoT landscape: not a one-and-done

April 28th, 2020 No comments


The reality of securing IoT over time

It’s difficult to imagine any aspect of everyday life that isn’t affected by connectivity. The number of businesses using the Internet of Things (IoT) is growing at a fast pace: by 2021, approximately 94 percent of businesses will be using IoT. Connectivity empowers organizations to unlock the full potential of IoT—but it also introduces new cybersecurity attack vectors that they didn’t need to think about before. The reality is, connectivity comes at a cost: attackers with a wide range of motivations and skills are on the hunt, eager to exploit vulnerabilities or weak links in IoT. What does it take to manage those risks?

The cybersecurity threat landscape is ever evolving, so a solution’s protection must also evolve regularly to remain effective. Securing a device is neither a one-time action nor a problem that is solely technical in nature. Implementing robust security measures upfront is not enough—risks need to be mitigated not just once but continually, throughout the full lifespan of a device. Facing this threat landscape ultimately means acknowledging that organizations will have to confront the consequences of attacks and newfound vulnerabilities. The question is how to manage those risks beyond the technical measures that are in place.

A holistic approach to minimizing risk

Securing IoT devices against cyberattacks requires a holistic approach that complements up-front technical measures with ongoing practices that allow organizations to evaluate risks and establish a set of actions and policies that minimize threats over time. Cybersecurity is a multi-dimensional issue that requires the provider of an IoT solution to take several variables into account—it is not just the technology, but also the people who create and manage a product and the processes and practices they put in place, that will determine how resilient it is.

With Azure Sphere, we provide our customers with a robust defense that utilizes the evidence and learnings documented in the Seven Properties of Highly Secured Devices. One of the properties, renewable security, ensures that a device can update to a more secure state even after it has been compromised. As the threat landscape evolves, renewable security also enables us to counter new attack vectors through updates. This is essential, but not sufficient on its own. Our technology investments are enhanced through similar investments in security assurance and risk management that permeate all levels of an organization. The following sections highlight three key elements of our holistic approach to IoT security: continuous evaluation of our security promise, leveraging the power of the security community, and combining cyber and organizational resilience. 

Continuous evaluation of our security promise

All cyberattacks fall somewhere on a spectrum of complexity. On one side of the spectrum are simple, opportunistic attacks—off-the-shelf malware, for example, or attempts to steal data such as credentials—usually performed by attackers with limited resources. On the opposite side are threat actors that use highly sophisticated methods to target specific parts of a system; attackers in this category usually have ample resources and can pursue an attack over a longer period of time. Given the multitude of threats across this spectrum, it is important to keep in mind that they all have one thing in common: an attacker faces relatively low risk with potentially very large rewards.

Taking this into account, we believe that in order to protect our customers we need to practice being our own worst enemy. This means our goal is to discover any vulnerabilities before the bad guys do. One proven approach is to test our solution from the same perspective as an attacker. So-called “red teams” are designed to emulate the attacks of adversaries, whereas “purple teams” perform both attacking and defending to harden a product from within.

Our approach to red team exercises is to try to mimic the threat landscape that devices are actually facing. We do this multiple times a year and across the full Azure Sphere stack. This means that our customers benefit from the rigorous security testing of our platform and are able to focus on the security of their own applications. We work with the world’s most renowned security service providers to test our product with a real-world attacker mentality for an extended period of time and from multiple perspectives. In addition, we leverage the full power of Microsoft internal security expertise to conduct regular internal red and purple team exercises. The practice of constantly evaluating our defense and emulating the ever-evolving threat landscape is an important part of our security hygiene—allowing us to find vulnerabilities, update all devices, and mitigate incidents before they even happen.

Leveraging the power of the security community

Another approach to finding vulnerabilities before attackers do is to engage with the cybersecurity community through bounty programs. We encourage security researchers with an interest in Azure Sphere to search for any vulnerabilities and we reward them for it. While our approach to red team exercises ensures regular testing of how we secure Azure Sphere, we also believe in the advantages of the continual and diverse assessment by anyone who is interested, at any point in time.

Security researchers play a significant role in securing our billions of customers across Microsoft, and we encourage the responsible reporting of vulnerabilities based on our Coordinated Vulnerability Disclosure (CVD). We invite researchers from across the world to look for and report any vulnerability through our Microsoft Azure Bounty Program. Depending on the quality of submissions and the level of severity, we award successful reports with up to $40,000 USD. We believe that researchers should be rewarded competitively when they improve the security of our platform, and we maintain these important relationships for the benefit of our customers.

From a risk management perspective, both red and purple team exercises and bug bounties are helpful tools to minimize the risk of attacks. But what happens when an IoT solution provider is confronted with a newly discovered security vulnerability? Not every organization has a cybersecurity incident response plan in place—77 percent of businesses lack a consistently deployed plan. Finding vulnerabilities is important, but it is equally important to prepare employees and equip the organization with processes and practices that allow for a quick and efficient resolution as soon as a vulnerability is found.

Combining cyber and organizational resilience

Securing IoT is not just about preventing attackers from getting in; it’s also about how to respond when they do. Once the technical barrier has been breached, it is the resilience of the organization that the device has to fall back on. Therefore, it is essential to have a plan in place that allows your team to quickly respond and restore security. There are countless possible considerations and moving parts that must all fit together seamlessly as part of a successful cybersecurity incident response. Every organization is different and there is no one-size-fits-all, but a good place to start is with industry best practices such as the National Institute of Standards and Technology (NIST) Computer Security Incident Handling Guide. Azure Sphere’s standard operating procedures are aligned with those guidelines and also leverage Microsoft’s battle-tested corporate infrastructure.

The Microsoft Security Response Center (MSRC) has been at the front line of security response for more than twenty years. Over time we have learned what it means to successfully protect our customers from the harm caused by vulnerabilities in our products, and we are able to rapidly drive back attacks against our cloud infrastructure. Security researchers and customers are given an easy way to report any vulnerabilities, and MSRC’s best-in-class security experts monitor communications 24/7 to make sure we can fix issues as quickly as possible.

Your people are a critical asset—when they’re educated on how to respond when an incident occurs, their actions can make all the difference. In addition to MSRC capabilities that are available at any time, we require everyone involved in security incident response to undergo regular and extensive training. Trust is easy to build when things are going right. What really matters in the long term is how we build trust when things go wrong. Our security response practices have been defined with that in mind.

Our commitment to managing the risks you are facing

The world will be more connected than it has ever been, and we believe this requires a strong, holistic, and ongoing focus on cybersecurity. Defending against today’s and tomorrow’s IoT threat landscape is not a static game. It requires continual assessment of our promise to secure your IoT solutions, innovation that improves our defense over time, and working with you and the security community. As the threat landscape evolves, so will we. Azure Sphere’s mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. When you choose Azure Sphere, you can rely on our team and Microsoft to manage your risk so that you can focus on the true business value of your IoT solutions and products.

If you are interested in learning more about how Azure Sphere can help you securely unlock your next IoT innovation, visit the Azure Sphere website.

The post Managing risk in today’s IoT landscape: not a one-and-done appeared first on Microsoft Security.

Protecting your organization against password spray attacks

April 23rd, 2020 No comments

When hackers plan an attack, they often engage in a numbers game. They can invest significant time pursuing a single, high-value target—someone in the C-suite, for example—an approach known as “spear phishing.” Or, if they just need low-level access to gain a foothold in an organization or do reconnaissance, they target a huge volume of people and spend less time on each one—an approach called “password spray.” Last December, Seema Kathuria and I described an example of the first approach in Spear phishing campaigns—they’re sharper than you think! Today, I want to talk about a high-volume tactic: password spray.

In a password spray attack, adversaries “spray” passwords at a large volume of usernames. When I talk to security professionals in the field, I often contrast password spray with a brute force attack. Brute force is targeted: the hacker goes after specific users and cycles through as many passwords as possible, using either a full dictionary or one edited down to common passwords. An even more targeted password-guessing attack is when the hacker selects a person and conducts research to see if they can guess the user’s password—discovering family names through social media posts, for example—and then tries those variants against the account to gain access. Password spray is the opposite: adversaries acquire a list of accounts and attempt to sign into all of them using a small subset of the most popular, or most likely, passwords until they get a hit. This blog describes the steps adversaries use to conduct these attacks and how you can reduce the risk to your organization.

Three steps to a successful password spray attack

Step 1: Acquire a list of usernames

It starts with a list of accounts, which is easier to get than it sounds. Most organizations have a formal convention for emails, such as firstname.lastname@company.com, which allows adversaries to construct usernames from a list of employees. If the bad actor has already compromised an account, they may try to enumerate usernames against the domain controller. Or they find or buy usernames online; data can be compiled from past security breaches, online profiles, and so on. The adversary might even get some verified profiles for free!

Step 2: Spray passwords

Finding a list of common passwords is even easier. A Bing search reveals that publications list the most common passwords each year; 123456, password, and qwerty are typically near the top, and Wikipedia lists the top 10,000 passwords. There are regional differences that may be harder to discover, but many people use a favorite sports team, their state, or their company as a password. For example, Seahawks is a popular password choice in the Seattle area. Once hackers do their research, they carefully select a password and try it against the entire list of accounts, as shown in Figure 1. If the attack is not successful, they wait 30 minutes to avoid triggering a timeout and then try the next password. (A sketch of how defenders can spot this pattern in sign-in logs follows Figure 1.)


Figure 1:  Password spray using one password across multiple accounts.
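
The flip side of Figure 1 is that password spray leaves a telltale footprint in sign-in logs: many distinct accounts failing authentication from the same source in a short window. The sketch below illustrates that detection idea over a list of failed sign-in events; the event shape and the threshold are illustrative assumptions, not a description of any product’s detection logic.

    from collections import defaultdict

    def flag_spray_sources(failed_sign_ins: list, min_accounts: int = 20) -> dict:
        """Flag source IPs whose failed sign-ins span unusually many accounts.

        Each event is assumed to look like {"ip": "...", "user": "..."}.
        """
        accounts_per_ip = defaultdict(set)
        for event in failed_sign_ins:
            accounts_per_ip[event["ip"]].add(event["user"])
        # One user mistyping a password hits one account; a spray hits many.
        return {ip: users for ip, users in accounts_per_ip.items()
                if len(users) >= min_accounts}

    # A benign user failing three times, plus a simulated spray from one address:
    events = [{"ip": "198.51.100.4", "user": "alice"}] * 3
    events += [{"ip": "203.0.113.7", "user": f"user{i}"} for i in range(25)]
    print(flag_spray_sources(events))  # only 203.0.113.7 is flagged

Real attacks often rotate source IPs and stretch across days to stay under thresholds like this, which is why production detections correlate far more signal than this toy grouping.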

Step 3: Gain access

Eventually one of the passwords works against one of the accounts—and that’s what makes password spray a popular tactic: attackers only need one successful password + username combination. Once they have it, they can access whatever the user has access to, such as cloud resources on OneDrive, or use the exploited account to do internal reconnaissance on the target network and get deeper into the systems via elevation of privilege.

Even if the vast majority of your employees don’t use popular passwords, there is a risk that hackers will find the ones who do. The trick is to reduce the number of guessable passwords used at your organization.

Configure Azure Active Directory (Azure AD) Password Protection

Azure AD Password Protection allows you to eliminate easily guessed passwords and customize lockout settings for your environment. This capability includes a globally banned password list that Microsoft maintains and updates, and you can also block a custom list of passwords relevant to your region or company. Once enabled, users won’t be able to choose a password on either list, making it significantly less likely that an adversary can guess a user’s password. You can also use this feature to define how many sign-in attempts trigger a lockout and how long the lockout lasts. A toy sketch of the banned-password idea follows.
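
To give a feel for what banned-password checking involves, here is a toy sketch in the spirit of the documented approach: normalize common character substitutions, strip out banned tokens, and only accept a password if enough material remains. The banned list, substitution table, and scoring are simplified illustrative assumptions—this is not Azure AD Password Protection’s actual implementation.

    BANNED = {"password", "qwerty", "seahawks", "123456"}
    SUBS = str.maketrans({"0": "o", "1": "l", "$": "s", "@": "a", "!": "i"})

    def is_acceptable(candidate: str, min_score: int = 5) -> bool:
        """Score a candidate password; banned tokens contribute almost nothing."""
        normalized = candidate.lower().translate(SUBS)
        score, rest = 0, normalized
        for token in sorted(BANNED, key=len, reverse=True):
            if token in rest:
                score += 1            # a whole banned token counts as one point
                rest = rest.replace(token, "")
        score += len(rest)            # each remaining character counts as one point
        return score >= min_score

    print(is_acceptable("P@ssw0rd1"))  # False: normalizes to banned "password" plus one char
    print(is_acceptable("plum-teapot-47-drizzle"))  # True: no banned tokens, plenty left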

Simulate attacks with Office 365 Advanced Threat Protection (Office 365 ATP)

Attack Simulator in Office 365 ATP lets you run realistic but simulated phishing and password attack campaigns in your organization. Pick a password and then run the campaign against as many users as you want. The results will tell you how many people are using that password. Use the data to train users and to build your custom list of banned passwords.

Begin your passwordless journey

The best way to reduce your risk of password spray is to eliminate passwords entirely. Solutions like Windows Hello or FIDO2 security keys let users sign in using biometrics and/or a physical key or device. Get started by enabling Multi-Factor Authentication (MFA) across all your accounts. MFA requires that users sign in with at least two authentication factors: something they know (like a password or PIN), something they are (such as biometrics), and/or something they have (such as a trusted device).

Learn more

We make progress in cybersecurity by increasing how much it costs the adversary to conduct the attack. If we make guessing passwords too hard, hackers will reduce their reliance on password spray.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. For more information about our security solutions visit our website. Or reach out to me on LinkedIn or Twitter.

The post Protecting your organization against password spray attacks appeared first on Microsoft Security.

Defending the power grid against supply chain attacks: Part 3 – Risk management strategies for the utilities industry

April 22nd, 2020 No comments

Over the last fifteen years, attacks against critical infrastructure (Figure 1) have steadily increased in both volume and sophistication. Because of the strategic importance of this industry to national security and economic stability, these organizations are targeted by sophisticated, patient, and well-funded adversaries. Adversaries often target the utility supply chain to insert malware into devices destined for the power grid. As modern infrastructure becomes more reliant on connected devices, the power industry must continue to come together to improve security at every step of the process.


Figure 1: Increased attacks on critical infrastructure

This is the third and final post in the “Defending the power grid against supply chain attacks” series. In the first blog I described the nature of the risk. Last month I outlined how utility suppliers can better secure the devices they manufacture. Today’s advice is directed at the utilities. There are actions you can take as individual companies and as an industry to reduce risk.

Implement operational technology security best practices

According to Verizon’s 2019 Data Breach Investigations Report, 80 percent of hacking-related breaches are the result of weak or compromised passwords. If you haven’t implemented multi-factor authentication (MFA) for all your user accounts, make it a priority. MFA can significantly reduce the likelihood that a user with a stolen password can access your company assets. I also recommend you take these additional steps to protect administrator accounts:

  • Separate administrative accounts from the accounts that IT professionals use to conduct routine business. While administrators are answering emails or conducting other productivity tasks, they may be targeted by a phishing campaign. You don’t want them signed into a privileged account when this happens.
  • Apply just-in-time privileges to your administrator accounts. Just-in-time privileges require that administrators only sign into a privileged account when they need to perform a specific administrative task. These sign-ins go through an approval process and have a time limit. This will reduce the possibility that someone is unnecessarily signed into an administrative account.



Figure 2: A “blue” path depicts how a standard user account is used for non-privileged access to resources like email and web browsing and day-to-day work. A “red” path shows how privileged access occurs on a hardened device to reduce the risk of phishing and other web and email attacks. 

  • Set up privileged access workstations for administrative work. You don’t want an occasional security mistake—an administrator clicking a link while tired or distracted—to compromise a workstation that has direct access to these critical systems. A privileged access workstation provides a dedicated operating system with the strongest security controls for sensitive tasks, protecting these activities and accounts from the internet. To encourage administrators to follow security practices, make sure they also have easy access to a standard workstation for more routine tasks.

The following security best practices will also reduce your risk:

  • Whitelist approved applications. Define the list of software applications and executables that are approved to be on your networks, and block everything else. Especially target systems that are internet-facing, as well as Human-Machine Interface (HMI) systems, which play the critical role of managing generation, transmission, or distribution of electricity.
  • Regularly patch software and operating systems. Implement a monthly practice of applying security patches to software on all your systems. This includes applications and operating systems on servers, desktop computers, mobile devices, and network devices (routers, switches, firewalls, etc.), as well as Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices. Attackers frequently target known security vulnerabilities.
  • Protect legacy systems. Segment legacy systems that can no longer be patched by using firewalls to filter out unnecessary traffic. Limit access to only those who need it by applying Just-in-Time and Just-Enough-Access principles and requiring MFA. Once you set up these subnets, firewalls, and firewall rules to protect the isolated systems, you must continually audit and test the controls for inadvertent changes, and validate them with penetration testing and red teaming to identify rogue bridging endpoints and design/implementation weaknesses.
  • Segment your networks. If you are attacked, it’s important to limit the damage. By segmenting your network, you make it harder for an attacker to compromise more than one critical site. Keep your corporate network on its own segment with limited or no connection to critical sites like generation and transmission networks. Run each generating site on its own network with no connection to other generating sites. This ensures that should a generating site be compromised, attackers can’t easily traverse to other sites and have a greater impact.
  • Turn off all unnecessary services. Confirm that none of your software has automatically enabled a service you don’t need. You may also discover that there are services running that you no longer use. If the business doesn’t need a service, turn it off.
  • Deploy threat protection solutions. Services like Microsoft Threat Protection help you automatically detect, respond to, and correlate incidents across domains.
  • Implement an incident response plan: When an attack happens, you need to respond quickly to reduce the damage and get your organization back up and running. Refer to Microsoft’s Incident Response Reference Guide for more details.

Speak with one voice

Power grids are interconnected systems of generating plants, wires, transformers, and substations. Regional electrical companies work together to efficiently balance the supply and demand for electricity across the nation. These same organizations have also come together to protect the grid from attack. As an industry, working through organizations like the Edison Electric Institute (EEI), utilities can define security standards and hold manufacturers accountable to those requirements.

It may also be useful to work with the Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Corporation (NERC), or the United States Nuclear Regulatory Commission (U.S. NRC) to better regulate the security requirements of products manufactured for the electrical grid.

Apply extra scrutiny to IoT devices

As you purchase and deploy IoT devices, prioritize security. Be careful about purchasing products from countries that are motivated to infiltrate critical infrastructure. Conduct penetration tests against all new IoT and IIoT devices before you connect them to the network. When you place sensors on the grid, you’ll need to protect them from both cyberattacks and physical attacks. Make them hard to reach and tamper-proof.

Collaborate on solutions

Reducing the risk of a destabilizing power grid attack will require everyone in the utility industry to play a role. By working with manufacturers, trade organizations, and governments, electricity organizations can lead the effort to improve security across the industry. For utilities in the United States, several public-private programs are in place to enhance the utility industry’s capabilities to defend its infrastructure and respond to threats.

Read Part 1 in the series: “Defending the power grid against cyberattacks

Read “Defending the power grid against supply chain attacks: Part 2 – Securing hardware and software

Read how Microsoft Threat Protection can help you better secure your endpoints.

Learn how MSRC developed an incident response plan

Bookmark the Security blog to keep up with our expert coverage on security matters. For more information about our security solutions visit our website. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Defending the power grid against supply chain attacks: Part 3 – Risk management strategies for the utilities industry appeared first on Microsoft Security.

MITRE ATT&CK APT 29 evaluation proves Microsoft Threat Protection provides deeper end to end view of advanced threats

April 21st, 2020 No comments

As attackers use more advanced techniques, it’s even more important that defenders have visibility not just into each of the domains in their environment, but also across them, to piece together coordinated, targeted, and advanced attacks. This level of visibility allows us to get ahead of attackers and close the gaps through which they enter. To illustrate that imperative, the 2019 MITRE ATT&CK evaluation centered on an advanced nation-state threat actor known to the industry as Advanced Persistent Threat (APT) 29 (also known as Cozy Bear), which largely overlaps with the activity group that Microsoft calls YTTRIUM. The test involved a simulation of 58 attacker techniques in 10 kill chain categories.

Microsoft participated in the second MITRE ATT&CK endpoint detection product evaluation, published today. The evaluation is designed to test security products based on the ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) framework, which is highly regarded in the security industry as one of the most comprehensive catalogs of attacker techniques and tactics. Threat hunters use this framework to look for specific techniques that attackers often use to penetrate defenses. Testing that reflects an environment’s ability to monitor and detect malicious activity with the tools defenders have already deployed across an organization is critical.

Although this test was focused on endpoint detection and response, MITRE ran the simulated APT29 attack from end to end and across multiple attack domains, meaning defenders benefited from visibility beyond just endpoint protection. This gave Microsoft the unique opportunity to bring Microsoft Threat Protection (MTP) to the test.

Microsoft Threat Protection expands Microsoft Defender ATP from endpoint detection and response (EDR) to an extended detection and response (XDR) solution, combining protection for endpoints (Microsoft Defender ATP), email and productivity tools (Office 365 ATP), identity (Azure ATP), and cloud applications (Microsoft Cloud App Security/MCAS). As customers face attacks across endpoints, cloud, applications, and identities, MTP looks across these domains to understand the entire chain of events, identifies affected assets—users, endpoints, mailboxes, applications—and auto-heals them back to a safe state.

Microsoft Threat Protection delivers coverage across the entire kill chain, not just the endpoint

To fully execute the end-to-end attack simulation of APT29, MITRE required participants to turn off all proactive protection and blocking capabilities. For Microsoft Threat Protection, this meant that all the capabilities that would normally block this kind of attack—such as automatic remediation flows, application isolation, attack surface reduction, network protection, exploit protection, controlled folder access, and next-gen antivirus prevention—were turned off. However, Microsoft Threat Protection’s audit capabilities for these features recorded the many points during the attack at which MTP (had it been fully enabled) would have prevented or blocked execution, likely stopping the attack in its tracks.

During this evaluation, Microsoft Threat Protection delivered deep and broad optics, near-real-time detection through automation, and a complete, end-to-end view of the attack story. Here is how Microsoft Threat Protection stood out:

  • Depth and breadth of optics: Our uniquely integrated operating system, directory, and cloud sensors contributed deep and broad telemetry coverage. AI-driven, cloud-powered models collaborating across domains identified malicious activities and raised alerts on attacker techniques across the entire attack kill chain:
    • Microsoft Defender ATP recorded and alerted on endpoint activities including advanced file-less techniques, privilege escalation, and credential theft and persistence – leveraging deep sensors like AMSI, WMI, and LDAP.
    • Azure ATP watched and detected account compromise at the domain level, and lateral movement, such as pass-the-hash and the more sophisticated pass-the-ticket (Golden Ticket attack).
    • Microsoft Cloud App Security identified exfiltration of data to the cloud (OneDrive).
  • Detection and containment in near real time: Nation-state attacks of this magnitude can take place over the course of as little as a few hours, which means that Security Operations Centers (SOCs) often have little to no time to respond. Near-real-time automated detection of advanced techniques is critical to address this challenge. Where possible, active blocking, prevention, and automatic containment make the difference between an attempted and a successful compromise. MTP’s prevention capabilities, along with fast detection and behavioral blocking, are designed for exactly this purpose.
  • A complete attack story: Throughout this evaluation, Microsoft Defender ATP, Azure ATP, and Microsoft Cloud App Security, combined with the expertise of Microsoft Threat Experts, generated nearly 80 alerts—and for SOC teams, manually following up on each of these alerts would be overwhelming. MTP consolidated the alerts into just two incidents, dramatically reducing the triage and investigation work needed. This gives the SOC the ability to prioritize and address the incident as a whole, and enables streamlined triage, investigation, and automated response against the complete attack. With MTP we have built in automation that identifies the complex links between attacker activities and builds correlations across domains, piecing together the attack story with all of its related alerts, telemetry, evidence, and affected assets into coherent incidents. These comprehensive incidents are then prioritized and escalated to the SOC.


Microsoft Threat Experts, our managed threat hunting service, also participated in the evaluation this year. Our security experts watched over the signals collected in real time and generated comprehensive, complementary alerts, which enriched the automated detections with additional details, insights and recommendations for the SOC.

Real world testing is critical

Attackers are using advanced, persistent, and intelligent techniques to penetrate today’s defenses. This method of testing leans heavily into real-world exploitations rather than those found solely in a lab or simulated testing environment. Having been part of the inaugural round of the MITRE ATT&CK evaluation in 2018, Microsoft enthusiastically took on the challenge again, as we believe this to be a great opportunity, alongside listening to customers and investing in research, to continuously drive our security products to excellence and protect our customers.

This year, for the first time, we were happy to answer the community call from MITRE, alongside other security vendors, to contribute unique threat intelligence and research content about APT29, as well as in evolving the evaluation based on the experience and feedback from last year, yielding a very collaborative and productive process.

Thank you to MITRE and our customers and partners for your partnership in helping us deliver more visibility and automated protection, detection, response, and prevention of threats for our customers.

– Moti Gindi, CVP, Microsoft Threat Protection

The post MITRE ATT&CK APT 29 evaluation proves Microsoft Threat Protection provides deeper end to end view of advanced threats appeared first on Microsoft Security.

NERC CIP Compliance in Azure vs. Azure Government cloud

April 20th, 2020 No comments

As discussed in my last blog post on North American Electric Reliability Corporation—Critical Infrastructure Protection (NERC CIP) Compliance in Azure, U.S. and Canadian utilities are now free to benefit from cloud computing in Azure for many NERC CIP workloads. Machine learning, multiple data replicas across fault domains, active failover, quick deployment, and pay-for-use benefits are now available for these NERC CIP workloads.

Good candidates include a range of predictive maintenance, asset management, planning, modelling and historian systems as well as evidence collection systems for NERC CIP compliance itself.

It’s often asked whether a utility must use Azure Government Cloud (“Azure Gov”) as opposed to Azure public cloud (“Azure”) to host its NERC CIP-compliant workloads. The short answer is that both are an option. There are several factors that bear on the choice.

U.S. utilities can use Azure and Azure Gov for NERC CIP workloads. Canadian utilities can use Azure.

There are some important differences that should be understood when choosing an Azure cloud for deployment.

Azure and Azure Gov are separate clouds, physically isolated from each other. They both offer U.S. regions. All data replication for both can be kept within the U.S.

Azure also offers two Canadian regions, one in Ontario and one in Quebec, with data stored exclusively in Canada.

Azure Gov is only available to verified U.S. federal, state, and local government entities and some of their partners and contractors. It has four regions: Virginia, Iowa, Arizona, and Texas. Azure Gov is available to U.S.-based NERC Registered Entities.

We are working toward feature parity between Azure and Azure Gov. A comparison is provided here.

The security controls are the same for Azure and Azure Gov clouds. All U.S. Azure regions are now approved for FedRAMP High impact level.

Azure Gov provides additional assurances regarding U.S. government-specific background screening requirements. One of these is verification that Azure Gov operations personnel with potential access to Customer Data are U.S. persons. Azure Gov can also support customers subject to certain export controls laws and regulations. While not a NERC CIP requirement, this can impact U.S. utility customers.


Under NERC CIP-004, utilities are required to conduct background checks.

Microsoft U.S. Employee Background Screening

Microsoft’s background checks for both Azure and Azure Gov exceed the requirements of CIP-004.

NERC is not prescriptive on the background check that a utility must conduct as part of its compliance policies.

A utility may have a U.S. citizenship requirement as part of its CIP-004 compliance policy which covers both its own staff and the operators of its cloud infrastructure. Thus, if a utility needs U.S. citizens operating its Microsoft cloud in order to meet its own CIP-004 compliance standards, it can use Azure Gov for this purpose.

A utility may have nuclear assets that subject it to U.S. Department of Energy export control requirements (DOE 10 CFR Part 810) on Unclassified Controlled Nuclear Information. This rule covers more than the export of nuclear technology outside the United States; it also covers the transmission of protected information or technology to foreign persons inside the U.S. (e.g., employees of the utility and employees of the utility’s cloud provider).

Since access to protected information could be necessary to facilitate a support request, this should be considered if the customer has DOE export control obligations. Though the NERC assets themselves may be non-nuclear, the utility’s policy set may extend to its entire fleet and workforce regardless of generation technology. Azure Gov, which requires that all its operators be U.S. citizens, would facilitate this requirement.

Azure makes the operational advantages, increased security and cost savings of the cloud available for many NERC CIP workloads. Microsoft provides Azure and Azure Gov clouds for our customers’ specific needs.  Microsoft continues its work with regulators to make our cloud available for more workloads, including those requiring compliance with NERC CIP standards. The utility (Registered Entity) is ultimately responsible for NERC CIP compliance and Microsoft continues to work with customers and partners to simplify the efforts to prepare for audits.

Thanks to Larry Cochrane and Stevan Vidich for their leadership on Microsoft’s NERC CIP compliance viewpoint and architecture. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. To learn more about our Security solutions visit our website.

 

(c) 2020 Microsoft Corporation. All rights reserved. This document is provided “as-is.” Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. This document is not intended to communicate legal advice or a legal or regulatory compliance opinion. Each customer’s situation is unique, and legal and regulatory compliance should be assessed in consultation with their legal counsel.

The post NERC CIP Compliance in Azure vs. Azure Government cloud appeared first on Microsoft Security.

Secure the software development lifecycle with machine learning

April 16th, 2020 No comments

Every day, software developers stare down a long list of features and bugs that need to be addressed. Security professionals try to help by using automated tools to prioritize security bugs, but too often engineers waste time on false positives or miss a critical security vulnerability that has been misclassified. To tackle this problem, data science and security teams came together to explore how machine learning could help. We discovered that by pairing machine learning models with security experts, we can significantly improve the identification and classification of security bugs.

At Microsoft, 47,000 developers generate nearly 30,000 bugs a month. These items get stored across more than 100 Azure DevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem; however, large volumes of semi-curated data are perfect for machine learning. Since 2001, Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time, and accurately identifies critical, high-priority security bugs 97 percent of the time. This is an overview of how we did it.

Qualifying data for supervised learning

Our goal was to build a machine learning system that classifies bugs as security/non-security and critical/non-critical with a level of accuracy as close as possible to that of a security expert. To accomplish this, we needed a high volume of good data. In supervised learning, machine learning models learn how to classify data from pre-labeled data. We planned to feed our model lots of bugs labeled as security and others that aren’t. Once the model was trained, it would be able to use what it learned to label data that was not pre-classified. To confirm that we had the right data to effectively train the model, we answered four questions:

  • Is there enough data? Not only do we need a high volume of data, we also need data that is general enough and not fitted to a small number of examples.
  • How good is the data? If the data is noisy it means that we can’t trust that every pair of data and label is teaching the model the truth. However, data from the wild is likely to be imperfect. We looked for systemic problems rather than trying to get it perfect.
  • Are there data usage restrictions? Are there reasons, such as privacy regulations, that we can’t use the data?
  • Can data be generated in a lab? If we can generate data in a lab or some other simulated environment, we can overcome other issues with the data.

Our evaluation gave us confidence that we had enough good data to design the process and build the model.

Data science + security subject matter expertise

Our classification system needs to perform like a security expert, which means the subject matter expert is as important to the process as the data scientist. To meet our goal, security experts approved training data before we fed it to the machine learning model. We used statistical sampling to provide the security experts a manageable amount of data to review. Once the model was working, we brought the security experts back in to evaluate the model in production.

With a process defined, we could design the model. To classify bugs accurately, we used a two-step machine learning operation: first, the model learned how to classify security and non-security bugs; second, it applied severity labels—critical, important, low-impact—to the security bugs. A minimal sketch of this two-step idea follows.
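
To make the two-step operation concrete, here is a minimal sketch using scikit-learn: one text classifier separates security from non-security bugs, and a second classifier, trained only on security bugs, assigns severity. The tiny inline dataset and the model choices (TF-IDF plus logistic regression over bug titles) are illustrative assumptions, not Microsoft’s production model.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data; real training used millions of labeled work items.
    titles = ["buffer overflow in parser", "XSS in comment field",
              "auth token leaked to logs", "button misaligned on resize",
              "typo on settings page", "crash when saving empty file"]
    is_security = [1, 1, 1, 0, 0, 0]
    severity = {"buffer overflow in parser": "critical",
                "XSS in comment field": "important",
                "auth token leaked to logs": "critical"}

    # Step one: security vs. non-security, trained on all bugs.
    stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
    stage1.fit(titles, is_security)

    # Step two: severity, trained only on the security-labeled bugs.
    sec_titles = list(severity)
    stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
    stage2.fit(sec_titles, [severity[t] for t in sec_titles])

    def classify(title: str) -> str:
        """Run a bug title through both steps, mirroring the two-step model."""
        if stage1.predict([title])[0] == 0:
            return "non-security"
        return "security / " + stage2.predict([title])[0]

    print(classify("heap overflow when parsing config file"))

Training the severity step only on security bugs mirrors the process described above: the second model never has to learn what a non-security bug looks like.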

Our approach in action

Building an accurate model is an iterative process that requires strong collaboration between subject matter experts and data scientists:

Data collection: The project starts with data science. We identify all the data types and sources and evaluate their quality.

Data curation and approval: Once the data scientist has identified viable data, the security expert reviews the data and confirms the labels are correct.

Modeling and evaluation: Data scientists select a data modeling technique, train the model, and evaluate model performance.

Evaluation of model in production: Security experts evaluate the model in production by monitoring the average number of bugs and manually reviewing a random sampling of bugs.

The process didn’t end once we had a model that worked. To make sure our bug modeling system keeps pace with the ever-evolving products at Microsoft, we conduct automated re-training. The data is still approved by a security expert before the model is retrained, and we continuously monitor the number of bugs generated in production.

More to come

By applying machine learning to our data, we accurately classify which work items are security bugs 99 percent of the time. The model is also 97 percent accurate at labeling critical and non-critical security bugs. This level of accuracy gives us confidence that we are catching more security vulnerabilities before they are exploited.

In the coming months, we will open source our methodology on GitHub.

In the meantime, you can read a published academic paper, Identifying security bug reports based solely on report titles and noisy data, for more details. Or download a short paper that was featured at Grace Hopper Celebration 2019.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. To learn more about our Security solutions visit our website.

The post Secure the software development lifecycle with machine learning appeared first on Microsoft Security.