Archive for May, 2020

Managing cybersecurity like a business risk: Part 1—Modeling opportunities and threats

May 28th, 2020

In recent years, cybersecurity has been elevated to a C-suite and board-level concern. This is appropriate given the stakes. Data breaches can have a significant impact on a company’s reputation and profits. But although businesses now consider cyberattacks a business risk, management of cyber risks is still siloed in technology and often not assessed in terms of other business drivers. To properly manage cybersecurity as a business risk, we need to rethink how we define and report on those risks.

This blog series, “Managing cybersecurity like a business risk,” will dig into how to update cybersecurity risk definition, reporting, and management to align with business drivers. In today’s post, I’ll talk about why we need to model opportunities as well as threats when we evaluate cyber risks. In future posts, I’ll dig into some reporting tools that businesses can use to keep business leaders informed.

Digital transformation brings both opportunities and threats

Technology innovations such as artificial intelligence (AI), the cloud, and the internet of things (IoT) have disrupted many industries. Much of this disruption has been positive for businesses and consumers alike. Organizations can better tailor products and services to targeted segments of the population, and businesses have seized on these opportunities to create new business categories or reinvent old ones.

These same technologies have also introduced new threats. Legacy companies that chase new markets risk losing loyal customers. Digital transformation can result in financial loss if big bets don’t pay off. And of course, as those of us in cybersecurity know well, cybercriminals and other adversaries have exploited the expanded attack surface and the mountains of data we collect.

The threats and opportunities of technology decisions are intertwined, and increasingly they impact not just operations but the core business. Too often decisions about digital transformation are made without evaluating cyber risks. Security is brought in at the very end to protect assets that are exposed. Cyber risks are typically managed from a standpoint of loss aversion without accounting for the possible gains of new opportunities. This approach can result in companies being either too cautious or not cautious enough. To maximize digital transformation opportunities, companies need good information that helps them take calculated risks.

It starts with a SWOT analysis

Threats and opportunities are external forces that may affect a company and all its competitors. One way to determine how your company should respond is to also understand your strengths and weaknesses, which are internal factors.

  • Strengths: Characteristics or aspects of the organization or product that give it a competitive edge.
  • Weaknesses: Characteristics or aspects of the organization or product that put it at a disadvantage compared to the competition.
  • Opportunities: Market conditions that could be exploited for benefit.
  • Threats: Market conditions that could cause damage or harm.

To crystallize these concepts, let’s consider a hypothetical brick-and-mortar retailer in the U.K. that sells stylish maternity clothes at an affordable price. In Europe, online retail is big business, and companies like ASOS and Zalando are disrupting traditional fashion. If we apply a SWOT analysis to our retailer, it might look something like this.

  • Strengths: Stylish maternity clothes sold at an affordable price; a loyal, referral-based clientele.
  • Weaknesses: Only available through brick-and-mortar stores; lacks the technology infrastructure to quickly go online; lacks security controls.
  • Opportunities: There is a market for these clothes beyond the U.K.
  • Threats: Retailers are a target for cyberattacks, and customer trends indicate shoppers will visit brick-and-mortar stores less frequently in the future.

For this company, there isn’t an obvious choice. The retailer needs to figure out a way to maintain the loyalty of its current customers while preparing for a world where in-person shopping decreases. Ideally, the company can use its strengths to overcome its weaknesses and confront threats. For example, the loyal clients who already refer a lot of business could be given incentives to refer friends through online channels. The company may also recognize that building security controls into an online business from the ground up is critical and take advantage of its steady customer base to buy some time and do it right.

Threat modeling and opportunity modeling paired together can help better define the potential gains and losses of different approaches.

Opportunity and threat modeling

Many cybersecurity professionals are familiar with threat modeling, which essentially poses the following questions, as recommended by the Electronic Frontier Foundation.

  • What do you want to protect?
  • Who do you want to protect it from?
  • How likely is it that you will need to protect it?
  • How bad are the consequences if you fail?
  • How much trouble are you willing to go through in order to try to prevent those consequences?

But once we’ve begun to consider not just the threats but also the opportunities in each business decision, it becomes clear that this approach misses half the equation. Missed opportunity is a risk that isn’t captured in threat modeling. This is where opportunity modeling becomes valuable. Some of my thinking around opportunity modeling was inspired by a talk by John Sherwood of SABSA, who suggested the following questions for modeling opportunity effectively:

  • What is the value of the asset you want to protect?
  • What is the potential gain of the opportunity?
  • How likely is it that the opportunity will be realized?
  • How likely is it that a strength can be exploited?

This gives us a framework to consider the risk from both a threat and an opportunity standpoint. Our hypothetical retailer knows it wants to protect the revenue generated by its current customers and referral model, which is the first question in each model. The other questions help quantify the potential loss if threats materialize and the potential gain if opportunities are realized. The company can use this information to better understand its ratio of risk to reward.
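One simple way to put numbers behind that ratio is a classic expected-value calculation: weight each outcome by its likelihood. The sketch below applies this to the retailer scenario; every probability and monetary figure is a hypothetical assumption for illustration, not data from the analysis above.

```python
def expected_value(probability: float, impact: float) -> float:
    """Classic expected-value estimate: likelihood times impact."""
    return probability * impact

# Threat side: a breach of the new online store (hypothetical figures).
breach_likelihood = 0.30          # assumed chance of a successful attack per year
breach_cost = 500_000             # assumed remediation + reputation loss (GBP)
expected_loss = expected_value(breach_likelihood, breach_cost)

# Opportunity side: revenue from expanding beyond the U.K. online.
expansion_likelihood = 0.60       # assumed chance the expansion succeeds
expansion_gain = 1_200_000        # assumed new annual revenue (GBP)
expected_gain = expected_value(expansion_likelihood, expansion_gain)

# A simple risk-to-reward ratio: values well below 1 favor taking the bet.
risk_reward = expected_loss / expected_gain
print(f"Expected loss: £{expected_loss:,.0f}")
print(f"Expected gain: £{expected_gain:,.0f}")
print(f"Risk/reward ratio: {risk_reward:.2f}")
```

With these made-up inputs the expected gain outweighs the expected loss by roughly five to one, which is the kind of signal that can justify a calculated risk, provided the loss estimate also reflects mitigating controls.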

It’s never easy to make big decisions in light of potential risks, but when decisions are informed by considering both the potential gains and potential losses, you can also better define a risk management strategy, including the types of controls you will need to mitigate your risk.

In my next post in the “Managing cybersecurity like a business risk” series, I’ll review some qualitative and quantitative tools you can use to manage risk.

Read more about risk management from SABSA. To learn more about Microsoft security solutions, visit our website. In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Managing cybersecurity like a business risk: Part 1—Modeling opportunities and threats appeared first on Microsoft Security.

4 identity partnerships to help drive better security

May 28th, 2020

At Microsoft, we are committed to driving innovation for our partnerships within the identity ecosystem. Together, we are enabling our customers, who live and work in a heterogeneous world, to get secure, remote access to the apps and resources they need. In this blog, we’d like to highlight how partners can help enable secure remote access to any app, including on-premises and legacy apps, as well as how to enable seamless, secure access through passwordless authentication. We will also touch on how you can increase security visibility and insights by leveraging Azure Active Directory (Azure AD) Identity Protection APIs.

Secure remote access to cloud apps

As organizations adopt remote work strategies in today’s environment, it’s important their workforce has access to all the applications they need. With the Azure AD app gallery, we work closely with independent software vendors (ISVs) to make it easy for organizations and their employees and customers to connect to and protect the applications they use. The Azure AD app gallery consists of thousands of applications that make it easy for admins to set up single sign-on (SSO) or user provisioning for their employees and customers. You can find popular collaboration applications for working remotely, such as Cisco Webex, Zoom, and Workplace from Facebook, or security-focused applications such as Mimecast and Jamf. And if you don’t find the application your organization needs, you can always nominate it here.

The Azure AD app gallery.

Secure hybrid access to your on-premises and legacy apps

As organizations enable their employees to work from home, maintaining remote access to all company apps, including on-premises and legacy apps, from any location and any device is key to safeguarding the productivity of their workforce. Azure AD offers several integrations for securing on-premises SaaS applications like SAP NetWeaver, SAP Fiori, Oracle PeopleSoft and E-Business Suite, and Atlassian JIRA and Confluence through the Azure AD app gallery. For customers who are using Akamai Enterprise Application Access (EAA), Citrix Application Delivery Controller (ADC), F5 BIG-IP Access Policy Manager (APM), or Zscaler Private Access (ZPA), Microsoft has partnerships to provide secure remote access and extend policies and controls, so businesses can manage and govern on-premises legacy apps from Azure AD without having to change how the apps work.

Our integration with Zscaler allows a company’s business partners, such as suppliers and vendors, to securely access legacy, on-premises applications through the Zscaler B2B portal.

Go passwordless with FIDO2 security keys

Passwordless methods of authentication should be part of everyone’s future. Currently, Microsoft has over 100 million active passwordless users across consumer and enterprise customers. These passwordless options include Windows Hello for Business, the Microsoft Authenticator app, and FIDO2 security keys. Why are passwords falling out of favor? To be effective, passwords must have several characteristics, including being unique to every site. Trying to remember them all can frustrate users and lead to poor password hygiene.

Since Microsoft announced the public preview of Azure AD support for FIDO2 security keys in hybrid environments earlier this year, I’ve seen more organizations, especially those with regulatory requirements, start to adopt FIDO2 security keys. This is another important area where we’ve worked with many FIDO2 security key partners to help our customers go passwordless smoothly.


Increase security visibility and insights by leveraging Azure AD Identity Protection APIs

We know from our partners that they would like to combine insights from Azure AD Identity Protection with their security tools, such as security information and event management (SIEM) or network security products. The end goal is to help them leverage all the security tools they have in an integrated way. Currently, we have the Azure AD Identity Protection API in preview, which our ISVs can leverage. For example, RSA announced at their 2020 conference that they are now leveraging our signals to better defend their customers.
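To illustrate how a SIEM integration might consume these signals, here is a minimal Python sketch that filters a risk-detection payload shaped like the Microsoft Graph riskDetections response. The payload, user names, and escalation rule are invented for illustration; consult the current Graph documentation for the live endpoint, schema, and authentication flow.

```python
import json

# Hypothetical response shaped like the Azure AD Identity Protection
# riskDetections API (Microsoft Graph). Field names follow the documented
# schema, but this payload and its users are invented for illustration.
sample_response = json.loads("""
{
  "value": [
    {"userPrincipalName": "alice@contoso.com", "riskLevel": "high",
     "riskEventType": "anonymizedIPAddress"},
    {"userPrincipalName": "bob@contoso.com", "riskLevel": "low",
     "riskEventType": "unfamiliarFeatures"}
  ]
}
""")

def high_risk_detections(payload: dict) -> list:
    """Pick out the detections a SIEM correlation rule might escalate on."""
    return [d for d in payload.get("value", []) if d.get("riskLevel") == "high"]

for detection in high_risk_detections(sample_response):
    print(detection["userPrincipalName"], detection["riskEventType"])
```

In a real integration the payload would come from an authenticated HTTPS call to Microsoft Graph rather than a hard-coded string, and the filtered detections would be forwarded to the SIEM’s ingestion endpoint.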

We’re looking forward to working with many partners to complete these integrations.

If you haven’t taken advantage of any of these types of solutions, I recommend you try them out today and let us know what you think. If you have product partnership ideas with Azure AD, feel free to connect with me via LinkedIn or Twitter.


Security baseline (DRAFT): Windows 10 and Windows Server, version 2004

May 27th, 2020

Microsoft is pleased to announce the draft release of the security configuration baseline settings for Windows 10 and Windows Server, version 2004.


Please download the draft baseline (attached to this post), evaluate the proposed baselines, and leave us your comments/feedback below.


This Windows 10 feature update brings very few new policy settings, which we list in the accompanying documentation. Only one new policy meets the criteria for inclusion in the security baseline (described below), but there are two other policies we want to highlight for your consideration.


LDAP channel binding requirements


In the Windows Server, version 1809 Domain Controller baseline, we created and enabled a new custom MS Security Guide setting called Extended Protection for LDAP Authentication (Domain Controllers only) based on the values provided here. This setting is now provided as part of Windows and no longer requires a custom ADMX. An announcement was made in March of this year and now all supported Active Directory domain controllers can configure this policy. The value will remain the same in our baseline, but the setting has moved to the new location. We are deprecating our custom setting. The new setting location is: Security Settings\Local Policies\Security Options\Domain controller: LDAP server channel binding token requirements.
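For reference, this policy is backed by the LdapEnforceChannelBinding registry value on domain controllers (under HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters). The small helper below summarizes the documented meanings of its three settings; it is an illustrative sketch, and you should confirm the values against Microsoft’s current guidance.

```python
# Documented meanings of the LdapEnforceChannelBinding registry value
# that backs this Group Policy setting, per Microsoft's LDAP channel
# binding guidance (verify against current documentation).
CHANNEL_BINDING_LEVELS = {
    0: "Never: no channel binding validation (not recommended)",
    1: "When supported: validated only for clients that send a binding token",
    2: "Always: all clients must supply valid channel binding information",
}

def describe_channel_binding(level: int) -> str:
    """Translate a registry value into its policy meaning."""
    try:
        return CHANNEL_BINDING_LEVELS[level]
    except KeyError:
        raise ValueError(f"unexpected LdapEnforceChannelBinding value: {level}")

print(describe_channel_binding(1))
```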


Note: this new policy requires the March 10, 2020 security update. (We assume that, as security-conscious baseline users, you are patching!) Details of that patch are here.


Microsoft Defender Antivirus File Hash


Microsoft Defender Antivirus continues to enable new features to better protect consumers and enterprises alike. As part of this journey we have added a new setting to compute file hashes for every executable file that is scanned, if they weren’t previously computed. You can find this new setting here: Computer Configuration\Administrative Templates\Windows Components\Microsoft Defender Antivirus\MpEngine\Enable file hash computation feature.


You should consider using this feature to improve blocking for custom indicators in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP). This new feature forces the engine to compute the full file hash for all executable files that are scanned. This can have a performance cost, but we are trying to keep it to a minimum by only generating hashes on first sight. The scenarios where performance might be impacted are when new executable content is generated (for example, by developers) or when you frequently install or update applications.


Because this setting is less helpful for customers who are not using Microsoft Defender ATP, we have not added it to the baseline, but we felt it was potentially impactful enough to call out. If you choose to enable this setting, we recommend throttling the deployment to ensure you measure the impact on your users’ machines.


Account Password Length


In the Windows 10 1903 security baselines we announced the removal of the account password expiration policy. We continue to invest in improving this experience. With Windows 10, version 2004, two new security settings have been added for password policies: ‘Minimum password length audit’ and ‘Relax minimum password length limits’. These new settings can be found under Account Policies\Password Policy.


Previously, you could not require passwords or passphrases longer than 14 characters. Now you can! Being able to require a length of more than 14 characters (up to a maximum of 128) can help better secure your environment until you can fully implement a multi-factor authentication strategy. Our vision of a passwordless future remains unchanged, but we also recognize that this takes time to fully implement across both your users and your existing applications and systems.


You should be cautious with this new setting because it can potentially cause compatibility issues with existing systems and processes. That’s why we introduced the ‘Minimum password length audit’ setting, so you can see what will happen if you increase your password/phrase length. With auditing you can set your limit anywhere between 1 and 128. Three new events are also created as part of this setting and will be logged as new SAM events in the System event log: one event for awareness, one for configuration, and one for error.


This setting will not be added to the baseline, as the minimum password length should be audited before broad enforcement due to the risk of application compatibility issues. However, we urge organizations to consider these two settings. Additional details about these new settings can be found here once the new article gets published in the coming days.


As a reminder, length alone is not always the best predictor of password strength, so we strongly recommend considering solutions such as the on-premises Azure Active Directory Password Protection, which does substring matching using a dictionary of known weak terms and rejects passwords that don’t meet a certain score.
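A quick back-of-the-envelope calculation shows why length still matters: the brute-force search space grows as alphabet size to the power of the length, so entropy is roughly length × log2(alphabet size). The sketch below is illustrative arithmetic only; it is not the scoring method Azure AD Password Protection uses, and a passphrase built from dictionary words has far less effective entropy than this naive model suggests, which is exactly why substring matching against weak terms matters.

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Naive brute-force search-space entropy: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

# A 14-character password drawn from ~95 printable ASCII characters,
# versus a 28-character all-lowercase passphrase:
short_complex = entropy_bits(14, 95)   # about 92 bits
long_simple = entropy_bits(28, 26)     # about 132 bits
print(f"14 chars, full charset: {short_complex:.0f} bits")
print(f"28 chars, lowercase:    {long_simple:.0f} bits")
```

Doubling the length more than offsets shrinking the character set in this naive model, but only if the characters are actually unpredictable.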


Tooling


Finally, we do have some enhancements for LGPO and Policy Analyzer coming. We will go into more details on these enhancements in a future blog post!


Baseline criteria


We follow a streamlined and efficient approach to baseline definition when compared with the baselines we published before Windows 10. The foundation of that approach is essentially:



  • The baselines are designed for well-managed, security-conscious organizations in which standard end users do not have administrative rights.

  • A baseline enforces a setting only if it mitigates a contemporary security threat and does not cause operational issues that are worse than the risks it mitigates.

  • A baseline enforces a default only if it is otherwise likely to be set to an insecure state by an authorized user:



    • If a non-administrator can set an insecure state, enforce the default.

    • If setting an insecure state requires administrative rights, enforce the default only if it is likely that a misinformed administrator will otherwise choose poorly.




For further illustration, see the “Why aren’t we enforcing more defaults?” section in this blog post.


As always, please let us know your thoughts by commenting on this post.



Zero Trust Deployment Guide for devices

May 26th, 2020

The modern enterprise has an incredible diversity of endpoints accessing their data. This creates a massive attack surface, and as a result, endpoints can easily become the weakest link in your Zero Trust security strategy.

Whether a device is personally owned (BYOD) or corporate-owned and fully managed, we want to have visibility into the endpoints accessing our network and ensure we’re only allowing healthy and compliant devices to access corporate resources. Likewise, we are concerned about the health and trustworthiness of the mobile and desktop apps that run on those endpoints. We want to ensure those apps are also healthy and compliant and that they prevent corporate data from leaking to consumer apps or services through malicious intent or accidental means.

Get visibility into device health and compliance

Gaining visibility into the endpoints accessing your corporate resources is the first step in your Zero Trust device strategy. Typically, companies are proactive in protecting PCs from vulnerabilities and attacks, while mobile devices often go unmonitored and without protections. To help limit risk exposure, we need to monitor every endpoint to ensure it has a trusted identity, has security policies applied, and the risk level for things like malware or data exfiltration has been measured, remediated, or deemed acceptable. For example, if a personal device is jailbroken, we can block access to ensure that enterprise applications are not exposed to known vulnerabilities.

  1. To ensure you have a trusted identity for an endpoint, register your devices with Azure Active Directory (Azure AD). Devices registered in Azure AD can be managed using tools like Microsoft Endpoint Manager, Microsoft Intune, System Center Configuration Manager, Group Policy (hybrid Azure AD join), or other supported third-party tools (using the Intune Compliance API + Intune license). Once you’ve configured your policy, share the following guidance to help users get their devices registered—new Windows 10 devices, existing Windows 10 devices, and personal devices.
  2. Once we have identities for all the devices accessing corporate resources, we want to ensure that they meet the minimum security requirements set by your organization before access is granted. With Microsoft Intune, we can set compliance rules for devices before granting access to corporate resources. We also recommend setting remediation actions for noncompliant devices, such as blocking a noncompliant device or offering the user a grace period to get compliant.
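As a rough illustration of step 2, the request body for creating such a compliance policy through Microsoft Graph might look like the sketch below. The property names follow the documented windows10CompliancePolicy resource, but the specific values are hypothetical and the authentication/token flow is omitted; verify everything against the current Graph schema before use.

```python
import json

# Sketch of a JSON body one might POST to Microsoft Graph
# (/deviceManagement/deviceCompliancePolicies) to create a Windows 10
# compliance policy in Intune. Values here are illustrative assumptions.
compliance_policy = {
    "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
    "displayName": "Baseline Windows 10 compliance",   # hypothetical policy name
    "passwordRequired": True,
    "passwordMinimumLength": 8,
    "osMinimumVersion": "10.0.18363",                  # assumed minimum build
    "bitLockerEnabled": True,
}

print(json.dumps(compliance_policy, indent=2))
```

Remediation actions for noncompliant devices (block, or grant a grace period) are configured separately as scheduled actions on the policy.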

Restricting access from vulnerable and compromised devices

Once we know the health and compliance status of an endpoint through Intune enrollment, we can use Azure AD Conditional Access to enforce more granular, risk-based access policies. For example, we can ensure that no vulnerable devices (like devices with malware) are allowed access until remediated, or ensure logins from unmanaged devices only receive limited access to corporate resources, and so on.

  1. To get started, we recommend only allowing access to your cloud apps from Intune-managed, domain-joined, and/or compliant devices. These are baseline security requirements that every device will have to meet before access is granted.
  2. Next, we can configure device-based Conditional Access policies in Intune to enforce restrictions based on device health and compliance. This will allow us to enforce more granular access decisions and fine-tune the Conditional Access policies based on your organization’s risk appetite. For example, we might want to exclude certain device platforms from accessing specific apps.
  3. Finally, we want to ensure that your endpoints and apps are protected from malicious threats. This helps ensure your data is better protected and users are at less risk of being denied access due to device health or compliance issues. We can integrate data from Microsoft Defender Advanced Threat Protection (ATP), or other Mobile Threat Defense (MTD) vendors, as an information source for device compliance policies and device Conditional Access rules.
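The policy flow in the steps above can be sketched as a toy decision function: block vulnerable devices outright, grant full access only to managed and compliant devices, and restrict everything else. This is an illustration of the logic, not how Azure AD evaluates Conditional Access internally.

```python
def access_decision(is_compliant: bool, is_managed: bool, risk_level: str) -> str:
    """Toy model of device-based Conditional Access evaluation."""
    if risk_level == "high":
        return "block"      # vulnerable device: no access until remediated
    if is_managed and is_compliant:
        return "grant"      # healthy, managed device: full access
    return "limited"        # unmanaged or noncompliant device: restricted access

print(access_decision(True, True, "low"))    # managed + compliant + low risk
print(access_decision(True, False, "low"))   # unmanaged device
print(access_decision(True, True, "high"))   # malware detected
```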

Enforcing security policies on mobile devices and apps

We have two options for enforcing security policies on mobile devices: Intune Mobile Device Management (MDM) and Intune Mobile Application Management (MAM). In both cases, once data access is granted, we want to control what the user does with the data. For example, if a user accesses a document with a corporate identity, we want to prevent that document from being saved in an unprotected consumer storage location or from being shared with a consumer communication or chat app. With Intune MAM policies in place, they can only transfer or copy data within trusted apps such as Office 365 or Adobe Acrobat Reader, and only save it to trusted locations such as OneDrive or SharePoint.

Intune ensures that the device configuration aspects of the endpoint are centrally managed and controlled. Device management through Intune enables endpoint provisioning, configuration, automatic updates, device wipe, or other remote actions. Device management requires the endpoint to be enrolled with an organizational account and allows for greater control over things like disk encryption, camera usage, network connectivity, certificate deployment, and so on.

Mobile Device Management (MDM)

  1. First, using Intune, let’s apply Microsoft’s recommended security settings to Windows 10 devices to protect corporate data (Windows 10 1809 or later required).
  2. Ensure your devices are patched and up to date using Intune—check out our guidance for Windows 10 and iOS.
  3. Finally, we recommend ensuring your devices are encrypted to protect data at rest. Intune can manage a device’s built-in disk encryption across both macOS and Windows 10.

Meanwhile, Intune MAM is concerned with management of the mobile and desktop apps that run on endpoints. Where user privacy is a higher priority, or the device is not owned by the company, app management makes it possible to apply security controls (such as Intune app protection policies) at the app level on non-enrolled devices. The organization can ensure that only apps that comply with its security controls and run on approved devices can be used to access email, files, or the web.

With Intune, MAM is possible for both managed and unmanaged devices. For example, a user’s personal phone (which is not MDM-enrolled) may have apps that receive Intune app protection policies to contain and protect corporate data after it has been accessed. Those same app protection policies can be applied to apps on a corporate-owned and enrolled tablet. In that case, the app-level protections complement the device-level protections. If the device is also managed and enrolled with Intune MDM, you can choose not to require a separate app-level PIN if a device-level PIN is set, as part of the Intune MAM policy configuration.

Mobile Application Management (MAM)

  1. To protect your corporate data at the application level, configure Intune MAM policies for corporate apps. MAM policies offer several ways to control access to your organizational data from within apps:
    • Configure data relocation policies like save-as restrictions for saving organization data or restrict actions like cut, copy, and paste outside of organizational apps.
    • Configure access policy settings like requiring a simple PIN for access or blocking managed apps from running on jailbroken or rooted devices.
    • Configure automatic selective wipe of corporate data for noncompliant devices using MAM conditional launch actions.
    • If needed, create exceptions to the MAM data transfer policy to and from approved third-party apps.
  2. Next, we want to set up app-based Conditional Access policies to ensure only approved corporate apps access corporate data.
  3. Finally, using app configuration (appconfig) policies, Intune can help eliminate app setup complexity or issues, make it easier for end users to get going, and ensure better consistency in your security policies. Check out our guidance on assigning configuration settings.
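As a rough illustration of the checks MAM policies express, the following sketch models a conditional-launch decision and a data-relocation decision. The policy shape and function names are hypothetical, not the actual Intune policy schema:

```python
# Illustrative sketch of the kinds of checks an Intune app protection
# (MAM) policy expresses. The policy shape below is hypothetical and
# not the actual Intune policy schema.

POLICY = {
    "block_jailbroken": True,
    "require_pin": True,
    "allow_data_transfer_to": {"managed_apps"},  # save-as / cut-copy-paste scope
}

def launch_allowed(device_jailbroken: bool, pin_set: bool) -> bool:
    """Conditional-launch style check: block rooted/jailbroken devices
    and require a PIN before the managed app opens corporate data."""
    if POLICY["block_jailbroken"] and device_jailbroken:
        return False
    if POLICY["require_pin"] and not pin_set:
        return False
    return True

def transfer_allowed(target: str) -> bool:
    """Data relocation check: only targets in the approved set may
    receive organizational data (e.g., paste into another managed app)."""
    return target in POLICY["allow_data_transfer_to"]

print(launch_allowed(device_jailbroken=False, pin_set=True))  # → True
print(transfer_allowed("personal_apps"))                      # → False
```

The real policies are configured in the Intune admin center and enforced by the Intune App SDK inside each managed app; the sketch simply makes the decision logic visible.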

Conclusion

We hope the above helps you deploy and successfully incorporate devices into your Zero Trust strategy. Make sure to check out the other deployment guides in the series by following the Microsoft Security blog. For more information on Microsoft Security Solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Zero Trust Deployment Guide for devices appeared first on Microsoft Security.

Zero Trust and its role in securing the new normal

May 26th, 2020 No comments

As the global crisis around COVID-19 continues, security teams have been forced to adapt to a rapidly evolving security landscape. Schools, businesses, and healthcare organizations are all getting work done from home on a variety of devices and locations, extending the potential security attack surface.

While we continue to help our customers enable secure access to apps in this “new normal,” we’re also thinking about the road ahead and the many organizations that will still need to adapt their security model to support remote work. This is especially important given that bad actors are using network access solutions like VPN as a trojan horse to deploy ransomware, and that COVID-19 themed attacks have increased in number and evolved.

Microsoft and Zscaler have partnered to provide a glimpse into how security will change in a post-COVID-19 world.

Accelerating to Zero Trust

“We’ve seen two years’ worth of digital transformation in two months.”
—Satya Nadella, CEO, Microsoft

With the bulk of end users now working remotely, organizations were forced to consider alternate ways of achieving modern security controls. Legacy network architectures that route all remote traffic through a central corporate datacenter are suddenly under enormous strain due to massive demand for remote work and rigid appliance capacity limitations. This creates latency for users, impacts productivity, and requires additional appliances that can take 30, 60, or even 90 days just to be shipped out.

To avoid these challenges, many organizations were able to enable work from home by augmenting their existing network infrastructure and capabilities with a Zero Trust security framework instead.

The Zero Trust framework empowers organizations to limit access to specific apps and resources only to the authorized users who are allowed to access them. The integrations between Microsoft Azure Active Directory (Azure AD) and Zscaler Private Access embody this framework.

For the companies who already had proof of concept underway for their Zero Trust journey, COVID-19 served as an accelerator, moving up the timelines for adoption. The ability to separate application access from network access, and secure application access based on identity and user context, such as date/time, geolocation, and device posture, was critical for IT’s ability to enable remote work. Cloud delivered technologies such as Azure AD and Zscaler Private Access (ZPA) have helped ensure fast deployment, scalability, and seamless experiences for remote users.

Both Microsoft and Zscaler anticipate that organizations not already moving toward a Zero Trust model will accelerate this transition and start to adopt one.

Securing flexible work going forward

While some organizations have had to support remote workers in the past, many are now forced to make the shift from a technical and cultural standpoint. As social distancing restrictions start to loosen, instead of remote everything we’ll begin to see organizations adopt more flexible work arrangements for their employees. Regardless of where employees are, they’ll need to be able to securely access any application, including the mission-critical “crown jewel” apps that may still run on-premises and rely on legacy protocols like HTTP or LDAP. To simplify protecting access to apps in this now flexible working style, there should be a single policy per user that provides access to an application, whether they are remote or at headquarters.

Zscaler Private Access and Azure AD help organizations enable single sign-on and enforce Conditional Access policies to ensure authorized users can securely access the specific apps they need. This includes mission-critical applications that run on-premises and may have SOC 2 and ISO 27001 compliance needs.

Today, the combination of ZPA and Azure AD is already helping organizations adopt flexible work arrangements to ensure seamless and secure access to their applications.

Secure access with Zscaler and Microsoft

Remote onboarding or offboarding for a distributed workforce

With remote and flexible work arrangements becoming the norm, organizations will need to consider how best to onboard or offboard a distributed workforce and ensure the right access can be granted when employees join, change, or leave roles. To minimize disruption, organizations will need to enable and secure Bring Your Own Device (BYOD) scenarios or leverage solutions like Windows Autopilot that help users set up new devices without any IT involvement.

To ensure employees can access applications on day one, automating the provisioning of user accounts to applications will be critical for productivity. The SCIM 2.0 standard, adopted by both Microsoft and Zscaler, can help automate simple actions in applications, such as creating or updating users, adding users to groups, or deprovisioning users. Azure AD user provisioning can help manage the end-to-end identity lifecycle and automate policy-based provisioning and deprovisioning of user accounts for applications. The ZPA + Azure AD SCIM 2.0 configuration guide shows how this works.
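To show what the provisioning service actually exchanges with an application, here is a small sketch that builds a minimal SCIM 2.0 create-user request body using the core User schema from RFC 7643. The user values below are made up for illustration; a provisioning service like Azure AD would POST a body like this to the application's /Users endpoint:

```python
import json

def scim_create_user_payload(user_name: str, given: str, family: str) -> str:
    """Build a minimal SCIM 2.0 'create user' request body using the
    RFC 7643 core User schema. Attribute values here are illustrative."""
    body = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "active": True,
    }
    return json.dumps(body)

payload = scim_create_user_payload("adele@contoso.com", "Adele", "Vance")
print(payload)
```

Update and deprovision operations follow the same pattern with PATCH and DELETE requests against the same endpoint, which is what makes SCIM practical to automate at scale.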

Powering security going forward

Security and IT teams are already under strain in this new environment, and adding an impending economic downturn into the equation means they’ll need to do more with less. The responsibility of selecting the right technology falls to security leaders. Together, Microsoft and Zscaler can help deliver secure access to applications and data on all the devices accessing your network, while empowering employees with simpler, more productive experiences. This is the power of the cloud and some of the industry’s deepest integrations. We look forward to working with you on what your security might look like after COVID-19.

Stay safe.

For more information on Microsoft Zero Trust, visit our website: Zero Trust security framework. Learn more about our guidance related to COVID-19 here and bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Zero Trust and its role in securing the new normal appeared first on Microsoft Security.

Security baseline for Microsoft Edge v83

May 26th, 2020 No comments

Microsoft is pleased to announce the enterprise-ready release of the security baseline for v83 of Microsoft Edge.

We have reviewed the new settings in version 83 of Microsoft Edge and determined that no new security settings are required. The settings recommended in the version 80 baseline will continue to be the security baseline for version 83! This means we will not be releasing our typical package. We continue to welcome feedback through the Baselines Discussion site.

Version 83 of Microsoft Edge adds 19 new computer- and user-based settings. There are now 311 enforceable Computer Configuration policy settings and 286 User Configuration policy settings. Using our streamlined approach, our baseline remains at 12 Group Policy settings. We have attached a spreadsheet with the new settings to make it easier for you to find them.

The version 80 package continues to be available as part of the Security Compliance Toolkit. Like all our baseline packages, this package includes:

  • Importable GPOs
  • A script to apply the GPOs to local policy
  • A script to import the GPOs into Active Directory Group Policy
  • A spreadsheet documenting all recommended settings (minus the version 83 settings attached to this blog)
  • Policy Analyzer rules
  • GP Reports

In case you aren’t aware, all the available settings for Microsoft Edge are documented here. Additionally, Microsoft Edge Update settings are documented here.

One ask of the community: we are trying to determine how often we should update the baseline package on the Download Center for Microsoft Edge when no new security settings have been added, and we would like your feedback. Do you think we should update it quarterly, just to keep it fresh, or only when new settings are added?


Build support for open source in your organization

May 21st, 2020 No comments

Have you ever stared at the same lines of code for hours only to have a coworker identify a bug after just a quick glance? That’s the power of community! Open source software development is guided by the philosophy that a diverse community will produce higher quality code by allowing anyone to review and contribute. Individuals and large enterprises, like Microsoft, have embraced open source to engage people who can help make solutions better. However, not all open source projects are equivalent in quality or support. And, when it comes to security tools, many organizations hesitate to adopt open source. So how should you approach selecting and onboarding the right open source solutions for your organization? Why don’t we ask the community!

Earlier this year at the RSA 2020 Conference, I had the pleasure of sitting on the panel, Open Source: Promise, Perils, and the Path Ahead. Joining me were Inigo Merino, CEO of Cienaga Systems; Dr. Kelley Misata, CEO, Sightline Security; and Lenny Zeltser, CISO, Axonius. In addition to her role at Sightline Security, Kelley also serves as the President and Executive Director of the Open Information Security Foundation (OISF), which builds Suricata, an open source threat detection engine. Lenny created and maintains a Linux distribution called REMnux that organizations use for malware analysis. Ed Moyle, a Partner at SecurityCurve, served as the moderator. Today I’ll share our collective advice for selecting open source components and persuading organizations to approve them.

Which open source solutions are right for your project?

Nobody wants to reinvent the wheel—or should I say, Python—during a big project. You’ve got enough to do already! Often it makes sense to turn to pre-built open source components and libraries. They can save you countless hours, freeing up time to focus on the features that differentiate your product. But how should you decide when to opt for open source? When presented with numerous choices, how do you select the best open source solutions for your company and project? Here are some of the recommendations we discussed during the panel discussion.

  1. Do you have the right staff? A new environment can add complexity to your project. It helps if people on the team have familiarity with the tool or have time to learn it. If your team understands the code, you don’t have to wait for a fix from the community to address bugs. As Lenny said at the conference, “The advantage of open source is that you can get in there and see what’s going on. But if you are learning as you go, it may slow you down. It helps to have the knowledge and capability to support the environment.”
  2. Is the component widely adopted? If lots of developers are using the software, it’s more likely the code is stable. With more eyes on the code, problems get surfaced and resolved faster.
  3. How active is the community? Ideally, the library and components that you use will be maintained and enhanced for years after you deploy it, but there’s no guarantee—that’s also true for commercial options, by the way. An active community makes it more likely that the project will be supported. Check to see when the base was last updated. Confirm that members answer questions from users.
  4. Is there a long-term vision for the technology? Look for a published roadmap for the project. A roadmap will give you confidence that people are committed to supporting and enhancing the project. It will also help you decide if the open source project aligns with your product roadmap. “For us, a big signal is the roadmap. Does the community have a vision? Do they have a path to get there?” asked Kelley.
  5. Is there a commercial organization associated with the project? Another way to identify a project that is here for the long term is if there is a commercial interest associated with it. If a commercial company is providing funding or support to an open source project, it’s more likely that the support will continue even as community members change. Lenny told the audience, “If there is a commercial funding arm, that gives me peace of mind that the tool is less likely to just disappear.”

Getting legal (or executives or product owners) on board

Choosing the perfect open source solution for your project won’t help if you can’t persuade product owners, legal departments, or executives to approve it. Many organizations and individuals worry about the risks associated with using open source. They may wonder if legal issues will arise if they don’t use the software properly. If the software lacks support or includes security bugs will the component put the company at risk? The following tips can help you mitigate these concerns:

  1. Adopt change management methodologies: Organizational change is hard because the unknown feels riskier than the known. Leverage existing risk management structures to help your organization evaluate and adopt open source. Familiar processes will help others become more comfortable with new tools. As Inigo said, “Recent research shows that in order to get change through, you need to reduce the perceived risk of adopting said change. To lower those barriers, leverage what the organization already has in terms of governance to transform this visceral fear of the unknown into something that is known and can be managed through existing processes.”
  2. Implement component lifecycle management: Develop a process to determine which components are acceptable for people in your organization to use. By testing components or doing static and dynamic analysis, you reduce the level of risk and can build more confidence with executives.
  3. Identify a champion: If someone in your organization is responsible for mitigating concerns with product owners and legal teams, it will speed up the process.
  4. Enlist help from the open source project: Many open source communities include people who can help you make the business case to your approvers. As Kelley said, “It’s also our job in the open source community to help have these conversations. We can’t just sit passively by and let the enterprise folks figure it out. We need to evangelize our own message. There are many open source projects with people like Lenny and me who will help you make the case.”

Microsoft believes that the only way we can solve our biggest security challenges is to work together. Open source is one way to do that. Next time you look for an open source solution consider trying today’s tips to help you select the right tools and gain acceptance in your organization.

Learn more

Next month, I’ll follow up this post with more details on how to implement component lifecycle management at your organization.

In the meantime, explore some of Microsoft’s open source solutions, such as The Microsoft Graph Toolkit, DeepSpeed, misticpy, and Attack Surface Analyzer.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to me on LinkedIn or Twitter.

The post Build support for open source in your organization appeared first on Microsoft Security.

Success in security: reining in entropy

May 20th, 2020 No comments

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities each piece carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. And when it comes to incident response and recovery, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As a responder and investigator, we want to discover what has happened, what the cause was, what the damage is, and determine how best we can fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies; things that stand out from the normal background noise of an operating network. In essence, let’s identify what’s truly unique in the system, and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data was discovered on the dark web, dated from two weeks ago. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment. Unique users, system interaction times, protocols, binary files, data files, services, registry keys, and configurations (such as rogue registry keys).

Our investigation reveals a unique service running on a member server with SQL Server. In fact, analysis shows that the service has an autostart entry in the registry and, every time the system is rebooted, starts from a file in the c:\windows\perflogs directory, an unusual location for an autostart. We haven’t seen this service before, so we search all the systems on the network for other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three, one of which is a Domain Controller.
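The hunt described above is essentially frequency analysis, sometimes called "stacking": collect the same artifact from every system and flag the outliers. Here is a minimal sketch, assuming a pre-collected inventory of autostart image paths; the host names and paths below are made up for illustration:

```python
from collections import Counter

# Hypothetical inventory: autostart image paths collected per host
# (for example, exported from Run keys and service entries).
autostarts = {
    "HOST-001": [r"c:\windows\system32\svchost.exe", r"c:\program files\av\av.exe"],
    "HOST-002": [r"c:\windows\system32\svchost.exe", r"c:\program files\av\av.exe"],
    "HOST-003": [r"c:\windows\system32\svchost.exe", r"c:\program files\av\av.exe",
                 r"c:\windows\perflogs\updater.exe"],
    "DC-01":    [r"c:\windows\system32\svchost.exe", r"c:\windows\perflogs\updater.exe"],
}

def rare_autostarts(inventory, max_hosts=2):
    """Stack identical autostart entries across hosts and return those seen
    on at most max_hosts systems: the unique indicators worth triaging."""
    counts = Counter(p for paths in inventory.values() for p in paths)
    rare = {p for p, n in counts.items() if n <= max_hosts}
    return {host: sorted(p for p in paths if p in rare)
            for host, paths in inventory.items()
            if any(p in rare for p in paths)}

print(rare_autostarts(autostarts))
# Flags c:\windows\perflogs\updater.exe on HOST-003 and DC-01 only
```

The entry common to all hosts (svchost.exe) disappears into the background noise, while the binary present on just two systems stands out immediately, which is exactly the property a low-entropy network gives an investigator.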

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of attacks, what data may have been pilfered, and foreign Internet computers calling the shots and allowing access to the environment. Additionally, any recovery efforts will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks. In many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the costs required to return to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
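Standards like these are valuable precisely because they can be turned into simple detections. Here is a minimal sketch of that idea; the event fields, role names, and rule names are illustrative, not any particular product's schema:

```python
# Minimal sketch: encoding "this should never happen here" standards as
# detection rules over log events. Field names are illustrative only.

RULES = {
    "dc-outbound-web": lambda e: e.get("role") == "domain_controller"
        and e.get("direction") == "outbound"
        and e.get("dst_port") in (80, 443),
    "domain-admin-on-tier2": lambda e: e.get("account_tier") == "domain_admin"
        and e.get("target") == "tier2_workstation",
}

def violations(events):
    """Return (rule_name, event) pairs for events that break policy."""
    return [(name, e) for e in events for name, check in RULES.items() if check(e)]

events = [
    {"role": "domain_controller", "direction": "outbound", "dst_port": 443},
    {"role": "workstation", "direction": "outbound", "dst_port": 443},
    {"account_tier": "domain_admin", "target": "tier2_workstation"},
]

for name, event in violations(events):
    print(name, event)
```

The point is not the code but the precondition: these rules only produce a short, actionable list because the environment was standardized first. On a high-entropy network, every event looks like a potential violation.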

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time-consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.

The post Success in security: reining in entropy appeared first on Microsoft Security.

Cybersecurity best practices to implement highly secured devices

May 20th, 2020 No comments

Almost three years ago, we published The Seven Properties of Highly Secured Devices, which introduced a new standard for IoT security and argued, based on an analysis of best-in-class devices, that seven properties must be present on every standalone device that connects to the internet in order to be considered secured. Azure Sphere, now generally available, is Microsoft’s entry into the market: a seven-properties-compliant, end-to-end product offering for building and deploying highly secured IoT devices.

Every connected device should be highly secured, even devices that seem simplistic, like a cactus watering sensor. The seven properties are always required. These details are captured in a new paper titled Nineteen cybersecurity best practices used to implement the seven properties of highly secured devices in Azure Sphere. It focuses on why the seven properties are always required and describes best practices used to implement them in Azure Sphere. The paper provides detailed information about the architecture and implementation of Azure Sphere and discusses design decisions and trade-offs. We hope that the new paper can assist organizations and individuals in evaluating the measures used within Azure Sphere to improve the security of IoT devices. Companies may also want to use this paper as a reference when assessing Azure Sphere or other IoT offerings. In this blog post, we discuss one issue covered in the paper: why are the seven properties always required?

Why are the seven properties applicable to every device that connects to the internet?

If an internet-connected device performs a non-critical function, why does it require all seven properties? Put differently, are the seven properties required only when a device might cause harm if it is hacked? Why would you still want to require an advanced CPU, a security subsystem, a hardware root of trust, and a set of services to secure a simple, innocuous device like a cactus water sensor?

Because any device can be the target of a hacker, and any hacked device can be weaponized.

Consider the Mirai botnet, a real-world example of IoT gone wrong. The Mirai botnet involved approximately 150,000 internet-enabled security cameras. The cameras were hacked and turned into a botnet that launched a distributed denial of service (DDoS) attack that took down internet access for a large portion of the eastern United States. For security experts analyzing this hack, the Mirai botnet was distressingly unsophisticated. It was also a relatively small-scale attack, considering that many IoT devices will sell more than 150,000 units.

Adding internet connectivity to a class of device means a single, remote attack can scale to hundreds of thousands or millions of devices. The ability to scale a single exploit to this degree is cause for reflection on the upheaval IoT brings to the marketplace. Once the decision is made to connect a device to the internet, that device has the potential to transform from a single-purpose device to a general-purpose computer capable of launching a DDoS attack against any target in the world. The Mirai botnet is also a demonstration that a manufacturer does not need to sell many devices to create the potential for a “weaponized” device.

IoT security is not only about “safety-critical” deployments. Any deployment of a connected device at scale requires the seven properties. In other words, the function, purpose, and cost of a device should not be the only considerations when deciding whether security is important.

The seven properties do not guarantee that a device will not be hacked. However, they greatly minimize certain classes of threats and make it possible to detect and respond when a hacker gains a toehold in a device ecosystem. If a device doesn’t have all seven, human practices must be implemented to compensate for the missing features. For example, without renewable security, a security incident will require disconnecting devices from the internet and then recalling those devices or dispatching people to manually patch every device that was attacked.

Implementation challenges

Some of the seven properties, such as a hardware-based root of trust and compartmentalization, require certain silicon features. Others, such as defense in depth, require a certain software architecture as well as silicon features like the hardware-based root of trust. Finally, other properties, including renewable security, certificate-based authentication, and failure reporting, require not only silicon features and certain software architecture choices within the operating system, but also deep integration with cloud services. Piecing this critical infrastructure together is difficult and prone to errors. Ensuring that a device incorporates these properties could therefore increase its cost.

These challenges led us to believe the seven properties also created an opportunity for security-minded organizations to implement these properties as a platform, which would free device manufacturers to focus on product features, rather than security. Azure Sphere represents such a platform: the seven properties are designed and built into the product from the silicon up.

Best practices for implementing the seven properties

Based on our decades of experience researching and implementing secured products, we identified 19 best practices that were put into place as part of the Azure Sphere product. These best practices provide insight into why Azure Sphere sets such a high standard for security. Read the full paper, Nineteen cybersecurity best practices used to implement the seven properties of highly secured devices in Azure Sphere, for the in-depth discussion of each of these best practices and how they—along with the seven properties themselves—guided our design decisions.

We hope that the discussion of these best practices sheds some additional light on the large number of features the Azure Sphere team implemented to protect IoT devices. We also hope that this provides a new set of questions to consider in evaluating your own IoT solution. Azure Sphere will continue to innovate and build upon this foundation with more features that raise the bar in IoT security.

To read previous blogs on IoT security, visit our blog series: https://www.microsoft.com/security/blog/iot-security/. Be sure to bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Cybersecurity best practices to implement highly secured devices appeared first on Microsoft Security.

Microsoft Build brings new innovations and capabilities to keep developers and customers secure

May 19th, 2020 No comments

As both organizations and developers adapt to the new reality of working and collaborating in a remote environment, it’s more important than ever to ensure that their experiences are secure and trusted. As part of this week’s Build virtual event, we’re introducing new Identity innovation to help foster a secure and trustworthy app ecosystem, as well as announcing a number of new capabilities in Azure to help secure customers.

New Identity capabilities to help foster a secure apps ecosystem

As organizations continue to adapt to the new requirements of remote work, we’ve seen an increase in the deployment and usage of cloud applications. These cloud applications often need access to user or company data, which has increased the need to provide strong security not just for users but applications themselves. Today we are announcing several capabilities for developers, admins, and end-users that help foster a secure and trustworthy app ecosystem:

  1. Publisher Verification allows developers to demonstrate to customers, with a verified checkmark, that the application they’re using comes from a trusted and authentic source. An application marked as publisher verified indicates that the publisher has verified their identity through the verification process with the Microsoft Partner Network (MPN) and has associated their MPN account with their application registration.
  2. Application consent policies allow admins to configure policies that determine which applications users can consent to. Admins can allow users to consent to applications that have been Publisher Verified, helping developers unlock user-driven adoption of their apps.
  3. The Microsoft Authentication Library (MSAL) for Angular is generally available, and our web library, identity.web for ASP.NET Core, is in public preview. MSAL makes it easy to implement the right authentication patterns, security features, and integration points that support any Microsoft identity—from Azure Active Directory (Azure AD) accounts to Microsoft accounts.

In addition, we’re making it easier for organizations and developers to secure, manage and build apps that connect with different types of users outside an organization with Azure AD External Identities now in preview. With Azure AD External Identities, developers can build flexible, user-centric experiences that enable self-service sign-up and sign-in and allow continuous customization without duplicating coding effort.

You can learn even more about our Identity-based solutions and additional announcements by heading over to the Azure Active Directory Tech Community blog and reading Alex Simons’ post.

Azure Security Center innovations

Azure Security Center is a unified infrastructure security management system for both Azure and hybrid cloud resources on-premises or in other clouds. We’re pleased to announce two new innovations for Azure Security Center, both of which will help secure our customers:

First, we’re announcing that the Azure Secure Score API is now available to customers, bringing even more innovation to Secure Score, which is a central component of security posture management in Azure Security Center. The recent enhancements to Secure Score (in preview) give customers an easier-to-understand and more effective way to assess risk in their environment and prioritize which actions to take first to reduce it. Secure Score also simplifies the long list of findings by grouping recommendations into a set of Security Controls, each representing an attack surface and scored accordingly.

Second, we’re announcing that suppression rules for Azure Security Center alerts are now publicly available. Customers can use suppression rules to reduce alert fatigue and focus on the most relevant threats by hiding alerts that are known to be innocuous or related to normal activities in their organization. Suppressed alerts will be hidden in Azure Security Center and Azure Sentinel but will still be available in a ‘dismissed’ state. You can learn more about suppression rules by visiting Suppressing alerts from Azure Security Center’s threat protection.
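Conceptually, a suppression rule is a predicate over alert fields: matching alerts are marked dismissed rather than deleted, so they stay queryable. The sketch below illustrates that behavior in plain Python; the field names and alert shape are hypothetical, not the actual Azure Security Center schema.

```python
# Illustrative sketch of suppression-rule behavior: alerts matching a rule
# are marked "dismissed" (still retained), everything else stays "active".
# Field names here are made up for illustration, not the real ASC schema.

def apply_suppression(alerts, rules):
    """Annotate each alert with a state based on the suppression rules."""
    results = []
    for alert in alerts:
        suppressed = any(
            all(alert.get(field) == value for field, value in rule.items())
            for rule in rules
        )
        results.append({**alert, "state": "dismissed" if suppressed else "active"})
    return results

# A rule hiding a known-innocuous alert type on a specific test resource.
rules = [{"alert_type": "SSH Anomalous Login", "resource": "test-vm"}]
alerts = [
    {"alert_type": "SSH Anomalous Login", "resource": "test-vm"},
    {"alert_type": "Suspicious Process", "resource": "prod-vm"},
]
processed = apply_suppression(alerts, rules)
```

Note that the dismissed alert is retained in the output, mirroring how suppressed alerts remain available in Azure Security Center and Azure Sentinel.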

Azure Disk Encryption and encryption & key management updates

We continue to invest in encryption options for our customers. Here are our most recent updates:

  1. Fifty more Azure services now support customer-managed keys for encryption at rest. This helps customers control their encryption keys to meet their compliance or regulatory requirements. The full list of services is here. We have now made this capability part of the Azure Security Benchmark, so that customers can govern use of all their Azure services in a consistent manner.
  2. Azure Disk Encryption helps protect data on disks used with VMs and virtual machine scale sets, and we have now added the ability to use Azure Disk Encryption to secure Red Hat Enterprise Linux BYOS Gold Images. The subscription must be registered before Azure Disk Encryption can be enabled.

Azure Key Vault innovation

Azure Key Vault is a unified service for secret management, certificate management, and encryption key management, backed by FIPS-validated hardware security modules (HSMs). Here are some of the new capabilities we are bringing for our customers:

  1. Enhanced security with Private Link—This is an optional control that enables customers to access their Azure Key Vault over a private endpoint in their virtual network. Traffic between their virtual network and Azure Key Vault flows over the Microsoft backbone network, thus providing additional assurance.
  2. More choices for BYOK—Some of our customers generate encryption keys outside Azure and import them into Azure Key Vault, in order to meet their regulatory needs or to centralize where their keys are generated. Now, in addition to nCipher nShield HSMs, they can also use SafeNet Luna HSMs or Fortanix SDKMS to generate their keys. These additions are in preview.
  3. Make it easier to rotate secrets—Earlier we released a public preview of notifications for keys, secrets, and certificates. This allows customers to receive events at each point of the lifecycle of these objects and define custom actions. A common action is rotating secrets on a schedule so that they can limit the impact of credential exposure. You can see the new tutorial here.
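The rotation scenario in item 3 boils down to a near-expiry check: when a secret's remaining lifetime falls inside a rotation window, a custom action rotates it. Below is a minimal, product-agnostic sketch of that decision; the 30-day window and the secret records are illustrative assumptions, not Key Vault's actual event payload.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the "rotate before expiry" decision that lifecycle
# notifications enable. The window and data below are illustrative only.

def secrets_due_for_rotation(secrets, now, window_days=30):
    """Return the names of secrets expiring within the rotation window."""
    window = timedelta(days=window_days)
    return [s["name"] for s in secrets if s["expires"] - now <= window]

now = datetime(2020, 5, 19)
secrets = [
    {"name": "db-password", "expires": datetime(2020, 6, 1)},  # expires in 13 days
    {"name": "api-key", "expires": datetime(2021, 1, 1)},      # far from expiry
]
due = secrets_due_for_rotation(secrets, now)
```

In an event-driven setup this check is effectively done for you: the notification fires near expiry, and the custom action only has to perform the rotation itself.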

Platform security innovation

Platform security for customers’ data recently took a big step forward with the General Availability of Azure Confidential Computing. Using the latest Intel SGX CPU hardware backed by attestation, Azure provides a new class of VMs that protects the confidentiality and integrity of customer data while in memory (or “in-use”), ensuring that cloud administrators and datacenter operators with physical access to the servers cannot access the customer’s data.

Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data during a support request. In addition to expanded coverage of services in Customer Lockbox for Microsoft Azure, this feature is now available in preview for our customers in Azure Government cloud.

You can learn more about our Azure security offerings by heading to the Azure Security Center Tech Community.

The post Microsoft Build brings new innovations and capabilities to keep developers and customers secure appeared first on Microsoft Security.

Operational resilience in a remote work world

May 18th, 2020 No comments

Microsoft CEO Satya Nadella recently said, “We have seen two years’ worth of digital transformation in two months.” This is a result of many organizations having to adapt to the new world of document sharing and video conferencing as they become distributed organizations overnight.

At Microsoft, we understand that while the current health crisis we face together has served as this forcing function, some organizations might not have been ready for this new world of remote work, financially or organizationally. Just last summer, a simple lightning strike caused the U.K.’s National Grid to suffer the biggest blackout in decades. It affected homes across the country, shut down traffic signals, and closed some of the busiest train stations in the middle of the Friday evening rush hour. Trains needed to be manually rebooted, causing delays and disruptions. And when malware shut down the cranes and security gates at Maersk shipping terminals, as well as most of the company’s IT network—from the booking site to the systems handling cargo manifests—it took two months to rebuild all the software systems, and three months before all cargo in transit was tracked down. Recovery depended on a single server that had accidentally been offline during the attack because its power had been cut.

Cybersecurity underpins operational resilience as more organizations adapt to enabling secure remote work options, whether in the short or long term. Whether a disruption is natural or manmade, the difference between success and struggle comes down to a strategic combination of planning, response, and recovery. To maintain cyber resilience, organizations should regularly evaluate their risk threshold and their ability to operationally execute processes through a combination of human effort and technology products and services.

My advice is often a three-pronged approach: turn on multi-factor authentication (MFA) for 100 percent of your employees, 100 percent of the time; use Secure Score to increase your organization’s security posture; and maintain a mature patching program that includes containment and isolation of devices that cannot be patched. At the same time, we must understand that not every organization’s cybersecurity team is as mature as the next.

Organizations must now be able to provide their people with the right resources so they are able to securely access data, from anywhere, 100 percent of the time. Every person with corporate network access, including full-time employees, consultants, and contractors, should be regularly trained to develop a cyber-resilient mindset. They shouldn’t just adhere to a set of IT security policies around identity-based access control, but they should also be alerting IT to suspicious events and infections as soon as possible to help minimize time to remediation.

Our new normal means that risks are no longer limited to commonly recognized sources such as cybercriminals, malware, or even targeted attacks. Moving to a remote work environment without a resilience plan that includes cyber resilience increases an organization’s risk.

Before COVID, we knew that while a majority of firms have a disaster recovery plan on paper, nearly a quarter never test it, and only 42 percent of global executives are confident their organization could recover from a major cyber event without it affecting their business.

Operational resilience cannot be achieved without a true commitment to, and investment in, cyber resilience. We want to help empower every organization on the planet by continuing to share our learnings to help you reach the state where core operations and services won’t be disrupted by geopolitical or socioeconomic events, natural disasters, or even cyber events.

Learn more about our guidance related to COVID-19 here, and bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Operational resilience in a remote work world appeared first on Microsoft Security.

Open-sourcing new COVID-19 threat intelligence

May 14th, 2020 No comments

A global threat requires a global response. While the world faces the common threat of COVID-19, defenders are working overtime to protect users all over the globe from cybercriminals using COVID-19 as a lure to mount attacks. As a security intelligence community, we are stronger when we share information that offers a more complete view of attackers’ shifting techniques. This more complete view enables us all to be more proactive in protecting, detecting, and defending against attacks.

At Microsoft, our security products provide built-in protections against these and other threats, and we’ve published detailed guidance to help organizations combat current threats (Responding to COVID-19 together). Our threat experts are sharing examples of malicious lures and we have enabled guided hunting of COVID-themed threats using Azure Sentinel Notebooks. Microsoft processes trillions of signals each day across identities, endpoint, cloud, applications, and email, which provides visibility into a broad range of COVID-19-themed attacks, allowing us to detect, protect, and respond to them across our entire security stack. Today, we take our COVID-19 threat intelligence sharing a step further by making some of our own indicators available publicly for those that are not already protected by our solutions. Microsoft Threat Protection (MTP) customers are already protected against the threats identified by these indicators across endpoints with Microsoft Defender Advanced Threat Protection (ATP) and email with Office 365 ATP.

In addition, we are publishing these indicators for those not protected by Microsoft Threat Protection to raise awareness of attackers’ shift in techniques, how to spot them, and how to enable your own custom hunting. The indicators are available in two ways: in the Azure Sentinel GitHub repository and through the Microsoft Graph Security API. For enterprise customers who use MISP for storing and sharing threat intelligence, the indicators can easily be consumed via a MISP feed.

This threat intelligence is provided for use by the wider security community, as well as customers who would like to perform additional hunting, as we all defend against malicious actors seeking to exploit the COVID crisis.

This COVID-specific threat intelligence feed represents a start at sharing some of Microsoft’s COVID-related IOCs. We will continue to explore ways to improve the data over the duration of the crisis. While some threats and actors are still best defended more discreetly, we are committed to greater transparency and to taking community feedback on what types of information are most useful to defenders in protecting against COVID-related threats. This is a time-limited feed. We are maintaining this feed through the peak of the outbreak to help organizations focus on recovery.

Protection in Azure Sentinel and Microsoft Threat Protection

Today’s release includes file hash indicators related to email-based attachments identified as malicious and attempting to trick users with COVID-19 or Coronavirus-themed lures. The guidance below provides instructions on how to access and integrate this feed in your own environment.

For Azure Sentinel customers, these indicators can either be imported directly into Azure Sentinel using a Playbook or accessed directly from queries.

The Azure Sentinel Playbook that Microsoft has authored will continuously monitor and import these indicators directly into your Azure Sentinel ThreatIntelligenceIndicator table. This Playbook will match them with your event data and generate security incidents when the built-in threat intelligence analytic templates detect activity associated with these indicators.

These indicators can also be accessed directly from Azure Sentinel queries as follows:

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"));
covidIndicators

Azure Sentinel logs.

A sample detection query is also provided in the Azure Sentinel GitHub. With the table definition above, it is as simple as:

  1. Join the indicators against the logs ingested into Azure Sentinel as follows:
covidIndicators
| join ( CommonSecurityLog | where TimeGenerated >= ago(7d)
| where isnotempty(FileHashValue)
) on $left.FileHashValue == $right.FileHash
  2. Then, select “New alert rule” to configure Azure Sentinel to raise incidents based on this query returning results.
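The join in the detection query above is essentially a set-membership test: any log event whose file hash appears in the indicator feed is a hit. A small Python sketch of that matching logic (the hashes and event records are made up; field names loosely mirror the query's columns):

```python
# Emulates the KQL inner join above: keep only log events whose file hash
# appears in the indicator feed. Data below is fabricated for illustration.

indicators = {"abc123", "def456"}  # FileHashValue column from the feed

log_events = [
    {"DeviceName": "host-1", "FileHash": "abc123"},  # matches an indicator
    {"DeviceName": "host-2", "FileHash": "ffffff"},  # no match
]

# Inner join on the hash, like "on $left.FileHashValue == $right.FileHash".
hits = [event for event in log_events if event["FileHash"] in indicators]
```

Each surviving row corresponds to a query result that would trigger the alert rule configured in step 2.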

CyberSecurityDemo in Azure Sentinel logs.

You should begin to see Alerts in Azure Sentinel for any detections related to these COVID threat indicators.

Microsoft Threat Protection provides protection for the threats associated with these indicators. Attacks using these COVID-19-themed indicators are blocked by Office 365 ATP and Microsoft Defender ATP.

While MTP customers are already protected, they can also make use of these indicators for additional hunting scenarios using the MTP Advanced Hunting capabilities.

Here is a hunting query to see if any process created a file matching a hash on the list.

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"))
| where FileHashType == 'sha256' and TimeGenerated > ago(1d);
covidIndicators
| join (DeviceFileEvents
| where Timestamp > ago(1d)
| where ActionType == 'FileCreated'
| take 100) on $left.FileHashValue  == $right.SHA256

Advanced hunting in Microsoft Defender Security Center.

This is an Advanced Hunting query in MTP that searches for any recipient of an attachment on the indicator list and sees if any recent anomalous log-ons happened on their machine. While COVID threats are blocked by MTP, users targeted by these threats may be at risk for non-COVID related attacks and MTP is able to join data across device and email to investigate them.

let covidIndicators = (externaldata(TimeGenerated:datetime, FileHashValue:string, FileHashType: string )
[@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"]
with (format="csv"))
| where FileHashType == 'sha256' and TimeGenerated > ago(1d);
covidIndicators
| join (EmailAttachmentInfo
| where Timestamp > ago(1d)
| project NetworkMessageId, SHA256
) on $left.FileHashValue == $right.SHA256
| join (
EmailEvents
| where Timestamp > ago (1d)
) on NetworkMessageId
| project TimeEmail = Timestamp, Subject, SenderFromAddress, AccountName = tostring(split(RecipientEmailAddress, "@")[0])
| join (
DeviceLogonEvents
| project LogonTime = Timestamp, AccountName, DeviceName
) on AccountName
| where (LogonTime - TimeEmail) between (0min.. 90min)
| take 10

Advanced hunting in Microsoft 365 security.

Connecting an MISP instance to Azure Sentinel

The indicators published on the Azure Sentinel GitHub page can be consumed directly via MISP’s feed functionality. We have published details on doing this at this URL: https://aka.ms/msft-covid19-misp. Please refer to the Azure Sentinel documentation on connecting data from threat intelligence providers.

Using the indicators if you are not an Azure Sentinel or MTP customer

The Azure Sentinel GitHub repository is public, so anyone can use the indicators: https://aka.ms/msft-covid19-Indicators
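Because the feed is a plain CSV of timestamped file hashes, it can be consumed from any tooling. A minimal sketch of parsing it and checking a hash against it follows; the column layout matches the externaldata schema used in the queries above (TimeGenerated, FileHashValue, FileHashType), and the sample rows are fabricated. A real consumer may need to handle a header row and transport errors.

```python
import csv
import io
import urllib.request

# Public feed URL, as given in the Sentinel queries in this post.
FEED_URL = (
    "https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/"
    "Sample%20Data/Feeds/Microsoft.Covid19.Indicators.csv"
)

def parse_indicator_hashes(csv_text):
    """Extract the FileHashValue column from feed CSV text.

    Assumed columns: TimeGenerated, FileHashValue, FileHashType.
    """
    return {row[1] for row in csv.reader(io.StringIO(csv_text)) if len(row) >= 2}

def fetch_indicator_hashes(url=FEED_URL):
    """Download the public feed and return the set of indicator hashes."""
    with urllib.request.urlopen(url) as resp:
        return parse_indicator_hashes(resp.read().decode("utf-8"))

# Fabricated sample rows in the assumed column layout.
sample = (
    "2020-05-14T00:00:00Z,abc123,sha256\n"
    "2020-05-14T00:00:00Z,def456,sha256\n"
)
hashes = parse_indicator_hashes(sample)
```

A scanner would then test observed file hashes with a simple membership check, e.g. `"abc123" in hashes`.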

Examples of phishing campaigns in this threat intelligence

The following is a small sample set of the types of COVID-themed phishing lures using email attachments that will be represented in this feed. Beneath each screenshot are the relevant hashes and metadata.

Figure 1: Spoofing WHO branding with “cure” and “vaccine” messaging with a malicious .gz file.

Name: CURE FOR CORONAVIRUS_pdf.gz

World Health Organization phishing email.

Figure 2: Spoofing Red Cross Safety Tips with malicious .docm file.

Name: COVID-19 SAFETY TIPS.docm

Red Cross phishing email.

Figure 3: South African banking lure promoting COVID-19 financial relief with malicious .html files.

Name: SBSA-COVID-19-Financial Relief.html

Financial relief phishing email.

Figure 4: French language spoofed correspondence from the WHO with malicious XLS Macro file.

Name: ✉-Covid-19 Relief Plan5558-23636sd.htm

Coronavirus-themed phishing email.

If you have questions or feedback on this COVID-19 feed, please email msft-covid19-ti@microsoft.com.

The post Open-sourcing new COVID-19 threat intelligence appeared first on Microsoft Security.

Solving Uninitialized Stack Memory on Windows

May 13th, 2020 No comments

This blog post outlines the work that Microsoft is doing to eliminate uninitialized stack memory vulnerabilities from Windows and why we’re on this path. This blog post will be broken down into a few parts that folks can jump to: Uninitialized Memory Background; Potential Solutions to Uninitialized Memory Vulnerabilities; InitAll – Automatic Initialization; Interesting Findings …


The post Solving Uninitialized Stack Memory on Windows appeared first on Microsoft Security Response Center.

Secured-core PCs help customers stay ahead of advanced data theft

May 13th, 2020 No comments

Researchers at the Eindhoven University of Technology recently revealed information around “Thunderspy,” an attack that relies on leveraging direct memory access (DMA) functionality to compromise devices. An attacker with physical access to a system can use Thunderspy to read and copy data even from systems that have encryption with password protection enabled.

Secured-core PCs provide customers with Windows 10 systems that come configured from OEMs with a set of hardware, firmware, and OS features enabled by default, mitigating Thunderspy and any similar attacks that rely on malicious DMA.

How Thunderspy works

Like any other modern attack, “Thunderspy” relies on not one but multiple building blocks being chained together. Below is a summary of how Thunderspy can be used to access a system when the attacker does not have the password needed to sign in. A video from the Thunderspy research team showing the attack is available here.

Step 1: A serial peripheral interface (SPI) flash programmer called Bus Pirate is plugged into the SPI flash of the device being attacked. This gives access to the Thunderbolt controller firmware and allows an attacker to copy it over to the attacker’s device.

Step 2: The Thunderbolt Controller Firmware Patcher (tcfp), which is developed as part of Thunderspy, is used to disable the security mode enforced in the Thunderbolt firmware copied over using the Bus Pirate device in Step 1.

Step 3: The modified insecure Thunderbolt firmware is written back to the SPI flash of the device being attacked.

Step 4: A Thunderbolt-based attack device is connected to the device being attacked, leveraging the PCILeech tool to load a kernel module that bypasses the Windows sign-in screen.

Diagram showing how the Thunderspy attack works

The result is that an attacker can access a device without knowing the sign-in password for the device. This means that even if a device was powered off or locked by the user, someone that could get physical access to the device in the time it takes to run the Thunderspy process could sign in and exfiltrate data from the system or install malicious software.

Secured-core PC protections

In order to counteract these targeted, modern attacks, Secured-core PCs use a defense-in-depth strategy that leverages features like Windows Defender System Guard and virtualization-based security (VBS) to mitigate risk across multiple areas, delivering comprehensive protection against attacks like Thunderspy.

Mitigating Steps 1 to 4 of the Thunderspy attack with Kernel DMA protection

Secured-core PCs ship with hardware and firmware that support Kernel DMA protection, which is enabled by default in the Windows OS. Kernel DMA protection relies on the Input/Output Memory Management Unit (IOMMU) to block external peripherals from starting and performing DMA unless an authorized user is signed in and the screen is unlocked. Watch this video from the 2019 Microsoft Ignite to see how Windows mitigates DMA attacks.

This means that even if an attacker was able to copy a malicious Thunderbolt firmware to a device, the Kernel DMA protection on a Secured-core PC would prevent any accesses over the Thunderbolt port unless the attacker gains the user’s password in addition to being in physical possession of the device, significantly raising the degree of difficulty for the attacker.

Hardening protection for Step 4 with Hypervisor-protected code integrity (HVCI)

Secured-core PCs ship with hypervisor-protected code integrity (HVCI) enabled by default. HVCI utilizes the hypervisor to enable VBS and to isolate the code integrity subsystem, which verifies that all kernel code in Windows is signed, from the normal kernel. In addition to isolating the checks, HVCI also ensures that kernel code cannot be both writable and executable, ensuring that unverified code does not execute.

HVCI helps to ensure that malicious kernel modules like the one used in Step 4 of the Thunderspy attack cannot execute easily as the kernel module would need to be validly signed, not revoked, and not rely on overwriting executable kernel code.

Modern hardware to combat modern threats

A growing portfolio of Secured-core PC devices from the Windows OEM ecosystem is available for customers. They provide a consistent guarantee against modern threats like Thunderspy, along with the variety of choices that customers expect when acquiring Windows hardware. You can learn more here: https://www.microsoft.com/en-us/windowsforbusiness/windows10-secured-core-computers

 

Nazmus Sakib

Enterprise and OS Security 

The post Secured-core PCs help customers stay ahead of advanced data theft appeared first on Microsoft Security.

Empowering your remote workforce with end-user security awareness

May 13th, 2020 No comments

COVID-19 has rapidly transformed how we all work. Organizations need quick and effective user security and awareness training to address the swiftly changing needs of the new normal for many of us. To help our customers deploy user training quickly, easily and effectively, we are announcing the availability of the Microsoft Cybersecurity Awareness Kit, delivered in partnership with Terranova Security. For those of you ready to deploy training right now, access your kit here. For more details, read on.

Work at home may happen on unmanaged and shared devices, over insecure networks, and in unauthorized or non-compliant apps. The new environment has put cybersecurity decision-making in the hands of remote employees. In addition to the rapid dissolution of corporate perimeters, the threat environment is evolving rapidly as malicious actors take advantage of the current situation to mount coronavirus-themed attacks. As security professionals, we can empower our colleagues to protect themselves and their companies. But choosing topics, producing engaging content, and managing delivery can be challenging, sucking up time and resources. Our customers need immediately deployable and context-specific security training.

CYBERSECURITY AWARENESS KIT

At RSA 2020 this year, we announced our partnership with Terranova Security to deliver integrated phish simulation and user training in Office 365 Advanced Threat Protection later this year. Our partnership combines Microsoft’s leading-edge technology, expansive platform capabilities, and unparalleled threat insights with Terranova Security’s market-leading expertise, human-centric design, and pedagogical rigor. Our intelligent solution will turbo-charge the effectiveness of phish simulation and training while simplifying administration and reporting. The solution will create and recommend context-specific and hyper-targeted simulations, enabling you to customize your simulations to mimic real threats seen in different business contexts and train users based on their risk level. It will automate simulation management from end to end, providing robust analytics to inform the next cycle of simulations and enable rich reporting.

Our Cybersecurity Awareness Kit now makes available a subset of this user-training material relevant to COVID-19 scenarios to aid security professionals tasked with training their newly remote workforces. The kit includes videos, interactive courses, posters, and infographics like the one below. You can use these materials to train your remote employees quickly and easily.

Beware of COVID-19 Cyber Scams

For Security Professionals, we have created a simple way to host and deliver the training material within your own environment or direct your users to the Microsoft 365 security portal, where the training is hosted as seen below. All authenticated Microsoft 365 users will be able to access the training on the portal. Admins will also see the option to download the kit. Follow the simple steps, detailed in the README, to deploy the awareness kit to your remote workforce.


ACCESSING THE KIT

All Microsoft 365 customers can access the kit and directions on the Microsoft 365 Security and Compliance Center through this link. If you are not a Microsoft 365 customer or would like to share the training with family and friends who are not employees of your organization, Terranova Security is providing free training material for end-users.

Deploying quick and effective end-user training to empower your remote workforce is one of the ways Microsoft can help customers work productively and securely through COVID-19. For more resources to help you through these times, visit Microsoft’s Secure Remote Work page for the latest information.

The post Empowering your remote workforce with end-user security awareness appeared first on Microsoft Security.

CISO stress-busters: post #1 overcoming obstacles

May 11th, 2020 No comments

As part of the launch of the U.S. space program’s moon shot, President Kennedy famously said we do these things “not because they are easy, but because they are hard.” The same can be said for the people responsible for security at their organizations; it is not a job one takes because it is easy. But it is critically important to keep our digital lives and work safe. And for the CISOs and leaders of the world, it is a job that is more than worth the hardships.

Recent research from Nominet paints a concerning picture of a few of those hardships. Forty-eight percent of CISO respondents indicated work stress had negatively impacted their mental health, almost double the number from last year’s survey. Thirty-one percent reported job stress had negatively impacted their physical health, and 40 percent have seen their job stress impact their personal lives. Add a fairly rapid churn rate (26 months on average) to all that stress and it’s clear CISOs are managing a tremendous amount of stress every day. And when crises hit, from incident response after a breach to a suddenly remote workforce after COVID-19, that stress only shoots higher.

Which is why we’re starting this new blog series called “CISO stress-busters.” In the words of CISOs from around the globe, we’ll be sharing insights, guidance, and support from peers on the front lines of the cyber workforce. Kicking us off—the main challenges that CISOs face and how they turn those obstacles into opportunity. The goal of the series is to be a bit of chicken (or chik’n for those vegans out there) soup for the CISO’s soul.

Today’s post features wisdom from three CISOs/Security Leaders:

  • TM Ching, Security CTO at DXC Technology
  • Jim Eckart, (former) CISO at Coca-Cola
  • Jason Golden, CISO at Mainstay Technologies

Clarifying contribution

Ask five different CEOs what their CISOs do and, after the high-level “manage security” answer, you’ll probably get five very different explanations. This is partly because CISO responsibility can vary widely from company to company. So, it’s no surprise that many of the CISOs we interviewed touched on this point.

TM Ching summed it up this way, “Demonstrating my role to the organization can be a challenge—a role like mine may be perceived as symbolic” or that security is just here to “slow things down.” For Jason, “making sure that business leaders understand the difference between IT Operations, Cybersecurity, and InfoSec” can be difficult because execs “often think all of those disciplines are the same thing” and that since IT Ops has the products and solutions, they own security. Jim also bumped up against confusion about the security role with multiple stakeholders pushing and pulling in different directions like “a CIO who says ‘here is your budget,’ a CFO who says ‘why are you so expensive?’ and a general counsel who says ‘we could be leaking information everywhere.’”

What works:

  • Educate Execs—about the role of a CISO. Helping them “understand that it takes a program, that it’s a discipline.” One inflection point is after a breach: “you may be sitting there with an executive, the insurance company, their attorneys, maybe a forensics company and it always looks the same. The executive is looking down the table at the wide-eyed IT person saying ‘What happened?’” It’s an opportunity to educate, to help “make sure the execs understand the purpose of risk management.”—Jason Golden. To see how to do this, watch Microsoft CISO Series Episode 2 Part 1: Security is everyone’s business.
  • Show Don’t Tell—“It is important to constantly demonstrate that I am here to help them succeed, and not to impose onerous compliance requirements that stall their projects.”—TM Ching
  • Accountability Awareness—CISOs do a lot, but one thing they shouldn’t do is to make risk decisions for the business in a vacuum. That’s why it’s critical to align “all stakeholders (IT, privacy, legal, financial, security, etc.) around the fact that cybersecurity and compliance are business risk issues and not IT issues. IT motions are (and should be) purely in response to the business’ decision around risk tolerance.”—Jim Eckart

Exerting influence

Fans of Boehm’s curve know that the earlier security is introduced into a process, the less expensive it is to fix defects and flaws. But it’s not always easy for CISOs to get security a seat at the table, whether it’s early in the ideation process for a new customer-facing application or during financial negotiations to move critical workloads to the cloud. As TM put it, “Exerting influence to ensure that projects are secured at Day 0. This is possibly the hardest thing to do.” And because “some business owners do not take negative news very well,” telling them their new app baby is “security ugly” the day before launch can be a gruesome task. And as Jason pointed out, “it’s one thing to talk hypothetically about things like configuration management and change management and here are the things that you need to do to meet those controls so you can keep your contract. It’s a different thing to get that embedded in operations so that IT and HR all the way through finance are following the rules for change management and configuration management.”

What Works:

  • Negotiate engagement—To avoid the last minute “gotchas” or bolting on security after a project has deployed, get into the conversation as early as possible. This isn’t easy, but as TM explains, it can be done. “It takes a lot of negotiations to convince stakeholders why it will be beneficial for them in the long run to take a pause and put the security controls in place, before continuing with their projects.”
  • Follow frameworks—Well-known frameworks like the NIST Cybersecurity Framework, NIST SP800-53, and SP800-37 can help CISOs “take things from strategy to operations” by providing baselines and best practices for building security into the entire organization and systems lifecycle. And that will pay off in the long run; “when the auditors come calling, they’re looking for evidence that you’re following your security model and embedding that throughout the organization.” —Jason

Cultivating culture

Wouldn’t it be wonderful if every company had a security mindset and understood the benefits of having a mature, well-funded security and risk management program? If every employee understood what a phish looks like and why they should report it? Unfortunately, most companies aren’t laser-focused on security, leaving that education work up to the CISO and their team. And having those conversations with stakeholders who sometimes have conflicting agendas requires technical depth and robust communication skills. That’s not easy. As Jim points out, “it’s a daunting scope of topics to be proficient in at all levels.”

What works:

  • Human firewalls—All the tech controls in the world won’t stop 100 percent of attacks, people need to be part of the solution too. “We can address administrative controls, technical controls, physical controls, but you also need to address the culture and human behavior, or the human firewalls. You know you’re only going to be marginally successful if you don’t engage employees too.” —Jason
  • Know your audience—CISOs need to cultivate “depth and breadth. On any given day, I needed to move from board-level conversations (where participants barely understand security) all the way to the depths of zero day vulnerabilities, patching, security architecture.” —Jim

Did you find these insights helpful? What would you tell your fellow CISOs about overcoming obstacles? What works for you? Please reach out to me on LinkedIn and let me know what you thought of this article and if you’re interested in being interviewed for one of our upcoming posts.

The post CISO stress-busters: post #1 overcoming obstacles appeared first on Microsoft Security.

Microsoft researchers work with Intel Labs to explore new deep learning approaches for malware classification

May 8th, 2020 No comments

The opportunities for innovative approaches to threat detection through deep learning, a category of algorithms within the larger framework of machine learning, are vast. Microsoft Threat Protection today uses multiple deep learning-based classifiers that detect advanced threats, for example, evasive malicious PowerShell.

In continued exploration of novel detection techniques, researchers from Microsoft Threat Protection Intelligence Team and Intel Labs are collaborating to study new applications of deep learning for malware classification, specifically:

  • Leveraging deep transfer learning technique from computer vision to static malware classification
  • Optimizing deep learning techniques in terms of model size and leveraging platform hardware capabilities to improve execution of deep-learning malware detection approaches

For the first part of the collaboration, the researchers built on Intel’s prior work on deep transfer learning for static malware classification and used a real-world dataset from Microsoft to ascertain the practical value of approaching the malware classification problem as a computer vision task. The basis for this study is the observation that if malware binaries are plotted as grayscale images, the textural and structural patterns can be used to effectively classify binaries as either benign or malicious, as well as cluster malicious binaries into respective threat families.

The researchers used an approach that they called static malware-as-image network analysis (STAMINA). Using the dataset from Microsoft, the study showed that the STAMINA approach achieves high accuracy in detecting malware with low false positives.

The results and further technical details of the research are listed in the paper STAMINA: Scalable deep learning approach for malware classification and set the stage for further collaborative exploration.

The role of static analysis in deep learning-based malware classification

While static analysis is typically associated with traditional detection methods, it remains an important building block for AI-driven detection of malware. It is especially useful for pre-execution detection engines: static analysis disassembles code without having to run applications or monitor runtime behavior.

Static analysis produces metadata about a file. Machine learning classifiers on the client and in the cloud then analyze the metadata and determine whether a file is malicious. Through static analysis, most threats are caught before they can even run.

For more complex threats, dynamic analysis and behavior analysis build on static analysis to provide more features and build more comprehensive detection. Finding ways to perform static analysis at scale and with high effectiveness benefits overall malware detection methodologies.

To this end, the research borrowed knowledge from the computer vision domain to build an enhanced static malware detection framework that leverages deep transfer learning to train directly on portable executable (PE) binaries represented as images.

Analyzing malware represented as images

To establish the practicality of the STAMINA approach, which posits that malware can be classified at scale by performing static analysis on malware code represented as images, the study covered three main steps: image conversion, transfer learning, and evaluation.

Diagram showing the steps for the STAMINA approach: pre-processing, transfer learning, and evaluation

First, the researchers prepared the binaries by converting them into two-dimensional images. This step involved pixel conversion, reshaping, and resizing. The binaries were converted into a one-dimensional pixel stream by assigning each byte a value between 0 and 255, corresponding to pixel intensity. Each pixel stream was then transformed into a two-dimensional image by using the file size to determine the width and height of the image.
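The pixel-conversion step described above can be sketched in a few lines of Python. This is an illustrative approximation, not the paper’s code: the width heuristic here is a simple square root, whereas STAMINA derives width from a table of file-size ranges.

```python
import math

def binary_to_image(path):
    """Convert a binary file into a 2-D grayscale 'image' (list of pixel rows).

    Each byte maps directly to a pixel intensity in [0, 255]. The width is
    chosen from the file size (a square-root heuristic here; the STAMINA
    paper uses a lookup table keyed on file-size ranges).
    """
    with open(path, "rb") as f:
        data = f.read()
    pixels = list(data)  # each byte is already an int in 0..255
    width = max(1, int(math.sqrt(len(pixels))))
    height = math.ceil(len(pixels) / width)
    # Pad the final row with zeros so the image is rectangular.
    pixels += [0] * (width * height - len(pixels))
    return [pixels[r * width:(r + 1) * width] for r in range(height)]
```

The resulting 2-D array can then be resized and fed to an image classifier like any other grayscale picture.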

The second step was to use transfer learning, a technique for overcoming the isolated learning paradigm and utilizing knowledge acquired for one task to solve related ones. Transfer learning has enjoyed tremendous success within several different computer vision applications. It accelerates training time by bypassing the need to search for optimized hyperparameters and different architectures—all this while maintaining high classification performance. For this study, the researchers used Inception-v1 as the base model.

The study was performed on a dataset of 2.2 million PE file hashes provided by Microsoft. This dataset was temporally split into 60:20:20 segments for training, validation, and test sets, respectively.
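A temporal split like the one described can be sketched as follows; the helper name and the (timestamp, item) tuple layout are illustrative, not from the study.

```python
def temporal_split(samples, train=0.6, val=0.2):
    """Split (timestamp, item) pairs chronologically into train/val/test.

    A temporal split (rather than a random one) ensures the model is always
    evaluated on files first seen *after* everything it trained on, which
    better reflects deployment against previously unseen malware.
    """
    ordered = [item for _, item in sorted(samples, key=lambda p: p[0])]
    n = len(ordered)
    a, b = int(n * train), int(n * (train + val))
    return ordered[:a], ordered[a:b], ordered[b:]
```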

Diagram showing a DNN with pre-trained weights on natural images, and the last portion fine-tuned with new data

Finally, the performance of the system was measured and reported on the holdout test set. The metrics captured include recall at specific false positive range, along with accuracy, F1 score, and area under the receiver operating curve (ROC).
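The headline metric, recall at a specific false positive rate, can be computed from raw classifier scores without any libraries. A minimal sketch (our own implementation for illustration, not the study’s code):

```python
def recall_at_fpr(labels, scores, max_fpr):
    """Recall (TPR) at the most permissive threshold whose FPR <= max_fpr.

    labels: 1 = malicious, 0 = benign; scores: higher = more malicious.
    """
    neg = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    # How many negatives we may tolerate above the threshold.
    allowed = int(max_fpr * len(neg))
    # Set the threshold at the next-highest negative score, so at most
    # `allowed` negatives score strictly above it.
    threshold = neg[allowed] if allowed < len(neg) else float("-inf")
    flagged = sum(1 for s in pos if s > threshold)
    return flagged / len(pos)
```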

Findings

The joint research showed that applying STAMINA to a real-world holdout test set achieved a recall of 87.05% at a 0.1% false positive rate, and 99.66% recall and 99.07% accuracy at a 2.58% false positive rate overall. These results certainly encourage the use of deep transfer learning for malware classification. It accelerates training by bypassing the search for optimal hyperparameters and architectures, saving time and compute resources in the process.

The study also highlights the pros and cons of sample-based methods like STAMINA versus metadata-based classification methods. For example, STAMINA can go in-depth into samples and extract additional signals that might not be captured in the metadata. However, for larger applications, STAMINA becomes less effective due to limitations in converting billions of pixels into JPEG images and then resizing them. In such cases, metadata-based methods show advantages over sample-based ones.

Conclusion and future work

The use of deep learning methods for detecting threats drives a lot of innovation across Microsoft. The collaboration with Intel Labs researchers is just one of the ways in which Microsoft researchers and data scientists continue to explore novel ways to improve security overall.

This joint research is a good starting ground for more collaborative work. For example, the researchers plan to collaborate further on platform acceleration optimizations that can allow deep learning models to be deployed on client machines with minimal performance impact. Stay tuned.

 

Jugal Parikh, Marc Marino

Microsoft Threat Protection Intelligence Team

 

The post Microsoft researchers work with Intel Labs to explore new deep learning approaches for malware classification appeared first on Microsoft Security.

Protect your accounts with smarter ways to sign in on World Passwordless Day

May 7th, 2020 No comments

As the world continues to grapple with COVID-19, our lives have become increasingly dependent on digital interactions. Operating at home, we’ve had to rely on e-commerce, telehealth, and e-government to manage the everyday business of life. Our daily online usage has increased by over 20 percent. And if we’re fortunate enough to have a job that we can do from home, we’re accessing corporate apps from outside the company firewall.

Whether we’re signing into social media, mobile banking, or our workplace, we’re connecting via online accounts that require a username and password. The more we do online, the more accounts we have. It becomes a hassle to constantly create new passwords and remember them. So, we take shortcuts. According to a Ponemon Institute study, people reuse an average of five total passwords, both business and personal. This is one aspect of human nature that hackers bet on. If they get hold of one password, they know they can use it to pry open more of our digital lives. A single compromised password, then, can create a chain reaction of liability.

No matter how strong or complex a password is, it’s useless if a bad actor can socially engineer it away from us or find it on the dark web. Plus, passwords are inconvenient and a drain on productivity. People spend hours each year signing into applications and recovering or resetting forgotten usernames and passwords. This activity doesn’t make things more secure. It only drives up the costs of service desks.

People today are done with passwords

Users want something easier and more convenient. Administrators want something more secure. We don’t think anyone finds passwords a cause to celebrate. That’s why we’re helping organizations find smarter ways to sign in that users will love and hackers will hate. Our hope is that instead of World Password Day, we’ll start celebrating World Passwordless Day.

Passwordless Day Infographic

  • People reuse an average of five passwords across their accounts, both business and personal (Ponemon Institute survey/Yubico).
  • Average person has 90 accounts (Thycotic).
  • Fifty-five percent would prefer a method of protecting accounts that doesn’t involve passwords (Ponemon Institute survey/Yubico).
  • Sixty-seven percent of American consumers surveyed by Visa have used biometric authentication and prefer it to passwords.
  • Between 100 million and 150 million people use a passwordless method each month (Microsoft research, April 2020).

Since an average of one in every 250 corporate accounts is compromised each month, we know that relying on passwords isn’t a good enterprise defense strategy. As companies continue to add more business applications to their portfolios, the cost of passwords only goes up. In fact, companies are dedicating 30 to 60 percent of their support desk calls to password resets. Given how ineffective passwords can be, it’s surprising how many companies haven’t turned on multi-factor authentication (MFA) for their customers or employees.

Passwordless technology is here—and users are adopting it as the best experience for strong authentication. Last November at Microsoft Ignite, we shared that more than 100 million people were already signing in using passwordless methods each month. That number has now reached over 150 million.

We now have the momentum to push forward initiatives that increase security and reduce cost. New passwordless technologies give users the benefits of MFA in one gesture. To sign in securely with Windows Hello, all you have to do is show your face or press your finger. Microsoft has built support for passwordless authentication into our products and services, including Office, Azure, Xbox, and GitHub. You don’t even need to create a username anymore—you can use your phone number instead. Administrators can use single sign-on in Azure Active Directory (Azure AD) to enable passwordless authentication for an unlimited number of apps through native functionality in Windows Hello, the phone-as-a-token capabilities in the Microsoft Authenticator app, or security keys built using the FIDO2 open standards.

Of course, we would never advise our customers to try anything we haven’t tried ourselves. We’re always our own first customer. Microsoft’s IT team switched to passwordless authentication and now 90 percent of Microsoft employees sign in without entering a password. As a result, hard and soft costs of supporting passwords fell by 87 percent. We expect other customers will experience similar benefits in employee productivity, lower IT costs, and a stronger security posture. To learn more about our approach, watch the CISO spotlight episode with Bret Arsenault (Microsoft CISO) and me. By taking this approach 18 months ago, we were better set up for seamless, secure remote work during COVID-19.

For many of us, working from home will be a new norm for the foreseeable future. We see many opportunities for using passwordless methods to better secure digital accounts that people rely on every day. Whether you’re protecting an organization or your own digital life, every step towards passwordless is a step towards improving your security posture. Now let’s embrace the world of passwordless!


Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Protect your accounts with smarter ways to sign in on World Passwordless Day appeared first on Microsoft Security.

How to gain 24/7 detection and response coverage with Microsoft Defender ATP

May 6th, 2020 No comments

This blog post is part of the Microsoft Intelligence Security Association guest blog series. To learn more about MISA, go here.

Whether you’re a security team of one or a dozen, detecting and stopping threats around the clock is a challenge. Security incidents don’t happen exclusively during business hours: attackers often wait until the late hours of the night to breach an environment.

At Red Canary, we work with security teams of all shapes and sizes to improve detection and response capabilities. Our Security Operations Team investigates threats in customer environments 24/7/365, removes false positives, and delivers confirmed threats with context. We’ve seen teams run into a wide range of issues when trying to establish after-hours coverage on their own, including:

  • For global enterprises, around-the-clock monitoring can significantly increase the pressure on a U.S.–based security team. If you have personnel around the world, a security team in a single time zone isn’t sufficient to cover the times that computing assets are used in those environments.
  • In smaller companies that don’t have global operations, the security team is more likely to be understaffed and unable to handle 24/7 security monitoring without stressful on-call schedules.
  • For the security teams of one, being “out of office” is a foreign concept. You’re always on. And you need to set up some way to monitor the enterprise while you’re away.

Microsoft Defender Advanced Threat Protection (ATP) is an industry-leading endpoint security solution that’s built into Windows, with extended capabilities for Mac and Linux servers. Red Canary unlocks the telemetry delivered from Microsoft Defender ATP and investigates every alert, enabling you to immediately increase your detection coverage and waste no time on false positives.

Here’s how those who haven’t started with Red Canary yet can answer the question, “How can I support my 24/7 security needs with Microsoft Defender ATP?”

No matter how big your security team is, the most important first step is notifying the right people based on an on-call schedule. In this post, we’ll describe two different ways of getting Microsoft Defender ATP alerts to your team 24×7 and how Red Canary has implemented this for our customers.

Basic 24/7 via email

Microsoft Defender Security Center allows you to send all Microsoft Defender ATP alerts to an email address. You can set up email alerts under Settings → Alert notifications.


Email notification settings in Microsoft Defender Security Center.

These emails will be sent to your team and should be monitored for high-severity situations after hours.

If sent to a ticketing system, these emails can trigger tickets or after-hours pages to be created for your security team. We recommend limiting the alerts to medium and high severity so that you won’t be bothered for informational or low alerts.


Setting up alert emails in Microsoft Defender ATP to be sent to a ticketing system.

Now any future alerts will create a new ticket in your ticketing system, where you can assign security team members to on-call rotations and notify on-call personnel of new alerts (if supported). Once a notification is received, on-call personnel can log into the Microsoft Defender Security Center for further investigation and triage.

Enhanced 24/7 via APIs

What if you want to ingest alerts to a system that doesn’t use email? You can do this by using the Microsoft Defender ATP APIs. First, you’ll need to have an authentication token. You can get the token like we do here:


API call to retrieve authentication token.
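In code form, the token call follows the Azure AD client-credentials flow documented for the Defender ATP API. The sketch below uses only the standard library; the tenant ID, app ID, and secret are placeholders you would supply from your own Azure AD app registration.

```python
import json
import urllib.parse
import urllib.request

RESOURCE = "https://api.securitycenter.microsoft.com"

def build_token_request(tenant_id, app_id, app_secret):
    """Build the OAuth2 client-credentials request for a Defender ATP token."""
    url = "https://login.microsoftonline.com/%s/oauth2/token" % tenant_id
    body = urllib.parse.urlencode({
        "resource": RESOURCE,               # audience: the Defender ATP API
        "client_id": app_id,
        "client_secret": app_secret,
        "grant_type": "client_credentials",
    }).encode()
    return url, body

def get_token(tenant_id, app_id, app_secret):
    url, body = build_token_request(tenant_id, app_id, app_secret)
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return json.load(resp)["access_token"]
```

Store the returned token securely; it is a bearer credential for every subsequent API call.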

Once you’ve stored the authentication token you can use it to poll the Microsoft Defender ATP API and retrieve alerts from Microsoft Defender ATP. Here’s an example of the code to pull new alerts.


API call to retrieve alerts from Microsoft Defender ATP.
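A minimal standard-library version of that polling call might look like the following. The OData filter on alertCreationTime mirrors the documented alerts API; the helper names are our own.

```python
import json
import urllib.request

ALERTS_URL = "https://api.securitycenter.microsoft.com/api/alerts"

def build_alerts_request(token, since_iso):
    """Build a GET request for alerts created after `since_iso` (UTC ISO-8601)."""
    url = "%s?$filter=alertCreationTime+gt+%s" % (ALERTS_URL, since_iso)
    return urllib.request.Request(url, headers={
        "Authorization": "Bearer %s" % token,
        "Accept": "application/json",
    })

def fetch_new_alerts(token, since_iso):
    with urllib.request.urlopen(build_alerts_request(token, since_iso)) as resp:
        # OData collections return results in the "value" array.
        return json.load(resp)["value"]
```

Run this on a schedule (for example, every few minutes) and feed the returned alerts into whatever system your team monitors.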

The API only returns a subset of the data associated with each alert. Here’s an example of what you might receive.


Example of a Microsoft Defender ATP alert returned from the API.

You can then take this data and ingest it into any of your internal tools. You can learn more about how to access Microsoft Defender ATP APIs in the documentation. Please note, the limited information included in an alert email or API response is not enough to triage the behavior. You will still need to log into the Microsoft Defender Security Center to find out what happened and take appropriate action.

24/7 with Red Canary

By enabling Red Canary, you supercharge your Microsoft Defender ATP deployment by adding a proven 24×7 security operations team who are masters at finding and stopping threats, and an automation platform to quickly remediate and get back to business.

Red Canary continuously ingests all of the raw telemetry generated from your instance of Microsoft Defender ATP as the foundation for our service. We also ingest and monitor Microsoft Defender ATP alerts. We then apply thousands of our own proprietary analytics to identify potential threats that are sent 24/7 to a Red Canary detection engineer for review.

Here’s an overview of the process (to go behind the scenes of these operations check out our detection engineering blog series):


Managed detection and response with Red Canary.

Red Canary is monitoring your Microsoft Defender ATP telemetry and alerts. If anything is a confirmed threat, our team creates a detection and sends it to you using a built-in automation framework that supports email, SMS, phone, Microsoft Teams/Slack, and more. Below is an example of what one of those detections might look like.


Red Canary confirms threats and prioritizes them so you know what to focus on.

At the top of the detection timeline you’ll receive a short description of what happened. The threat has already been examined by a team of detection engineers from Red Canary’s Cyber Incident Response Team (CIRT), so you don’t have to worry about triage or investigation. As you scroll down, you can quickly see the results of the investigation that Red Canary’s senior detection engineers have done on your behalf, including detailed notes that provide context to what’s happening in your environment:


Notes from Red Canary senior detection engineers (in light blue) provide valuable context.

You’re only notified of true threats and not false positives. This means you can focus on responding rather than digging through data to figure out what happened.

What if you don’t want to be woken up, you’re truly unavailable, or you just want bad stuff immediately dealt with? Use Red Canary’s automation to handle remediation on the fly. You and your team can create playbooks in your Red Canary portal to respond to threats immediately, even if you’re unavailable.


Red Canary automation playbook.

This playbook allows you to isolate the endpoint (using the Machine Action resource type in the Microsoft Defender ATP APIs) if Red Canary identifies suspicious activity. You also have the option to set up Automate playbooks that depend on an hourly schedule. For example, you may want to approve endpoint isolation during normal work hours, but use automatic isolation overnight:


Red Canary Automate playbook to automatically remediate a detection.
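For reference, the isolation step such a playbook ultimately performs goes through the documented Machine Action API. A hedged standard-library sketch (helper names are illustrative; per the API documentation, IsolationType may be "Full" or "Selective"):

```python
import json
import urllib.request

API_BASE = "https://api.securitycenter.microsoft.com/api"

def build_isolate_request(token, machine_id, comment, isolation_type="Full"):
    """Build the POST request that isolates a machine via the Machine Action API."""
    body = json.dumps({
        "Comment": comment,                 # audit trail for the action
        "IsolationType": isolation_type,    # "Full" or "Selective"
    }).encode()
    url = "%s/machines/%s/isolate" % (API_BASE, machine_id)
    return urllib.request.Request(url, data=body, headers={
        "Authorization": "Bearer %s" % token,
        "Content-Type": "application/json",
    })

def isolate_machine(token, machine_id, comment):
    with urllib.request.urlopen(build_isolate_request(token, machine_id, comment)) as resp:
        return json.load(resp)              # the created machine action record
```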

Getting started with Red Canary

Whether you’ve been using Microsoft Defender ATP since its preview releases or you’re just getting started, Red Canary is the fastest way to accelerate your security operations program. Immediate onboarding, increased detection coverage, and a 24/7 CIRT team are all at your fingertips.

Terence Jackson, CISO at Thycotic and Microsoft Defender ATP user, describes what it’s like working with Red Canary:

“I have a small team that has to protect a pretty large footprint. I know the importance of detecting, preventing, and stopping problems at the entry point, which is typically the endpoint. We have our corporate users but then we also have SaaS customers we have to protect. Currently my team tackles both, so for me it’s simply having a trusted partner that can take the day-to-day hunting/triage/elimination of false positives and only provide actionable alerts/intel, which frees my team up to do other critical stuff.”

Red Canary is the fastest way to enhance your detection coverage from Microsoft Defender ATP so you know exactly when and where to respond.

Contact us to see a demo and learn more.

The post How to gain 24/7 detection and response coverage with Microsoft Defender ATP appeared first on Microsoft Security.