Azure Security Benchmark—90 security and compliance best practices for your workloads in Azure

January 23rd, 2020

The Azure security team is pleased to announce that the Azure Security Benchmark v1 (ASB) is now available. ASB is a collection of over 90 security best-practice recommendations you can employ to increase the overall security and compliance of all your workloads in Azure.

The ASB controls are based on industry standards and best practices, such as those from the Center for Internet Security (CIS). In addition, ASB preserves the value provided by industry-standard control frameworks that have an on-premises focus and makes them more cloud-centric. This enables you to apply standard security control frameworks to your Azure deployments and extend security governance practices to the cloud.

ASB v1 includes 11 security controls inspired by, and mapped to, the CIS 7.1 control framework. Over time we’ll add mappings to other frameworks, such as NIST.

ASB also improves the consistency of security documentation across Azure services by representing every security recommendation in the same common format.

ASB includes the following controls:

  • Network security
  • Logging and monitoring
  • Identity and access control
  • Data protection
  • Vulnerability management
  • Inventory and asset management
  • Secure configuration
  • Malware defense
  • Data recovery
  • Incident response
  • Penetration tests and red team exercises

Documentation for each of the controls contains mappings to industry standard benchmarks (such as CIS), details/rationale for the recommendations, and link(s) to configuration information that will enable the recommendation.

Image showing protection of critical web applications. Azure ID, CIS IDs, and Responsibility.

You can find the full set of controls and the recommendations at the Azure Security Benchmark website. To learn more, see Microsoft intelligent security solutions.

Image of Azure security benchmarks documentation in the Azure security center.

ASB is integrated with Azure Security Center, allowing you to track, report, and assess your compliance against the benchmark from the Security Center compliance dashboard, where ASB appears as a tab like the ones shown below. In addition, ASB affects the Secure Score of your subscriptions in Azure Security Center.

Image showing regulatory compliance standards in the Azure security center.

ASB is the foundation for future Azure service security baselines, which will provide a view of benchmark recommendations that are contextualized for each Azure service. This will make it easier for you to implement the ASB for the Azure services that you’re actually using. Also, keep an eye out for our release of mappings to NIST and other security frameworks.

Send us your feedback

We welcome your feedback on ASB! Please complete the Azure Security Benchmark feedback form. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Azure Security Benchmark—90 security and compliance best practices for your workloads in Azure appeared first on Microsoft Security.

Categories: Azure Security, Compliance, Secure Score

Microsoft and Zscaler help organizations implement the Zero Trust model

January 23rd, 2020

While digital transformation is critical to business innovation, delivering security to cloud-first, mobile-first architectures requires rethinking traditional network security solutions. Some businesses have made this transition successfully, while others remain at risk of very costly breaches.

MAN Energy Solutions, a leader in the marine, energy, and industrial sectors, has been driving cloud transformation across their business. As with any transformation, there were challenges—as they began to adopt cloud services, they quickly realized that the benefits of the cloud would be offset by poor user experience, increasing appliance and networking costs, and an expanded attack surface.

In 2017, MAN Energy Solutions implemented “Blackcloud”—an initiative that establishes secure, one-to-one connectivity between each user and the specific private apps that the user is authorized to access, without ever placing the user on the larger corporate network. A virtual private network (VPN) is no longer necessary to connect to these apps. This mitigates lateral movement of bad actors or malware.

This approach is based on the Zero Trust security model.

Understanding the Zero Trust model

In 2019, Gartner released a Market Guide describing its Zero Trust Network Access (ZTNA) model and making a strong case for its efficacy in connecting employees and partners to private applications, simplifying mergers, and scaling access. Sometimes referred to as software-defined perimeter, the ZTNA model includes a “broker” that mediates connections between authorized users and specific applications.

The Zero Trust model grants application access based on the identity and context of the user, such as date/time, geolocation, and device posture, evaluated in real time. It empowers the enterprise to limit access to private apps to only the specific users who need them and pose no risk. Any change in the user’s context affects the trust posture and hence the user’s ability to access the application.

Access is governed by policy and enabled by two end-to-end encrypted, outbound micro-tunnels that are spun up on demand (rather than the static IP tunnels of a VPN) and stitched together by the broker. This ensures apps are never exposed to the internet, thus helping to reduce the attack surface.
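
Conceptually, the broker’s decision reduces to evaluating the user’s identity and request context against policy before any micro-tunnel is stitched together. The following minimal Python sketch illustrates that evaluation (the entitlement table, context fields, and risk threshold are illustrative assumptions, not any vendor’s actual API):

  from dataclasses import dataclass

  @dataclass
  class AccessContext:
      user: str
      app: str
      geolocation: str
      device_compliant: bool
      risk_score: float  # 0.0 (low) to 1.0 (high), derived from real-time signals

  # Hypothetical entitlement table: which users may reach which private apps.
  ENTITLEMENTS = {("alice", "payroll-app"), ("bob", "erp-app")}

  def broker_allows(ctx: AccessContext) -> bool:
      """Grant access only to an entitled user on a compliant device whose
      current context keeps the evaluated risk below the policy threshold."""
      if (ctx.user, ctx.app) not in ENTITLEMENTS:
          return False
      if not ctx.device_compliant:
          return False
      return ctx.risk_score < 0.5  # re-evaluated on every request, not once per session

  # A change in context, such as a risky sign-in raising the score, revokes access.
  print(broker_allows(AccessContext("alice", "payroll-app", "DE", True, 0.1)))  # True
  print(broker_allows(AccessContext("alice", "payroll-app", "XX", True, 0.9)))  # False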

As enterprises witness and respond to the impact of increasingly lethal malware, they’re beginning to transition to the Zero Trust model with pilot initiatives, such as securing third-party access, simplifying M&As and divestitures, and replacing aging VPN clients. Based on the 2019 Zero Trust Adoption Report by Cybersecurity Insiders, 59 percent of enterprises plan to embrace the Zero Trust model within the next 12 months.

Implement the Zero Trust model with Microsoft and Zscaler

Different organizational requirements, existing technology implementations, and security stages affect how the Zero Trust model implementation takes place. Integration between multiple technologies, like endpoint management and SIEM, helps make implementations simple, operationally efficient, and adaptive.

Microsoft has built deep integrations with Zscaler—a cloud-native, multitenant security platform—to help organizations with their Zero Trust journey. These technology integrations empower IT teams to deliver a seamless user experience and scalable operations as needed, and include:

Azure Active Directory (Azure AD)—Enterprises can leverage powerful authentication tools—such as Multi-Factor Authentication (MFA), conditional access policies, risk-based controls, and passwordless sign-in—offered by Microsoft, natively with Zscaler. Additionally, SCIM integration keeps user access current: when a user is terminated, the privilege change flows automatically to the Zscaler cloud, where immediate action can be taken based on the update.

Microsoft Endpoint Manager—With Microsoft Endpoint Manager, client posture can be evaluated at the time of sign-in, enabling Zscaler to allow or deny access based on the security posture. Microsoft Endpoint Manager can also be used to install and configure the Zscaler app on managed devices.

Azure Sentinel—Zscaler’s Nanolog Streaming Service (NSS) can seamlessly integrate with Azure to forward detailed transactional logs to the Azure Sentinel service, where they can be used for visualization and analytics, as well as threat hunting and security response.
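
To make the SCIM flow described above concrete: deprovisioning a user is expressed as a standard SCIM 2.0 PATCH that sets the account’s active attribute to false. A minimal sketch follows (the endpoint and token source are hypothetical, and in practice Azure AD’s provisioning service issues these requests automatically):

  import os

  import requests

  SCIM_BASE = "https://scim.example.com/v2"  # hypothetical SCIM 2.0 endpoint
  TOKEN = os.environ["SCIM_TOKEN"]           # bearer token issued by the provider

  def deactivate_user(user_id: str) -> None:
      """Send a SCIM 2.0 PATCH setting active=false; the receiving service
      treats the change as an immediate revocation of the user's access."""
      patch = {
          "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
          "Operations": [{"op": "replace", "path": "active", "value": False}],
      }
      resp = requests.patch(
          f"{SCIM_BASE}/Users/{user_id}",
          json=patch,
          headers={"Authorization": f"Bearer {TOKEN}"},
          timeout=10,
      )
      resp.raise_for_status()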

Implementation of the Zscaler solution involves deploying lightweight gateway software on endpoints and in front of the applications in AWS and/or Azure. Per policies defined in Microsoft Endpoint Manager, Zscaler creates secure segments between the user devices and apps through the Zscaler security cloud, where brokered micro-tunnels are stitched together in the location closest to the user.

Infographic showing Zscaler Security and Policy Enforcement. Internet Destinations and Private Apps appear in clouds. Azure Sentinel, Microsoft Endpoint Manager, and Azure Active Directory appear to the right and left. In the center is a PC.

If you’d like to learn more about secure access to hybrid apps, view the webinar on Powering Fast and Secure Access to All Apps with experts from Microsoft and Zscaler.

Rethink security for the cloud-first, mobile-first world

The advent of cloud-based apps and increasing mobility are key drivers forcing enterprises to rethink their security model. According to Gartner’s Market Guide for Zero Trust Network Access (ZTNA), “by 2023, 60 percent of enterprises will phase out most of their remote access VPNs in favor of ZTNA.” Successful implementation depends on using the correct approach. I hope the Microsoft-Zscaler partnership and platform integrations help you adopt the Zero Trust approach as you look to transform your business to the cloud.

For more information on the Zero Trust model, visit the Microsoft Zero Trust page. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Microsoft and Zscaler help organizations implement the Zero Trust model appeared first on Microsoft Security.

Access Misconfiguration for Customer Support Database

January 22nd, 2020

Today, we concluded an investigation into a misconfiguration of an internal customer support database used for Microsoft support case analytics. While the investigation found no malicious use, and although most customers did not have personally identifiable information exposed, we want to be transparent about this incident with all customers and reassure them that we are taking …


The post Access Misconfiguration for Customer Support Database appeared first on Microsoft Security Response Center.

Categories: Misconfiguration

sLoad launches version 2.0, Starslord

January 21st, 2020

sLoad, the PowerShell-based Trojan downloader notable for its almost exclusive use of the Windows BITS service for malicious activities, has launched version 2.0. The new version comes on the heels of a comprehensive blog we published detailing the malware’s multi-stage nature and use of BITS as an alternative protocol for data exfiltration and other behaviors.

With the new version, sLoad has added the ability to track the stage of infection on every affected machine. Version 2.0 also packs an anti-analysis trick that can identify analyst machines and separate them from actual infected machines.

We’re calling the new version “Starslord” based on strings in the malware code, which has clues indicating that the name “sLoad” may have been derived from a popular comic book superhero.

We discovered the new sLoad version over the holidays, in our continuous monitoring of the malware. New sLoad campaigns that use version 2.0 follow an attack chain similar to the previous version’s, with some updates, including the removal of the dynamic list of command-and-control (C2) servers and of screenshot uploads.

Tracking the stage of infection

With the ability to track the stage of infection, malware operators with access to the Starslord backend could build a detailed view of infections across affected machines and segregate these machines into different groups.

The tracking mechanism exists in the final stage, which, as with the old version, loops infinitely (with a sleep interval of 2,400 seconds, up from 1,200 seconds in version 1.0). In line with the previous version, at every iteration of the final stage, the malware uses a download BITS job to exfiltrate stolen system information and receive additional payloads from the active C2 server.

As we noted in our previous blog, creating a BITS job with an extremely large RemoteURL parameter that includes non-encrypted system information, as the old sLoad version did, stands out and is relatively easy to detect. However, with Starslord, the system information is encoded into Base64 data before being exfiltrated.
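
Schematically, the change amounts to one encoding step applied to the system information packed into the job’s RemoteURL. Here is a Python sketch of the idea (the malware builds this in PowerShell, and the URL layout and parameter name are illustrative):

  import base64

  def build_exfil_url(c2: str, system_info: str) -> str:
      """Base64-encode harvested system information and embed it in the
      RemoteURL of a BITS download job, as Starslord does."""
      blob = base64.b64encode(system_info.encode("utf-8")).decode("ascii")
      return f"https://{c2}/doc/?d={blob}"  # illustrative path and parameter name

  print(build_exfil_url("c2.example.test", "host=WS01;user=alice"))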

The file received by Starslord in response to the exfiltration BITS job contains a tuple of three values separated by an asterisk (*):

  • Value #1 is a URL to download additional payload using a download BITS job
  • Value #2 specifies the action, which can be any of the following, to be taken on the payload downloaded from the URL in value #1:
    • “eval” – Run (possibly very large) PowerShell scripts
    • “iex” – Load and invoke (possibly small) PowerShell code
    • “run” – Download an encoded PE file, decode it using certutil.exe, and run the decoded executable
  • Value #3 is an integer that can signify the stage of infection for the machine

Supplying the payload URL as part of value #1 allows the malware infrastructure to house additional payloads on servers separate from the active C2 servers that respond to the exfiltration BITS jobs.

Value #3 is the most noteworthy component in this setup. If the final stage succeeds in downloading the additional payload using the URL provided in value #1 and executing it as specified by the command in value #2, then the malware forms the string “td”:”<value #3>”,”tds”:”3”. However, if the final stage fails to download and execute the payload, then the string formed is “td”:”<value #3>”,”tds”:”4”.
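
To make the mechanics concrete, the tuple handling and status reporting can be rendered in Python as follows (the real implementation is PowerShell; the download and execution steps are stubbed out):

  def download_via_bits(url: str) -> bytes:
      # Stub for the download BITS job the final stage would create.
      return b"<payload bytes>"

  def execute(payload: bytes, action: str) -> None:
      # Stub for the three supported actions: "eval", "iex", or "run".
      if action not in {"eval", "iex", "run"}:
          raise ValueError(f"unknown action: {action}")

  def handle_c2_response(response: str) -> str:
      """Parse the '<url>*<action>*<stage>' tuple and build the 'td'/'tds'
      status string reported in the next exfiltration BITS job."""
      url, action, stage = response.split("*")
      try:
          execute(download_via_bits(url), action)
          return f'"td":"{stage}","tds":"3"'  # payload downloaded and executed
      except Exception:
          return f'"td":"{stage}","tds":"4"'  # download or execution failed

  print(handle_c2_response("hxxps://c2.example/p.jpg*eval*2"))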

The infinite loop ensures that the exfiltration BITS jobs are created at a fixed interval, so the backend infrastructure can pick up the pulse from each infected machine. Unlike the previous version, Starslord includes this string in succeeding iterations of data exfiltration, which means the malware infrastructure is always aware of the exact stage of infection for a specific affected machine. In addition, since the numeric value for value #3 in the tuple is always governed by the malware infrastructure, malware operators can compartmentalize infected hosts and could potentially set off individual groups on unique infection paths. For example, when responding to exfiltration BITS jobs, malware operators can specify a different URL (value #1) and action (value #2) for each numeric value of value #3, essentially deploying a different malware payload to different groups.

Anti-analysis trap

Starslord comes built in with a function named checkUniverse, which is in fact an anti-analysis trap.

As mentioned in our previous blog post, the final stage of sLoad is a piece of PowerShell code obtained by decoding one of the dropped .ini files. The PowerShell code appears in the memory as a value assigned to a variable that is then executed using the Invoke-Expression cmdlet. Because this is a huge piece of decrypted PowerShell code that never hits the disk, security researchers would typically dump it into a file on the disk for further analysis.

The sLoad dropper PowerShell script drops four files:

  • a randomly named .tmp file
  • a randomly named .ps1 file
  • a main.ini file
  • a domain.ini file

It then creates a scheduled task to run the .tmp file every 3 minutes, similar to the previous version. The .tmp file is a proxy that does nothing but run the .ps1 file, which decrypts the contents of main.ini into the final stage. The final stage then decrypts the contents of domain.ini to obtain the active C2 and performs other activities as documented.

As a unique anti-analysis trap, Starslord ensures that the .tmp and .ps1 files have the same random name. When an analyst dumps the decrypted code of the final stage into a file in the same folder as the .tmp and .ps1 files, the analyst could end up naming it something other than the original random name. When this dumped code is run from such a differently named file on the disk, a function named checkUniverse returns the value 1, and the analyst gets trapped.
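
The trap boils down to a filename self-check. A schematic Python analogue of checkUniverse follows (the actual check is written in PowerShell, and the expected name is generated randomly at infection time):

  import os
  import sys

  EXPECTED_NAME = "kx3q9f"  # illustrative random name shared by the dropped .tmp/.ps1 files

  def check_universe() -> int:
      """Return 1 (analyst trapped) when the running script's base name no
      longer matches the random name the dropper assigned, else 0."""
      current = os.path.splitext(os.path.basename(sys.argv[0]))[0]
      return 0 if current == EXPECTED_NAME else 1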

What comes next is not very desirable for a security researcher: being profiled by the malware operator.

If the host belongs to a trapped analyst, the file downloaded from the backend in response to the exfiltration BITS job, if any, is discarded and overwritten by the following new tuple:

hxxps://<active C2>/doc/updx2401.jpg*eval*-1

In this case, the value #1 of the tuple is a URL that’s known to the backend for being associated with trapped hosts. BITS jobs from trapped hosts don’t always get a response. If they do, it’s a copy of the dropper PowerShell script. This could be to create an illusion that the framework is being updated as the URL in value #1 of the tuple suggests (hxxps://<active C2>/doc/updx2401.jpg).

However, the string included in all successive exfiltration BITS jobs from such a host is “td”:”-1”,”tds”:”3”, eventually leading to all such hosts being grouped under the value “td”:”-1”. This forms the group of all trapped machines that are never delivered a payload. For the rest, evidence so far suggests that Starslord has been intermittently delivering the file infector Ramnit.

Durable protection against evolving malware

sLoad’s multi-stage attack chain, use of mutated intermediate scripts and BITS as an alternative protocol, and its polymorphic nature in general make it a piece of malware that can be quite tricky to detect. Now, it has evolved into a new and polished version, Starslord, which retains sLoad’s most basic capabilities but does away with spyware capabilities in favor of new and more powerful features, posing an even higher risk.

Starslord can track and group affected machines based on the stage of infection, which can allow for unique infection paths. Interestingly, given the distinct reference to a fictional superhero, these groups can be thought of as universes in a multiverse. In fact, the malware uses a function called checkUniverse to determine if a host is an analyst machine.

Microsoft Threat Protection defends customers from sophisticated and continuously evolving threats like sLoad using multiple industry-leading security technologies that protect various attack surfaces. Through signal-sharing across multiple Microsoft services, Microsoft Threat Protection delivers comprehensive protection for identities, endpoints, data, apps, and infrastructure.

On endpoints, behavioral blocking and containment capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) ensure durable protection against evolving threats. Through cloud-based machine learning and data science informed by threat research, Microsoft Defender ATP can spot and stop malicious behaviors from threats, both old and new, in real time.

Sujit Magar

Microsoft Defender ATP Research Team

The post sLoad launches version 2.0, Starslord appeared first on Microsoft Security.

How companies can prepare for a heightened threat environment

January 20th, 2020

With high levels of political unrest in various parts of the world, it’s no surprise we’re also in a period of increased cyber threats. In the past, a company’s name, political affiliations, or religious affiliations might push the risk needle higher. However, in the current environment any company could be a potential target for a cyberattack. Companies of all shapes, sizes, and varying security maturity are asking what they could and should be doing to ensure their safeguards are primed and ready. To help answer these questions, I created a list of actions companies can take and controls they can validate in light of the current level of threats—and during any period of heightened risk—through the Microsoft lens:

  • Implement Multi-Factor Authentication (MFA)—It simply cannot be said enough—companies need MFA. The security posture at many companies is hanging by the thread of passwords that are weak, shared across social media, or already for sale. MFA is now the standard authentication baseline and is critical to basic cyber hygiene. If real estate is “location, location, location,” then cybersecurity is “MFA, MFA, MFA.” To learn more, read How to implement Multi-Factor Authentication (MFA).
  • Update patching—Check your current patch status across all environments. Make every attempt to patch all vulnerabilities and focus on those with medium or higher risk if you must prioritize. Patching is critically important as the window between discovery and exploit of vulnerabilities has shortened dramatically. Patching is perhaps your most important defense and one that, for the most part, you control. (Most attacks utilize known vulnerabilities.)
  • Manage your security posture—Check your Secure Score and Compliance Score for Office 365, Microsoft 365, and Azure. Also, take steps to resolve all open recommendations. These scores will help you to quickly assess and manage your configurations. See “Resources and information for detection and mitigation strategies” below for additional information. (Manage your scores over time and use them as a monitoring tool for unexpected consequences from changes in your environment.)
  • Evaluate threat detection and incident response—Increase your threat monitoring and anomaly detection activities. Evaluate your incident response from an attacker’s perspective. For example, attackers often target credentials. Is your team prepared for this type of attack? Are you able to engage left of impact? Consider conducting a tabletop exercise to explore how your organization might be targeted specifically.
  • Resolve testing issues—Review recent penetration test findings and validate that all issues were closed.
  • Validate distributed denial of service (DDoS) protection—Does your organization have the protection it needs to maintain stable access to your applications during a DDoS attack? These attacks have continued to grow in frequency, size, sophistication, and impact. They are often utilized as a “cyber smoke screen” to mask infiltration attacks. Your DDoS protection should be always on, automated for network layer mitigation, and capable of near real-time alerting and telemetry.
  • Test your resilience—Validate your backup strategies and plans, ensuring offline copies are available. Review your most recent test results and conduct additional testing if needed. If you’re attacked, your offline backups may be your strongest or only lifeline. (Our incident response teams often find companies are surprised to discover their backup copies were accessible online and were either encrypted or destroyed by the attacker.)
  • Prepare for incident response assistance—Validate you have completed any necessary due diligence and have appropriate plans to secure third-party assistance with responding to an incident/attack. (Do you have a contract ready to be signed? Do you know who to call? Is it clear who will decide help is necessary?)
  • Train your workforce—Provide a new/specific round of training and awareness information for your employees. Make sure they’re vigilant about not clicking unusual links in emails and messages or visiting unusual or risky URLs/websites, and that they use strong passwords. Emphasize that protecting your company contributes to the protection of the financial ecosystem and is a matter of national security.
  • Evaluate physical security—Step up validation of physical IDs at entry points. Ensure physical reviews of your external perimeter at key offices and datacenters are being carried out and are alert to unusual indicators of access attempts or physical attacks. (The “see something/say something” rule is critically important.)
  • Coordinate with law enforcement—Verify you have the necessary contact information for your local law enforcement, as well as for your local FBI office/agent (federal law enforcement). (Knowing who to call and how to reach them is a huge help in a crisis.)

The hope, of course, is there will not be any action against any company. Taking the actions noted above is good advice for any threat climate—but particularly in times of increased risk. Consider creating a checklist template you can edit as you learn new ways to lower your risk and tighten your security. Be sure to share your checklist with industry organizations such as FS-ISAC. Finally, if you have any questions, be sure to reach out to your account team at Microsoft.

Resources and information for detection and mitigation strategies

In addition, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

About the author

Lisa Lee is a former U.S. banking regulator who helped financial institutions of all sizes prepare their defenses against cyberattacks and reduce their threat landscape. In her current role with Microsoft, she advises Chief Information Security Officers (CISOs) and other senior executives at large financial services companies on cybersecurity, compliance, and identity. She utilizes her unique background to share insights about preparing for the current cyber threat landscape.

The post How companies can prepare for a heightened threat environment appeared first on Microsoft Security.

Changing the monolith—Part 2: Whose support do you need?

January 16th, 2020

In Changing the monolith—Part 1: Building alliances for a secure culture, I explored how security leaders can build alliances and why a commitment to change must be signaled from the top. But whose support should you recruit in the first place? In Part 2, I address considerations for the cybersecurity team itself, the organization’s business leaders, and the employees whose buy-in is critical.

Build the right cybersecurity team

It could be debated that the concept of a “deep generalist” is an oxymoron. The analogy I frequently find myself making is you would never ask a dermatologist to perform a hip replacement. A hip replacement is best left to an orthopedic surgeon who has many hours of hands-on experience performing hip replacements. This does not lessen the importance of the dermatologist, who can quickly identify and treat potentially lethal diseases such as skin cancer.

Similarly, not every cybersecurity and privacy professional is deep in all subjects such as governance, technology, law, organizational dynamics, and emotional intelligence. No person is born a specialist.

If you are looking for someone who is excellent at threat prevention, detection, and incident response, hire someone who specializes in those specific tasks and has demonstrated experience and competency. Likewise, be cautious of promoting cybersecurity architects to the role of Chief Information Security Officer (CISO) if they have not demonstrated strategic leadership with the social aptitude to connect with other senior leaders in the organization. CISOs, after all, are not technology champions as much as they are business leaders.

Keep business leaders in the conversation

Leaders can enhance their organizations’ security stance by sending a top-down message across all business units that “security begins with me.” One way to send this message is to regularly brief the executive team and the board on cybersecurity and privacy risks.

Image of three coworkers working at a desk in an office.

Keep business leaders accountable for security.

These should not be product status reports, but briefings on key performance indicators (KPIs) of risk. Business leaders must help define what the organization considers to be its top risks.

Here are three ways to guide these conversations:

  1. Evaluate the existing cyber-incident response plan within the context of the overall organization’s business continuity plan. Elevate cyber-incident response plans to account for major outages, severe weather, civil unrest, and epidemics—which all place similar, if not identical, stresses on the business. Ask leadership what they believe the “crown jewels” to be, so you can prioritize your approach to data protection. The team responsible for identifying the “crown jewels” should include senior management from the lines of business and administrative functions.
  2. Review the cybersecurity budget with a business case and a strategy in mind. Many times, security budgets take a backseat to other IT or business priorities, resulting in companies being unprepared to deal with risks and attacks. An annual review of cybersecurity budgets tied to what looks like a “good fit” for the organization is recommended.
  3. Reevaluate cyber insurance on an annual basis and revisit its use and requirements for the organization. Ensure that it’s effective against attacks that could be considered “acts of war,” which might otherwise not be covered by the organization’s policy. Review your policy and ask: What happens if the threat actor was a nation state aiming for another nation state, placing your organization in the crossfire?

Gain buy-in through a frictionless user experience

“Shadow IT” is a persistent problem when there is no sanctioned way for users to collaborate with the outside world. Similarly, users save and hoard emails when an overly zealous data retention policy deletes their messages after 30 days.

Digital transformation introduces a sea change in how cybersecurity is implemented. It’s paramount to provide the user with the most frictionless user experience available, adopting mobile-first, cloud-first philosophies.

Ignoring the user experience in your change implementation plan will only lead users to identify clever ways to circumvent frustrating security controls. Look for ways to prioritize the user experience even while meeting security and compliance goals.

Incremental change versus tearing off the band-aid

Imagine slowly replacing the interior and exterior components of your existing vehicle one by one until you have a “new” car. It doesn’t make sense: You still have to drive the car, even while the replacements are being performed!

Similarly, I’ve seen organizations take this approach in implementing change, attempting to create a modern workplace over a long period of time. However, this draws out complex, multi-platform headaches for months and years, leading to user confusion, loss of confidence in IT, and lost productivity. You wouldn’t “purchase” a new car this way; why take this approach for your organization?

Rather than mixing old parts with new parts, you would save money, shop time, and operational (and emotional) complexity by simply trading in your old car for a new one.

Fewer organizations take this alternative approach of “tearing off the band-aid.” If the user experience is frictionless, more efficient, and enhances the ease of data protection, an organization’s highly motivated employee base will adapt much more easily.

Stay tuned and stay updated

Stay tuned for more! In my next installments, I will cover the topics of process and technology, respectively, and their role in changing the security monolith. Technology on its own solves nothing. What good are building supplies and tools without a blueprint? Similarly, process is the orchestration of the effort, and is necessary to enhance an organization’s cybersecurity, privacy, compliance, and productivity.

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Changing the monolith—Part 2: Whose support do you need? appeared first on Microsoft Security.

Categories: Compliance, cybersecurity

Introducing Microsoft Application Inspector

January 16th, 2020

Modern software development practices often involve building applications from hundreds of existing components, whether they’re written by another team in your organization, an external vendor, or someone in the open source community. Reuse has great benefits, including time-to-market, quality, and interoperability, but sometimes brings the cost of hidden complexity and risk.

You trust your engineering team, but the code they write often accounts for only a tiny fraction of the entire application. How well do you understand what all those external software components actually do? You may find that you’re placing as much trust in each of the thousands of contributors to those components as you place in your in-house engineering team.

At Microsoft, our software engineers use open source software to provide our customers high-quality software and services. Recognizing the inherent risks in trusting open source software, we created a source code analyzer called Microsoft Application Inspector to identify “interesting” features and metadata, such as the use of cryptography, connections to remote entities, and the platforms the code runs on.

Application Inspector differs from more typical static analysis tools in that it isn’t limited to detecting poor programming practices; rather, it surfaces interesting characteristics in the code that would otherwise be time-consuming or difficult to identify through manual introspection. It then simply reports what’s there, without judgement.

For example, consider a snippet of Python source code along the following lines:
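
  import subprocess
  import urllib.request

  # Download content from a remote URL (Network.Connection.Http).
  data = urllib.request.urlopen("https://example.com/archive.zip").read()

  # Write it to the file system (FileOperation.Write).
  with open("archive.zip", "wb") as f:
      f.write(data)

  # Run a shell command to list details of that file (Process.DynamicExecution).
  subprocess.run("ls -l archive.zip", shell=True)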

Here we can see a program that downloads content from a URL, writes it to the file system, and then executes a shell command to list details of that file. If we run this code through Application Inspector, we’ll see the following features identified, which tell us a lot about what it can do:

  • FileOperation.Write
  • Network.Connection.Http
  • Process.DynamicExecution

In this small example, it would be trivial to examine the snippet manually to identify those same features, but many components contain tens of thousands of lines of code, and modern web applications often use hundreds of such components. Application Inspector is designed to be used individually or at scale and can analyze millions of lines of source code from components built using many different programming languages. It’s simply infeasible to attempt to do this manually.

Application Inspector is positioned to help in key scenarios

We use Application Inspector to identify key changes to a component’s feature set over time (version to version), which can indicate anything from an increased attack surface to a malicious backdoor. We also use the tool to identify high-risk components and those with unexpected features that require additional scrutiny, under the theory that a vulnerability in a component that is involved in cryptography, authentication, or deserialization would likely have higher impact than others.
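
As a sketch, that version-to-version comparison can be reduced to a set difference over two JSON reports (the field name below is an assumption for illustration, not Application Inspector’s exact output schema):

  import json

  def feature_set(report_path: str) -> set:
      """Reduce a JSON report to the set of identified feature tags."""
      with open(report_path) as f:
          return set(json.load(f).get("tags", []))

  old_features = feature_set("component-v1.0.json")
  new_features = feature_set("component-v1.1.json")

  print("Added:", sorted(new_features - old_features))    # e.g., new crypto or network use
  print("Removed:", sorted(old_features - new_features))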

Using Application Inspector

Application Inspector is a cross-platform, command-line tool that can produce output in multiple formats, including JSON and interactive HTML. Here is an example of an HTML report:

Each icon in the report above represents a feature that was identified in the source code. That feature is expanded on the right-hand side of the report, and by clicking any of the links, you can view the source code snippets that contributed to that identification.

Each feature is also broken down into more specific categories and an associated confidence, which can be accessed by expanding the row.

Application Inspector comes with hundreds of feature detection patterns covering many popular programming languages, with good support for the following types of characteristics:

  • Application frameworks (development, testing)
  • Cloud / Service APIs (Microsoft Azure, Amazon AWS, and Google Cloud Platform)
  • Cryptography (symmetric, asymmetric, hashing, and TLS)
  • Data types (sensitive, personally identifiable information)
  • Operating system functions (platform identification, file system, registry, and user accounts)
  • Security features (authentication and authorization)

Get started with Application Inspector

Application Inspector can identify interesting features in source code, enabling you to better understand the software components that your applications use. Application Inspector is open source, cross-platform (.NET Core), and can be downloaded at github.com/Microsoft/ApplicationInspector. We welcome all contributions and feedback.

The post Introducing Microsoft Application Inspector appeared first on Microsoft Security.

Categories: Automation, Security Intelligence

Security baseline (FINAL) for Chromium-based Microsoft Edge, version 79

January 15th, 2020

Microsoft is pleased to announce the enterprise-ready release of the recommended security configuration baseline settings for the next version of Microsoft Edge based on Chromium, version 79. The settings recommended in this baseline are identical to the ones we recommended in the version 79 draft, minus one setting that we have removed and that we discuss below. We continue to welcome feedback through the Baselines Discussion site.

The baseline package is now available as part of the Security Compliance Toolkit. Like all our baseline packages, the downloadable baseline package includes importable GPOs, a script to apply the GPOs to local policy, a script to import the GPOs into Active Directory Group Policy, and all the recommended settings in spreadsheet form, as Policy Analyzer rules, and as GP Reports.

Microsoft Edge is being rebuilt with the open-source Chromium project, and many of its security configuration options are inherited from that project. These Group Policy settings are entirely distinct from those for the original version of Microsoft Edge built into Windows 10: they are in different folders in the Group Policy editor and they reference different registry keys. The Group Policy settings that control the new version of Microsoft Edge are located under “Administrative Templates\Microsoft Edge,” while those that control the current version of Microsoft Edge remain located under “Administrative Templates\Windows Components\Microsoft Edge.” You can download the latest policy templates for the new version of Microsoft Edge from the Microsoft Edge Enterprise landing page. To learn more about managing the new version of Microsoft Edge, see Configure Microsoft Edge for Windows.

As with our current Windows and Office security baselines, our recommendations for Microsoft Edge configuration follow a streamlined and efficient approach to baseline definition when compared with the baselines we published before Windows 10. The foundation of that approach is essentially this:

  • The baselines are designed for well-managed, security-conscious organizations in which standard end users do not have administrative rights.

  • A baseline enforces a setting only if it mitigates a contemporary security threat and does not cause operational issues that are worse than the risk it mitigates.

  • A baseline enforces a default only if it is otherwise likely to be set to an insecure state by an authorized user:

    • If a non-administrator can set an insecure state, enforce the default.

    • If setting an insecure state requires administrative rights, enforce the default only if it is likely that a misinformed administrator will otherwise choose poorly.

(For further explanation, see the “Why aren’t we enforcing more defaults?” section in this blog post.)

Version 79 of the Chromium-based version of Microsoft Edge has 216 enforceable Computer Configuration policy settings and another 200 User Configuration policy settings. Following our streamlined approach, our recommended baseline configures a grand total of eleven Group Policy settings. You can find full documentation in the download package’s Documentation subdirectory.

The one difference between this baseline and the version 79 draft is that we have removed the recommendation to disable “Force Microsoft Defender SmartScreen checks on downloads from trusted sources.” By default, SmartScreen will perform these checks. While performing checks on files from trusted sources increases the likelihood of false positives – particularly from intranet sources that host files that are seldom if ever seen in the outside world – we have decided not to apply that decision to all customers adopting our baseline. Depending on who can store files in locations that are considered “trusted sources” and the rigor they apply to restricting what gets stored there, internal sites might in fact end up hosting untrustworthy content that should be checked. Our baseline therefore neither enables nor disables the setting. Organizations choosing to disable this setting can therefore do so without contradicting our baseline recommendations.

Categories: Uncategorized

Announcing MSRC 2019 Q4 Security Researcher Leaderboard

January 15th, 2020

Following the first Security Researcher Quarterly Leaderboard we published in October 2019, we are excited to announce the MSRC Q4 2019 Security Researcher Leaderboard, which shows the top contributing researchers for the last quarter. In each quarterly leaderboard, we recognize the security researchers who ranked at or above the 95th percentile line based on the …


The post Announcing MSRC 2019 Q4 Security Researcher Leaderboard appeared first on Microsoft Security Response Center.

How to implement Multi-Factor Authentication (MFA)

January 15th, 2020

Another day, another data breach. If the regular drumbeat of leaked and phished accounts hasn’t persuaded you to switch to Multi-Factor Authentication (MFA) already, maybe the usual January rush of ‘back to work’ password reset requests is making you reconsider. When such an effective option for protecting accounts is available, why wouldn’t you deploy it straight away?

The problem is that deploying MFA at scale is not always straightforward. There are technical issues that may hold you up, but the people side is where you have to start. The eventual goal of an MFA implementation is to enable it for all your users on all of your systems all of the time, but you won’t be able to do that on day one.

To successfully roll out MFA, start by being clear about what you’re going to protect, decide what MFA technology you’re going to use, and understand what the impact on employees is going to be. Otherwise, your MFA deployment might grind to a halt amid complaints from users who run into problems while trying to get their job done.

Before you start on the technical side, remember that delivering MFA across a business is a job for the entire organization, from the security team to business stakeholders to IT departments, HR, corporate communications, and beyond, because it has to support all the business applications, systems, networks, and processes without affecting workflow.

Campaign and train

Treat the transition to MFA like a marketing campaign where you need to sell employees on the idea—as well as provide training opportunities along the way. It’s important for staff to understand that MFA is there to support them and protect their accounts and all their data, because that may not be their first thought when met with changes to the way they sign in to the tools they use every day. If you run an effective internal communications campaign that makes it clear to users what they need to do and, more importantly, why they need to do it, you’ll avoid them seeing MFA as a nuisance or misunderstanding it as ‘big brother’ company tracking.

The key is focusing on awareness. In addition to sending emails, put up posters in the elevators and hang banner ads in your buildings, all explaining why you’re making the transition to MFA. Focus on informing your users, making it very clear what they will need to do and where they can find instructions, documentation, and support.

Also, provide FAQs and training videos, along with optional training sessions or opportunities to opt in to an early pilot group (especially if you can offer them early access to a new software version that will give them features they need). Recognize that MFA is more work for them than just using a password, and that they will very likely be inconvenienced. Unless you are able to use biometrics on every device they will have to get used to carrying a security key or a device with an authenticator app with them all the time, so you need them to understand why MFA is so important.

It’s not surprising that users can be concerned about a move to MFA. After all, MFA has sometimes been done badly in the consumer space. They’ll have seen stories about social networks abusing phone numbers entered for security purposes for marketing or of users locked out of their accounts if they’re travelling and unable to get a text message. You’ll need to reassure users who have had bad experiences with consumer MFA and be open to feedback from employees about the impact of MFA policies. Like all tech rollouts, this is a process.

If you’re part of an international business you have more to do, as you need to account for global operations. That needs wider buy-in and a bigger budget, including language support if you must translate training and support documentation. If you don’t know where to start, Microsoft provides communication templates and user documentation you can customize for your organization.

Start with admin accounts

At a minimum, you want to use MFA for all your admins, so start with privileged users. Administrative accounts are your highest value targets and the most urgent to secure, but you can also treat them as a proof of concept for wider adoption. Review who these users are and what privileges they have—there are probably more accounts than you expect with far more privileges than are really needed.

At the same time, look at key business roles where losing access to email—or having unauthorized emails sent—will have a major security impact. Your CEO, CFO, and other senior leaders need to move to MFA to protect business communications.

Use what you’ve learned from rolling out MFA to these high-value groups to plan a pilot deployment—which includes employees from across the business who require different levels of security access—so your final MFA deployment is optimized for mainstream employees without hampering the productivity of those working with more sensitive information, whether that’s the finance team handling payroll or developers with commit rights. Consider how you will cover contractors and partners who need access as well.

Plan for wider deployment

Start by looking at what systems you have that users need to sign in to that you can secure with MFA. Remember that includes on-premises systems—you can incorporate MFA into your existing remote access options using Active Directory Federation Services (AD FS) or Network Policy Server, and use Azure Active Directory (Azure AD) Application Proxy to publish applications for cloud access.

Concentrate on finding any networks or systems where deploying MFA will take more work (for example, if SAML authentication is used) and especially on discovering vulnerable apps that don’t support anything except passwords because they use legacy or basic authentication. This includes older email systems using MAPI, EWS, IMAP4, POP3, and SMTP, as well as internal line-of-business applications and aging client applications. Upgrade or update these to support modern authentication and MFA where you can. Where this isn’t possible, you’ll need to restrict them to use on the corporate network until you can replace them, because critical systems that use legacy authentication will block your MFA deployment.

Be prepared to choose which applications to prioritize. As well as an inventory of applications and networks (including remote access options), look at processes like employee onboarding and approval of new applications. Test how applications work with MFA, even when you expect the impact to be minimal. Create a new user without admin access, use that account to sign in with MFA, and go through the process of configuring and using the standard set of applications staff will use to see if there are issues.

Look at how users will register for MFA and choose which methods and factors to use, and how you will track and audit registrations. You may be able to combine MFA registration with self-service password reset (SSPR) in a ‘one stop shop,’ but it’s important to get users to register quickly so that attackers can’t take over their account by registering for MFA, especially if it’s for a high-value application they don’t use frequently. For new employees, you should make MFA registration part of the onboarding process.

Make MFA easier on employees

MFA is always going to be an extra step, but you can choose MFA options with less friction, like using biometrics in devices or FIDO2-compliant factors such as Feitian or Yubico security keys. Avoid using SMS if possible. Phone-based authentication apps like the Microsoft Authenticator app are an option, and they don’t require a user to hand over control of their personal device. But if you have employees who travel to locations where they may not have connectivity, choose OATH verification codes, which are generated on the device, rather than push notifications, which are convenient but require the user to be online. You can even use automated voice calls: letting users press a button on the phone keypad is less intrusive than giving them a passcode to type in on screen.
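
Those OATH verification codes work offline because both sides derive them from a shared secret and the current time (RFC 6238). Here is a minimal sketch of the derivation an authenticator performs:

  import base64
  import hashlib
  import hmac
  import struct
  import time

  def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
      """Derive the current OATH TOTP code from a shared secret, using only
      the device clock; no network connectivity is required."""
      key = base64.b32decode(secret_b32)
      counter = struct.pack(">Q", int(time.time()) // period)
      digest = hmac.new(key, counter, hashlib.sha1).digest()
      offset = digest[-1] & 0x0F
      code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % (10 ** digits)).zfill(digits)

  print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; real secrets come from MFA enrollment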

Offer a choice of alternative factors so people can pick the one that best suits them. Biometrics are extremely convenient, but some employees may be uncomfortable using their fingerprint or face for corporate sign-ins and may prefer receiving an automated voice call.

Make sure that you include mobile devices in your MFA solution, managing them through Mobile Device Management (MDM), so you can use conditional and contextual factors for additional security.

Avoid making MFA onerous; choose when the extra authentication is needed to protect sensitive data and critical systems rather than applying it to every single interaction. Consider using conditional access policies and Azure AD Identity Protection, which allows for triggering two-step verification based on risk detections, as well as pass-through authentication and single-sign-on (SSO).

If MFA means that a user accessing a non-critical file share or calendar on the corporate network from a known device that has all the current OS and antimalware updates sees fewer challenges—and no longer faces the burden of 90-day password resets—then you can actually improve the user experience with MFA.

Have a support plan

Spend some time planning how you will handle failed sign-ins and account lockouts. Even with training, some failed sign-ins will be legitimate users getting it wrong and you need to make it easy for them to get help.

Similarly, have a plan for lost devices. If a security key is lost, the process for reporting that needs to be easy and blame free, so that employees will notify you immediately so you can expire their sessions and block the security key, and audit the behavior of their account (going back to before they notified you of the loss). Security keys that use biometrics may be a little more expensive, but if they’re lost or stolen, an attacker can’t use them. If possible, make it a simple, automated workflow, using your service desk tools.

You also need to quickly get them connected another way so they can get back to work. Automatically enrolling employees with a second factor can help. Make that second factor convenient enough that they can still do their job, but not so convenient that they keep using it and don’t report the loss: one easy option is allowing one-time bypasses. Similarly, make sure you’re set up to automatically deprovision entitlements and factors when employees change roles or leave the organization.

Measure and monitor

As you deploy MFA, monitor the rollout to see what impact it has on both security and productivity and be prepared to make changes to policies or invest in better hardware to make it successful. Track security metrics for failed login attempts, credential phishing that gets blocked and privilege escalations that are denied.

Your MFA marketing campaign also needs to continue during and after deployment, actively reaching out to staff and asking them to take part in polls or feedback sessions. Start that with the pilot group and continue it once everyone is using MFA.

Even when you ask for it, don’t rely on user feedback to tell you about problems. Check helpdesk tickets, logs, and audit options to see if it’s taking users longer to get into systems, or if they’re postponing key tasks because they’re finding MFA difficult, or if security devices are failing or breaking more than expected. New applications and new teams in the business will also mean that MFA deployment needs to be ongoing, and you’ll need to test software updates to see if they break MFA; you have to make it part of the regular IT process.

Continue to educate users about the importance of MFA, including running phishing training and phishing your own employees (with more training for those who are tricked into clicking through to fake links).

MFA isn’t a switch you flip; it’s part of a move to continuous security and assessment that will take time and commitment to implement. But if you approach it in the right way, it’s also the single most effective step you can take to improve security.

About the authors

Ann Johnson is the Corporate Vice President for the Cybersecurity Solutions Group at Microsoft. She is a member of the board of advisors for FS-ISAC (The Financial Services Information Sharing and Analysis Center), an advisory board member for EWF (Executive Women’s Forum on Information Security, Risk Management & Privacy), and an advisory board member for HYPR Corp. Ann recently joined the board of advisors for Cybersecurity Ventures.

Christina Morillo is a Senior Program Manager on the Azure Identity Engineering Product team at Microsoft. She is an information security and technology professional with a background in cloud technologies, enterprise security, and identity and access. Christina advocates and is passionate about making technology less scary and more approachable for the masses. When she is not at work, or spending time with her family, you can find her co-leading Women in Security and Privacy’s NYC chapter and supporting others as an advisor and mentor. She lives in New York City with her husband and children.

Learn more

To find out more about Microsoft’s cybersecurity solutions, visit the Microsoft Security site, or follow Microsoft Security on Twitter at @MSFTSecurity or @WDSecurity.

To learn more about Microsoft Azure Identity Management solutions, visit this Microsoft overview page and follow our Identity blog. You can also follow us @AzureAD on Twitter.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How to implement Multi-Factor Authentication (MFA) appeared first on Microsoft Security.

January 2020 Security Updates: CVE-2020-0601

January 14th, 2020

The January security updates include several Important and Critical security updates. As always, we recommend that customers update their systems as quickly as practical. Details for the full set of updates released today can be found in the Security Update Guide. We believe in Coordinated Vulnerability Disclosure (CVD) as proven industry best practice to address security vulnerabilities. Through a partnership …


The post January 2020 Security Updates: CVE-2020-0601 appeared first on Microsoft Security Response Center.

January 2020 security updates are available!

January 14th, 2020

We have released the January security updates to provide additional protections against malicious attackers. As a best practice, we encourage customers to turn on automatic updates. More information about this month’s security updates can be found in the Security Update Guide. As a reminder, Windows 7 and Windows Server 2008 R2 will be out of …


The post January 2020 security updates are available! appeared first on Microsoft Security Response Center.

Categories: Uncategorized

Rethinking cyber scenarios—learning (and training) as you defend

January 14th, 2020

In two recent posts I discussed with Circadence the increasing importance of gamification for cybersecurity learning and how to get started as a practitioner while being supported by an enterprise learning officer or security team lead. In this third and final post in the series, Keenan and I address more advanced SecOps scenarios that an experienced practitioner would be concerned with understanding. We even show how Circadence and Microsoft help seasoned practitioners defend against some of the most prevalent and advanced attackers we see across industries.

Here are more of Keenan’s insights from our Q&A:

Q: Keenan, thanks for sharing in this digital conversation with me again. I admire your passion for gamified cyber learning. I’d never thought to put the two ideas together: that you can adopt gaming concepts—and consoles—in a way that makes learning the often difficult and evolving subject matter of “cyber” much more fun and impactful. Now that I’ve used Project Ares for a year, it’s hard to imagine NOT having an interactive, gamified platform to help me build and refine cybersecurity concepts and skills. Several friends and colleagues have also registered their teenagers for Circadence’s Project Ares Academy subscriptions to kickstart their learning journey toward a cyber career path. If kids are going to game, let’s point them to something that will build employable skills for the future.

In our last two blogs, we introduced readers to a couple of new ideas:

Now, let’s pivot and focus on practical cyber scenarios (let’s say Tier 1 or Tier 2 defender scenarios)—situations that would likely be directed to experienced cyber professionals to handle. Can you walk us through some of the detail of how Circadence has built SecOps gaming experiences into Project Ares through mission scenarios that are inspired by real cyber incidents pulled from news headlines, incorporating today’s most common attack methods such as ransomware, credential theft, and even nation-state attacks?

A: Sure. I’ll start with descriptions of a couple of our foundational missions.

Scenario one: Ransomware—Project Ares offers several mission scenarios that address the cyber kill chain around ransomware. The one I’ll focus on is Mission 10, Operation Crimson Wolf. Acting as a cyber force member working for a transportation company, the user must secure networks so the company can conduct effective port activity. However, the company is in danger as ransomware has encrypted data and a hacker has launched a phishing attack on the network, impacting how and when operators offload ships. The player must stop the ransomware from spreading and attacking other nodes on the network before it’s too late. I love this scenario because 1) it’s realistic, 2) ransomware attacks occur far too often, and 3) it allows the player to engage in a virtual environment to build skills.

Users who engage in this mission learn core competencies like:

  • Computer network defense.
  • Incident response management.
  • Data forensics and handling.

We map all our missions to the NIST/NICE work role framework, and Mission 10 touches on the following work roles: System Security Analyst, Cyber Defense Analyst, Cyber Defense Incident Responder, and Cyber Defense Forensics Analyst.

Image from scenario one: Ransomware

Scenario two: Credential theft—Another mission that’s really engaging is Mission 1, Operation Goatherd. It teaches how credential theft can occur via a brute force attack. In this mission, the user must access the command and control server of a group of hackers to disable a botnet. The botnet is designed to execute a widespread financial scam triggering the collapse of a national bank! The user must scan the command and control server, located at myloot.com, for running services; identify a vulnerable service; perform a brute force attack to obtain credentials; and then kill the web server acting as the command and control orchestrator.

This scenario is powerful because it asks the player to address the challenge by thinking from an adversary’s perspective. It helps the learner understand how an attacker would execute credential theft (though there are many ways) and gives the learner a different perspective for a well-rounded comprehension of the attack method.

Users who engage in this mission learn core competencies like:

  • Network protocols.
  • Reconnaissance and enumeration.
  • Password cracking and exploration.

The NIST/NICE work role aligned to this mission is a Cyber Operator. Specific tasks this work role must address include:

  • Analyzing target operational architecture for ways to gain access.
  • Conducting network scouting and vulnerability analysis of systems within a network.
  • Detecting exploits against targeted networks.

Image from scenario two: Credential theft

Q: Can you discuss how Project Ares’ learning curriculum addresses critical threats from advanced state or state-backed attackers? While we won’t name governments directly, the point for our readers to understand is that the national and international cybersecurity stage is built around identifying and learning how to combat the tools, tactics, and procedures that threat actors are using in all industries.

A: Here’s a good example.

Scenario three: Election security—The mission we’ll deploy in our next release of Project Ares, which now leverages cloud-native architecture (running on Microsoft Azure), is Mission 15, Operation Raging Mammoth. It helps a cyber professional protect against an election attack—something we are all too familiar with through recent headlines about election security. As an election security official, the user must monitor voting systems to establish a baseline of normal activity and configurations from which to identify anomalies. The user must detect and report changes to an administrator’s access permissions and/or modifications to voter information.

The NIST/NICE work roles aligned to this mission include professionals training as a Cyber Defense Analyst, Cyber Defense Incident Responder, or Threat/Warning Analyst.

Image from scenario three: Election security

I’ve reviewed some of the specific cyber scenarios a Tier 1 or Tier 2 defender might experience on the job. Now I’d like to share a bit about how we build these exercises for our customers.

It really comes down to the professional experiences and detailed research of our Mission and Battle Room design teams at Circadence. Many of them have extensive, long-standing professional experience as on-the-job cyber operators and defenders, as well as cyber professors and teachers at renowned institutions. They really understand what professionals need to learn, how they need to learn, and the most effective ways to learn.

We profile Circadence professionals in the Living Our Mission Blog Series to help interested readers understand the skill and dedication of the people behind Project Ares. By sharing the individual faces behind the solution, we hope current and prospective customers will appreciate Project Ares more knowing that Circadence is building the most relevant learning experiences available to support immersive, gamified learning of today’s cyber professionals.

Learn more

To see Project Ares “in action” visit Circadence and request a demonstration, or speak with your local Microsoft representative. You can also try your hand at it by attending an upcoming Microsoft Ignite: The Tour event, which features a joint Microsoft/Circadence “Into the Breach” capture the flag exercise.

To learn more about how to close the cybersecurity talent gap, read the e-book: CISO essentials: How to optimize recruiting while strengthening cybersecurity. For more information on Microsoft intelligent security solutions, including guidance on Zero Trust, visit the Reach the optimal state in your Zero Trust journey page.

The post Rethinking cyber scenarios—learning (and training) as you defend appeared first on Microsoft Security.

Announcing the Microsoft Identity Research Project Grant

January 9th, 2020 No comments

We are excited to announce the Microsoft Identity Research Project Grant, a new opportunity in partnership with the security community to help protect Microsoft customers. This project grant awards up to $75,000 USD for approved research proposals that improve the security of Microsoft Identity solutions in new ways for both Consumers (Microsoft Account) and Enterprise (Azure Active Directory).

The post Announcing the Microsoft Identity Research Project Grant appeared first on Microsoft Security Response Center.

Changing the monolith—Part 1: Building alliances for a secure culture

January 9th, 2020 No comments

Any modern security expert can tell you that we’re light years away from the old days when firewalls and antivirus were the only mechanisms of protection against cyberattacks. Cybersecurity has been one of the hot topics of boardroom conversation for the last eight years and has steadily risen in priority due to the size and frequency of the data breaches reported across all industries and organizations.

The security conversation has finally been elevated out of the shadows of the IT Department and has moved into the executive and board level spotlights. This has motivated the C-teams of organizations everywhere to start asking hard questions of their Chief Information Officers, Chief Compliance Officers, Privacy Officers, Risk Organizations, and Legal Counsels.

Cybersecurity professionals can either wait until these questions land at their feet, or they can take charge and build relationships with executives and the business side of the organization.

Taking charge of the issue

Professionals fortunate enough to have direct access to the Board of Directors of their organization can also build extremely valuable relationships at the board level as well. As cybersecurity professionals establish lines of communication throughout organizational leadership, they must keep in mind that these leaders, although experts in their respective areas, are not technologists.

The challenge that cybersecurity professionals face is getting non-technical people on board with a culture of change regarding security. These kinds of changes in culture and thinking can help facilitate the innovation that is needed to decrease the risk of compromise, reputation damage, sanctions against the organization, and potential stock devaluation. So how can one deliver this message of Fear, Uncertainty, and Doubt (FUD) without losing the executive leaders in technical details or a dramatization of the current situation?

Start by addressing the business problem, not the technology.

The answer isn’t as daunting as you might think

The best way to start the conversation with business leaders is to begin by stating the principles of your approach to addressing the problem and the risks of not properly addressing it. It’s important to remember to present the principles and methods in a way that is understandable to non-technical persons.

This may sound challenging at first, but the following examples will give you a good starting point of how to accomplish this:

  • At some point in time, there will be a data breach—Every day we’re up against tens of thousands of “militarized” state-sponsored threat actors who usually know more about our organizations and technical infrastructure than we do. This is not a fight we’ll always win, even if we can bring near-unlimited resources to the table, which is rarely possible anyway. In any scenario, we must accept some modicum of risk, and cybersecurity is no different. The approach should therefore focus on mitigating the likelihood and severity of a compromise when it ultimately does occur.
  • Physical security and cybersecurity are linked—An attacker with access to physical hardware has myriad ways to pull data directly from your enterprise network and send it to a dark web repository or other malicious data store for later decryption and analysis. If an attacker has possession of a laptop or mobile device and storage encryption hasn’t been implemented, they can forensically image the device fairly easily and make an exact replica to analyze later. By using these or similar examples, you can clearly state that physical security equals cybersecurity in many cases.
  • You can’t always put a dollar amount on digital trust—Collateral damage in the aftermath of a cyberattack goes well beyond dollars, and paying attention to cybersecurity and privacy threats demonstrates digital trust to clients, customers, employees, suppliers, vendors, and the general public. Digital trust underpins every digital interaction by measuring and quantifying the expectation that an entity is who or what it claims to be and that it will behave in an expected manner. This can set an organization apart from its competitors.
  • Everything can’t be protected equally; likewise, everything doesn’t have the same business value—Where are the crown jewels, and which systems’ failure would critically impact the organization’s business? Once these are identified, the organization has a lot less to worry about and protect. Additionally, one of the core principles should be, “When in doubt, throw it out.” Keeping data longer than it needs to be kept increases the attack surface and creates liability for the firm to produce large amounts of data during legal discovery. The data retention policy needs to reflect this and should be created with input from the business and General Counsel.
  • Identity is the new perimeter—Additional perimeter-based security appliances will not decrease the chance of compromise. Once identity is compromised, perimeter controls become useless. Operate as if the organization’s network has already been compromised, as mentioned in principle #1. Focus the investment on modern authentication, Zero Trust, conditional access, and detection of abnormal user and information behavior. Questions to ask now include: What’s happening to users, company data, and devices both inside and outside the firewall? Think about data handling—who has access to what and why, and is it within normal business activity parameters?

The culture of change in the organization

If leadership is not on board with the people, process, and technology changes required to fulfill a modern approach to cybersecurity and data protection, any effort put into such a program is a waste of time and money.

You can tell immediately if you’ve done the appropriate amount of marketing to bring cybersecurity and data protection to the forefront of business leaders’ agendas. If the funding and support for the mission are unavailable, one must ask oneself if the patient, in this case the organization, truly wants to get better.

If, during a company meeting, a CEO declares that “data protection is everyone’s responsibility, including mine,” everyone will recognize the importance of the initiative to the company’s success. Hearing this from the CISO or below does not have the same gravitas.

The most successful programs I’ve seen are those that have been sponsored at the highest levels of the organization and tied to performance. For more information on presenting to the board of directors, watch our CISO Spotlight Episode with Bret Arsenault, Microsoft CISO.

Stay tuned and stay updated

Stay tuned for “Changing the monolith—Part 2” where I address who you should recruit as you build alliances across the organization, how to build support through business conversations, and what’s next in driving organizational change. In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Changing the monolith—Part 1: Building alliances for a secure culture appeared first on Microsoft Security.

Categories: cybersecurity Tags:

Microsoft 365 helps governments adopt a Zero Trust security model

January 8th, 2020 No comments

For governments to function, the flow of data on a massive scale is required—including sensitive information about critical infrastructure, citizens, and public safety and security. Government information systems are under constant attempted attack and need a modern approach to cybersecurity.

Microsoft 365 provides best-in-class productivity apps while protecting identities, devices, applications, networks, and data. With Microsoft 365 security services, governments can take confident steps to adopt a Zero Trust security model where all users and devices—both inside and outside the network—are deemed untrustworthy by default and the same security checks are applied to all users, devices, applications, and data.

To learn more, read Government data protection—earning and retaining the public’s trust with Microsoft 365.

The post Microsoft 365 helps governments adopt a Zero Trust security model appeared first on Microsoft Security.

Categories: Microsoft 365, Zero Trust Tags:

Threat hunting in Azure Advanced Threat Protection (ATP)

January 7th, 2020 No comments

As members of Microsoft’s Detection and Response Team (DART), we’ve seen a significant increase in adversaries “living off the land” and using compromised account credentials for malicious purposes. From an investigation standpoint, tracking adversaries using this method is quite difficult as you need to sift through the data to determine whether the activities are being performed by the legitimate user or a bad actor. Credentials can be harvested in numerous ways, including phishing campaigns, Mimikatz, and key loggers.

Recently, DART was called into an engagement where the adversary had a foothold within the on-premises network, which had been gained through compromising cloud credentials. Once the adversary had the credentials, they began their reconnaissance on the network by searching for documents about VPN remote access and other access methods stored on a user’s SharePoint and OneDrive. After the adversary was able to access the network through the company’s VPN, they moved laterally throughout the environment using legitimate user credentials harvested during a phishing campaign.

Once our team was able to determine the initially compromised accounts, we were able to begin the process of tracking the adversary within the on-premises systems. Looking at the initial VPN logs, we identified the starting point for our investigation. Typically, in this kind of investigation, your team would need to dive deeper into individual machine event logs, looking for remote access activities and movements, as well as looking at any domain controller logs that could help highlight the credentials used by the attacker(s).

Luckily for us, this customer had deployed Azure Advanced Threat Protection (ATP) prior to the incident. Because Azure ATP was operational before the incident, it had already normalized authentication and identity transactions within the customer network. DART began querying the suspected compromised credentials within Azure ATP, which provided us with a broad swath of authentication-related activities on the network and helped us build an initial timeline of events and activities performed by the adversary (a toy sketch of building such a timeline follows the list), including:

  • Interactive logins (Kerberos and NTLM)
  • Credential validation
  • Resource access
  • SAMR queries
  • DNS queries
  • WMI Remote Code Execution (RCE)
  • Lateral Movement Paths
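
To make this concrete, here is a minimal Python sketch of building such a per-account timeline. The event records and field names below are hypothetical stand-ins for the kinds of normalized records Azure ATP surfaces, not its actual schema:

    from datetime import datetime
    from itertools import groupby

    # Hypothetical normalized authentication events (illustrative only;
    # not Azure ATP's actual schema).
    events = [
        {"time": datetime(2019, 11, 2, 8, 3), "account": "CONTOSO\\jdoe",
         "activity": "VPN logon", "host": "vpn-gw-01"},
        {"time": datetime(2019, 11, 2, 8, 40), "account": "CONTOSO\\jdoe",
         "activity": "WMI remote execution", "host": "dc-02"},
        {"time": datetime(2019, 11, 2, 8, 17), "account": "CONTOSO\\jdoe",
         "activity": "NTLM logon", "host": "fileserver-07"},
    ]

    # Order by account and time, then print a per-account timeline, the
    # starting point for tracing lateral movement from a suspected
    # compromised credential.
    events.sort(key=lambda e: (e["account"], e["time"]))
    for account, acct_events in groupby(events, key=lambda e: e["account"]):
        print(account)
        for e in acct_events:
            print(f"  {e['time']:%H:%M}  {e['activity']:<22} -> {e['host']}")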

Azure Advanced Threat Protection

Detect and investigate advanced attacks on-premises and in the cloud.

Get started

This data enabled the team to perform more in-depth analysis on both user and machine level logs for the systems the adversary-controlled account touched. Azure ATP’s ability to identify and investigate suspicious user activities and advanced attack techniques throughout the cyber kill chain enabled our team to completely track the adversary’s movements in less than a day. Without Azure ATP, investigating this incident could have taken weeks—or even months—since the data sources don’t often exist to make this type of rapid response and investigation possible.

Once we were able to track the user throughout the environment, we were able to correlate that data with Microsoft Defender ATP to gain an understanding of the tools used by the adversary throughout their journey. Using the right tools for the job allowed DART to jump start the investigation; identify the compromised accounts, compromised systems, other systems at risk, and the tools being used by the adversaries; and provide the customer with the needed information to recover from the incident faster and get back to business.

Learn more and keep updated

Learn more about how DART helps customers respond to compromises and become cyber-resilient. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Threat hunting in Azure Advanced Threat Protection (ATP) appeared first on Microsoft Security.

CISO series: Lessons learned from the Microsoft SOC—Part 3b: A day in the life

December 23rd, 2019 No comments

The Lessons learned from the Microsoft SOC blog series is designed to share our approach and experience with security operations center (SOC) operations. We share strategies and learnings from our SOC, which protects Microsoft, and our Detection and Response Team (DART), who helps our customers address security incidents. For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

For the next two installments in the series, we’ll take you on a virtual shadow session of a SOC analyst, so you can see how we use security technology. You’ll get to virtually experience a day in the life of these professionals and see how Microsoft security tools support the processes and metrics we discussed earlier. We’ll primarily focus on the experience of the Investigation team (Tier 2) as the Triage team (Tier 1) is a streamlined subset of this process. Threat hunting will be covered separately.

Image of security workers in an office.

General impressions

Newcomers to the facility often remark on how calm and quiet our SOC physical space is. It looks and sounds like a “normal” office with people going about their job in a calm professional manner. This is in sharp contrast to the dramatic moments in TV shows that use operations centers to build tension/drama in a noisy space.

Nature doesn’t have edges

We have learned that the real world is often “messy” and unpredictable, and the SOC tends to reflect that reality. What comes into the SOC doesn’t always fit into nice, neat boxes, but a lot of it follows predictable patterns that have been forged into standard processes, automation, and (in many cases) features of Microsoft tooling.

Routine front door incidents

The most common attack patterns we see are phishing and stolen credentials attacks (or minor variations on them):

  • Phishing email → Host infection → Identity pivot:

Infographic indicating: Phishing email, Host infection, and Identity pivot

  • Stolen credentials → Identity pivot → Host infection:

Infographic indicating: Stolen credentials, Identity pivot, and Host infection

While these aren’t the only ways attackers gain access to organizations, they’re the most prevalent methods mastered by most attackers. Just as martial artists start by mastering basic common blocks, punches, and kicks, SOC analysts and teams must build a strong foundation by learning to respond rapidly to these common attack methods.

As we mentioned earlier in the series, it’s been over two years since network-based detection was the primary method for detecting an attack. We attribute this primarily to investments that improved our ability to rapidly remediate attacks early with host/email/identity detections. There are also fundamental challenges with network-based detections (they are noisy and have limited native context for filtering true vs. false positives).

Analyst investigation process

Once an analyst settles into the analyst pod on the watch floor for their shift, they start checking the queue of our case management system for incidents (not entirely unlike how phone support or help desk analysts would).

While anything might show up in the queue, the process for investigating common front door incidents includes:

  1. Alert appears in the queue—After a threat detection tool detects a likely attack, an incident is automatically created in our case management system. The Mean Time to Acknowledge (MTTA) measurement of SOC responsiveness begins with this timestamp. See Part 1: Organization for more information on key SOC metrics.

Basic threat hunting helps keep a queue clean and tidy

Require a 90 percent true positive rate for alert sources (e.g., detection tools and types) before allowing them to generate incidents in the analyst queue. This quality requirement reduces the volume of false positive alerts, which can lead to frustration and wasted time. To implement, you’ll need to measure and refine the quality of alert sources and create a basic threat hunting process. A basic threat hunting process leverages experienced analysts to comb through alert sources that don’t meet this quality bar to identify interesting alerts that are worth investigating. This review (without requiring full investigation of each one) helps ensure that real incident detections are not lost in the high volume of noisy alerts. It can be a simple part time process, but it does require skilled analysts that can apply their experience to the task.

  2. Own and orient—The analyst on shift begins by taking ownership of the case and reading through the information available in the case management tool. This timestamp marks the end of the MTTA responsiveness measurement and begins the Mean Time to Remediate (MTTR) measurement.
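
As a toy illustration (the timestamps and field names are hypothetical, not drawn from our case management system), the two measurements bracket exactly these moments:

    from datetime import datetime

    # Hypothetical incident timestamps.
    created = datetime(2019, 12, 23, 9, 0, 0)        # alert lands in the queue
    acknowledged = datetime(2019, 12, 23, 9, 7, 30)  # analyst takes ownership
    remediated = datetime(2019, 12, 23, 11, 42, 0)   # incident resolved

    # Time to acknowledge: queue arrival -> analyst ownership.
    tta = acknowledged - created
    # Time to remediate: analyst ownership -> remediation.
    ttr = remediated - acknowledged

    print(f"time to acknowledge: {tta}")  # 0:07:30
    print(f"time to remediate: {ttr}")    # 2:34:30

Averaging these per-incident durations over a reporting window yields the MTTA and MTTR metrics discussed in Part 1.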

Experience matters

A SOC is dependent on the knowledge, skills, and expertise of the analysts on the team. The attack operators and malware authors you defend against are often adaptable and skilled humans, so no prescriptive textbook or playbook on response will stay current for very long. We work hard to take good care of our people—giving them time to decompress and learn, recruiting them from diverse backgrounds that can bring fresh perspectives, and creating a career path and shadowing programs that encourage them to learn and grow.

  3. Check out the host—Typically, the first priority is to identify affected endpoints so analysts can rapidly get deep insight. Our SOC relies on the Endpoint Detection and Response (EDR) functionality in Microsoft Defender Advanced Threat Protection (ATP) for this.

Why endpoint is important

Our analysts have a strong preference to start with the endpoint because:

  • Endpoints are involved in most attacks—Malware on an endpoint represents the sole delivery vehicle of most commodity attacks, and most attack operators still rely on malware on at least one endpoint to achieve their objective. We’ve also found the EDR capabilities detect advanced attackers that are “living off the land” (using tools deployed by the enterprise to navigate). The EDR functionality in Microsoft Defender ATP provides visibility into normal behavior that helps detect unusual command lines and process creation events.
  • Endpoint offers powerful insights—Malware and its behavior (whether automated or manual actions) on the endpoint often provides rich detailed insight into the attacker’s identity, skills, capabilities, and intentions, so it’s a key element that our analysts always check for.

Identifying the endpoints affected by this incident is easy for alerts raised by the Microsoft Defender ATP EDR, but may take a few pivots on an email or identity sourced alert, which makes integration between these tools crucial.

  4. Scope out and fill in the timeline—The analyst then builds a full picture and timeline of the related chain of events that led to the alert (which may be an adversary’s attack operation or a false alarm) by following leads from the first host alert. The analyst travels along the timeline:
  • Backward in time—Track backward to identify the entry point in the environment.
  • Forward in time—Follow leads to any devices/assets an attacker may have accessed (or attempted to access).

Our analysts typically build this picture using the MITRE ATT&CK™ model (though some also adhere to the classic Lockheed Martin Cyber Kill Chain®).

True or false? Art or science?

The process of investigation is partly a science and partly an art. The analyst is ultimately building a storyline of what happened to determine whether this chain of events is the result of a malicious actor (often attempting to mask their actions/nature), a normal business/technical process, an innocent mistake, or something else.

This investigation is a repetitive process. Analysts identify potential leads based on the information in the original report, follow those leads, and evaluate if the results contribute to the investigation.

Analysts often contact users to determine whether they performed an anomalous action intentionally, performed it accidentally, or didn’t perform it at all.

Running down the leads with automation

Much like analyzing physical evidence in a criminal investigation, cybersecurity investigations involve iteratively digging through potential evidence, which can be tedious work. Another parallel between cybersecurity and traditional forensic investigations is that popular TV and movie depictions are often much more exciting and faster than the real world.

One significant advantage of investigating cyberattacks is that the relevant data is already electronic, making it easier to automate investigation. For many incidents, our SOC takes advantage of security orchestration, automation, and response (SOAR) technology to automate investigation (and remediation) of routine incidents. Our SOC relies heavily on the AutoIR functionality in Microsoft Threat Protection tools like Microsoft Defender ATP and Office 365 ATP to reduce analyst workload. In our current configuration, some remediations are fully automatic and some are semi-automatic (where analysts review the automated investigations and propose remediation before approving execution of it).

Document, document, document

As the analyst builds this understanding, they must capture a complete record with their conclusions and reasoning/evidence for future use (case reviews, analyst self-education, re-opening cases that are later linked to active attacks, etc.).

As our analyst develops information on an incident, they quickly capture the most relevant details into the case, such as:

  • Alert info: Alert links and Alert timeline
  • Machine info: Name and ID
  • User info
  • Event info
  • Detection source
  • Download source
  • File creation info
  • Process creation
  • Installation/Persistence method(s)
  • Network communication
  • Dropped files

Fusion and integration avoid wasting analyst time

Each minute an analyst wastes on manual effort is another minute the attacker has to spread, infect, and do damage during an attack operation. Repetitive manual activity also creates analyst toil, increases frustration, and can drive interest in finding a new job or career.

We learned that several technologies are key to reducing toil (in addition to automation):

  • Fusion—Adversary attack operations frequently trip multiple alerts in multiple tools, and these must be correlated and linked to avoid duplication of effort. Our SOC has found significant value from technologies that automatically find and fuse these alerts together into a single incident. Azure Security Center and Microsoft Threat Protection include these natively.
  • Integration—Few things are more frustrating and time consuming than having to switch consoles and tools to follow a lead (a.k.a. swivel chair analytics). Switching consoles interrupts an analyst’s thought process and often requires manual tasks to copy/paste information between tools to continue their work. Our analysts are extremely appreciative of the work our engineering teams have done to bring threat intelligence natively into Microsoft’s threat detection tools and link together the consoles for Microsoft Defender ATP, Office 365 ATP, and Azure ATP. They’re also looking forward to (and starting to test) the Microsoft Threat Protection Console and Azure Sentinel updates that will continue to reduce the swivel chair analytics.

Stay tuned for the next segment in the series, where we’ll conclude our investigation, remediate the incident, and take part in some continuous improvement activities.

Learn more

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

To learn more about SOCs, read previous posts in the Lessons learned from the Microsoft SOC series, including:

Watch the CISO Spotlight Series: Passwordless: What’s It Worth.

Also, see our full CISO series and download our Minutes Matter poster for a visual depiction of our SOC philosophy.

The post CISO series: Lessons learned from the Microsoft SOC—Part 3b: A day in the life appeared first on Microsoft Security.

Mobile threat defense and intelligence are a core part of cyber defense

December 19th, 2019 No comments

The modern workplace is a mobile workplace. Today’s organizations rely on mobility to increase productivity and improve the customer experience. But the proliferation of smartphones and other mobile devices has also expanded the attack surface: there are roughly 5 billion mobile devices in the world, many of them used to handle sensitive corporate data. To safeguard company assets, organizations need to augment their global cyber defense strategy with mobile threat intelligence.

When handled and analyzed properly, actionable data holds the key to enabling solid, 360-degree cybersecurity strategies and responses. However, many corporations lack effective tools to collect, analyze, and act on the massive volume of security events that arise daily across their mobile fleet. An international bank recently faced this challenge. By deploying Pradeo Security alongside Microsoft Endpoint Manager and Microsoft Defender Advanced Threat Protection (ATP), the bank was able to harness its mobile data and better protect the company.

Pradeo Security strengthens Microsoft Endpoint Manager Conditional Access policies

In 2017, the Chief Information Security Officer (CISO) of an international bank recognized that the company needed to address the risk of data exposure on mobile. Cybercriminals exploit smartphones at the application, network, and OS levels, and infiltrate them through mobile applications 78 percent of the time.1 The General Data Protection Regulation (GDPR) was also scheduled to go into effect the following year. The company needed to better secure its mobile data to safeguard the company and comply with the new privacy regulations.

The company deployed Microsoft Endpoint Manager to gain visibility into the mobile devices accessing corporate resources. Microsoft Endpoint Manager is the recently announced convergence of Microsoft Intune and Configuration Manager functionality and data, plus new intelligent actions, offering seamless, unified endpoint management. Then, to ensure the protection of these corporate resources, the company deployed Pradeo Security Mobile Threat Defense, which is integrated with Microsoft.

Pradeo Security and Microsoft Endpoint Manager work together to apply conditional access policies to each mobile session. Conditional access policies allow the security team to automate access based on the circumstances. For example, if a user tries to gain access using a device that is not managed by Microsoft Endpoint Manager, the user may be forced to enroll the device. Pradeo Security enhances Microsoft Endpoint Manager’s capabilities by providing a clear security status of any mobile devices accessing corporate data, which Microsoft can evaluate for risk. If a smartphone is identified as non-compliant based on the data that Pradeo provides, conditional access policies can be applied.

For example, if the risk is high, the bank could set policies that block access. The highly granular and customizable security policies offered by Pradeo Security gave the CISO more confidence that the mobile fleet was better protected against threats specifically targeting his industry.

Get more details about Pradeo Security for Microsoft Endpoint Manager in this datasheet.

Detect and respond to advanced cyberthreats with Pradeo Security and Microsoft Defender ATP

The bank also connected Pradeo Security to Microsoft Defender ATP in order to automatically feed it with always current mobile security inputs. Microsoft Defender ATP helps enterprises prevent, detect, investigate, and respond to advanced cyberthreats. Pradeo Security enriches Microsoft Defender ATP with mobile security intelligence. Immediately, the bank was able to see information on the latest threats targeting their mobile fleet. Only a few weeks later, there was enough data in the Microsoft platform to draw trends and get a clear understanding of the company’s mobile threat environment.

Pradeo relies on a network of millions of devices (iOS and Android) across the globe to collect security events related to the most current mobile threats. Pradeo leverages machine learning mechanisms to distill and classify billions of raw and anonymous security facts into actionable mobile threat intelligence.

Today, this bank’s mobile ecosystem entirely relies on Pradeo and Microsoft, as its security team finds it to be the most cost-effective combination when it comes to mobile device management, protection, and intelligence.

About Pradeo

Pradeo is a global leader in mobile security and a member of the Microsoft Intelligent Security Association (MISA). It offers services to protect the data handled on mobile devices and applications, and tools to collect, process, and get value out of mobile security events.

Pradeo’s cutting-edge technology has been recognized as one of the most advanced mobile security technologies by Gartner, IDC, and Frost & Sullivan. It provides a reliable detection of mobile threats to prevent breaches and reinforce compliance with data privacy regulations.

For more details, contact Pradeo.

Note: Users must be entitled separately to Pradeo and Microsoft licenses as appropriate.

Learn more

To learn more about MISA, visit the MISA webpage. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Microsoft Endpoint Manager

Transformative management and security that meets you where you are and helps you move to the cloud.

Get started

1 2019 Mobile Security Report, Pradeo Lab

The post Mobile threat defense and intelligence are a core part of cyber defense appeared first on Microsoft Security.

Data science for cybersecurity: A probabilistic time series model for detecting RDP inbound brute force attacks

December 18th, 2019 No comments

Computers with Windows Remote Desktop Protocol (RDP) exposed to the internet are an attractive target for adversaries because they present a simple and effective way to gain access to a network. Brute forcing RDP, a secure network communications protocol that provides remote access over port 3389, does not require a high level of expertise or the use of exploits; attackers can utilize many off-the-shelf tools to scan the internet for potential victims and leverage similar tools for conducting the brute force attack.

Attackers target RDP servers that use weak passwords and are without multi-factor authentication, virtual private networks (VPNs), and other security protections. Through RDP brute force, threat actor groups can gain access to target machines and conduct many follow-on activities like ransomware and coin mining operations.

In a brute force attack, adversaries attempt to sign in to an account by effectively using one or more trial-and-error methods. Many failed sign-ins occurring over very short time intervals, typically minutes or even seconds, are usually associated with these attacks. A brute force attack might also involve adversaries attempting to access one or more accounts using valid usernames that were obtained from credential theft or using common usernames like “administrator”. The same holds for password combinations. In detecting RDP brute force attacks, we focus on the source IP address and username, as password data is not available.

In the Windows operating system, whenever an attempted sign-in fails for a local machine, Event Tracing for Windows (ETW) registers Event ID 4625 with the associated username. Meanwhile, source IP addresses connected to RDP can be accessed; this information is very useful in assessing if a machine is under brute force attack. Using this information in combination with Event ID 4624 for non-server Windows machines can shed light on which sign-in sessions were successfully created and can further help in detecting if a local machine has been compromised.
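
As a minimal sketch of how these signals can be used (the exported event records and field names here are hypothetical), counting failed sign-ins per source IP and username over a short window surfaces candidate brute force activity:

    from collections import Counter
    from datetime import datetime, timedelta

    # Hypothetical exported event records (Event ID 4625 = failed sign-in).
    events = [
        {"time": datetime(2019, 8, 4, 3, 1, 5), "event_id": 4625,
         "username": "administrator", "source_ip": "203.0.113.7"},
        {"time": datetime(2019, 8, 4, 3, 1, 9), "event_id": 4625,
         "username": "admin", "source_ip": "203.0.113.7"},
        # ... thousands more in a real capture
    ]

    window_start = datetime(2019, 8, 4, 3, 0, 0)
    window = timedelta(minutes=30)

    # Count failed sign-ins per (source IP, username) inside the window.
    fails = Counter(
        (e["source_ip"], e["username"])
        for e in events
        if e["event_id"] == 4625
        and window_start <= e["time"] < window_start + window
    )

    for (ip, user), count in fails.most_common(10):
        print(ip, user, count)

As the sections below show, raw counts like these are only a starting point; they need context before they can be treated as evidence of an attack.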

In this blog we’ll present a study and a detection logic that uses these signals. This data science-driven approach to detecting RDP brute force attacks has proven valuable in detecting human adversary activity through Microsoft Threat Experts, the managed threat hunting service in Microsoft Defender Advanced Threat Protection. This work is an example of how the close collaboration between data scientists and threat hunters results in protection for customers against real-world threats.

Insights into brute force attacks

Observing a sudden, relatively large count of Event ID 4625 associated with RDP network connections might be rare, but it does not necessarily imply that a machine is under attack. For example, a script that performs the following actions would look suspicious in a time series of failed sign-in counts but is most likely not malicious:

  • uses an expired password
  • retries sign-in attempts every N-minutes with different usernames
  • over a public IP address within a range owned by the enterprise

In contrast, behavior that includes the following is indicative of an attack:

  • extreme counts of failed sign-ins from many unknown usernames
  • never previously successfully authenticated
  • from multiple RDP connections
  • from new source IP addresses

Understanding the context of failed sign-ins and inbound connections is key to discriminating between true positive (TP) and false positive (FP) brute force attacks, especially if the goal is to automatically raise only high-precision alerts to the appropriate recipients, as we do in Microsoft Defender ATP.

We analyzed several months’ worth of data to mine insights into the types of RDP brute force attacks occurring across Microsoft Defender ATP customers. Out of about 45,000 machines that had both RDP public IP connections and at least 1 network failed sign-in, we discovered that, on average, several hundred machines per day had high probability of undergoing one or more RDP brute force attack attempts. Of the subpopulation of machines with detected brute force attacks, the attacks lasted 2-3 days on average, with about 90% of cases lasting for 1 week or less, and less than 5% lasting for 2 weeks or more.

Figure 1: Empirical distribution in number of days per machine where we observed 1 or more brute force attacks

As discussed in numerous other studies [1], large counts of failed sign-ins are often associated with brute force attacks. Looking at the count of daily failed sign-ins, 90% of cases exceeded 10 attempts, with a median larger than 60. In addition, these unusual daily counts had high positive correlation with extreme counts in shorter time windows (see Figure 2). In fact, the bulk of a day’s extreme failed sign-ins typically occurred within 2 hours, with about 40% occurring within 30 minutes.

Figure 2: Count of daily and maximum hourly network failed sign-ins for a local machine under brute force attack

While detection logic based on thresholding the count of failed sign-ins in a daily or finer-grained time window can detect many brute force attacks, it will likely produce too many false positives. Worse, relying on just this will yield false negatives, missing successful enterprise compromises: our analysis revealed several instances where brute force attacks generated fewer than 5-10 failed attempts at a daily granularity but often persisted for many days, thereby avoiding extreme counts at any point in time. For such a brute force attack, thresholding the cumulative number of failed sign-ins across time could be more useful, as depicted in Figure 3.

Figure 3: Daily and cumulative failed network sign-in
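
A minimal sketch of this point, using hypothetical daily counts: a low-and-slow attack never trips a per-day threshold but is eventually caught by a cumulative one:

    # Hypothetical daily failed sign-in counts for one machine over two weeks.
    daily_fails = [3, 4, 2, 5, 3, 4, 4, 3, 5, 4, 3, 4, 5, 3]

    DAILY_THRESHOLD = 10        # per-day rule: never fires on this series
    CUMULATIVE_THRESHOLD = 40   # running-total rule: fires on day 12

    running = 0
    for day, count in enumerate(daily_fails, start=1):
        running += count
        if count > DAILY_THRESHOLD or running > CUMULATIVE_THRESHOLD:
            print(f"day {day}: daily={count}, cumulative={running}")

Both thresholds here are illustrative; the model described later replaces fixed thresholds with calibrated probabilities.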

Looking at counts of network failed sign-ins provides a useful but incomplete picture of RDP brute force attacks. This can be further augmented with additional information on the failed sign-in, such as the failure reason, time of day, and day of week, as well as the username itself. An especially strong signal is the source IP of the inbound RDP connection. Knowing if the external IP has a high reputation of abuse, as can be looked up on sites like https://www.abuseipdb.com/, can directly confirm if an IP is part of an active brute force attack.

Unfortunately, not all IP addresses have a history of abuse; in addition, it can be expensive to retrieve information about many external IP addresses on demand. Maintaining a list of suspicious IPs is an option, but relying on this can result in false negatives as, inevitably, new IPs continually occur, particularly with the adoption of cloud computing and the ease of spinning up virtual machines. A generic signal that can augment failed sign-in and user information is counting distinct RDP connections from external IP addresses. Again, extreme values occurring at a given time or accumulated over time can be an indicator of attack.

Figure 4 shows histograms (i.e., counts put into discrete bins) of daily counts of RDP public connections per machine that occurred for an example enterprise with known brute force attacks. It’s evident that normal machines have a lower probability of large counts compared to attacked machines.

Figure 4: Histograms of daily count of RDP inbound across machines for an example enterprise

Given that some enterprises have machines under brute force attack daily, the priority may be to focus on machines that have been compromised, defined by a first successful sign-in following failed attempts from suspicious source IP addresses or unusual usernames. In Windows logs, Event ID 4624 can be leveraged to measure successful sign-in events for local machine in combination with failed sign-ins (Event ID 4625).

Out of the hundreds of machines with RDP brute force attacks detected in our analysis, we found that about .08% were compromised. Furthermore, across all enterprises analyzed over several months, on average about 1 machine was detected with high probability of being compromised as the result of an RDP brute force attack every 3-4 days. Figure 5 shows a bubble chart of the average abuse score of external IPs associated with RDP brute force attacks that successfully compromised machines. The size of the bubbles is determined by the count of distinct machines across the enterprises analyzed having a network connection from each IP. While there is diversity in the origin of the source IPs, the Netherlands, Russia, and the United Kingdom have a larger concentration of inbound RDP connections from high-abuse IPs.

Figure 5: Bubble chart of IP abuse score versus counts of machine with inbound RDP

A key takeaway from our analysis is that successful brute force attempts are not uncommon; therefore, it’s critical to monitor at least the suspicious connections and unusual failed sign-ins that result in authenticated sign-in events. In the following sections we describe a methodology to do this. This methodology was leveraged by Microsoft Threat Experts to augment threat hunting and resulted in new targeted attack notifications.

Combining many relevant signals

As discussed earlier (with the example of scripts connecting via RDP using outdated passwords yielding failed sign-ins), simply relying on thresholding failed attempts per machine for detecting brute force attacks can be noisy and may result in many false positives. A better strategy is to utilize many contextually relevant signals, such as:

  • the timing, type, and count of failed sign-in
  • username history
  • type and frequency of network connections
  • first-time username from a new source machine with a successful sign-in

This can be even further extended to include indicators of attack associated with brute force, such as port scanning.

Combining multiple signals along the attack chain has been proposed and shown promising results [2]. We considered the following signals in detecting RDP inbound brute force attacks per machine:

  • hour of day and day of week of failed sign-in and RDP connections
  • timing of successful sign-in following failed attempts
  • Event ID 4625 login type (filtered to network and remote interactive)
  • Event ID 4625 failure reason (filtered to %%2308, %%2312, %%2313)
  • cumulative count of distinct usernames that failed to sign in
  • count (and cumulative count) of failed sign-ins
  • count (and cumulative count) of RDP inbound external IP
  • count of other machines having RDP inbound connections from one or more of the same IP

Unsupervised probabilistic time series anomaly detection

For many cybersecurity problems, including detecting brute force attacks, previously labeled data is not usually available. Thus, training a supervised learning model is not feasible. This is where unsupervised learning is helpful, enabling one to discover and quantify unknown behaviors when examples are too sparse. Given that several of the signals we consider for modeling RDP brute force attacks are inherently dependent on values observed over time (for example, daily counts of failed sign-ins and counts of inbound connections), time series models are particularly beneficial. Specifically, time series anomaly detection naturally provides a logical framework to quantify uncertainty in modeling temporal changes in data and produce probabilities that then can be ranked and thresholded to control a desirable false positive rate.

Time series anomaly detection captures the temporal dynamics of signals and accurately quantifies the probability of observing values at any point in time under normal operating conditions. More formally, if we introduce the notation Y(t) to denote the signals taking on values at time t, then we build a model to compute reliable estimates of the probability of Y(t) exceeding observed values given all known and relevant information, represented by P[y(t)], sometimes called an anomaly score. Given a false positive tolerance rate r (e.g., .1% or 1 out of 10,000 per time), for each time t, values y*(t) satisfying P[y*(t)] < r would be detected as anomalous. Assuming the right signals reflecting the relevant behaviors of the type of attacks are chosen, then the idea is simple: the lowest anomaly scores occurring per time will be likely associated with the highest likelihood of real threats.

For example, looking back at Figure 2, the time series of daily count of failed sign-ins occurring on the brute force attack day 8/4/2019 had extreme values that would be associated with an empirical probability of about .03% out of all machine and days with at least 1 failed network sign-in for the enterprise.
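
A minimal sketch of this scoring idea, with hypothetical history and tolerance (the production model is the hierarchical time series model described in the Appendix, not this empirical shortcut):

    def right_tail_pvalue(history, value):
        """Empirical estimate of P[Y >= value] from past observations."""
        exceed = sum(1 for y in history if y >= value)
        # Add-one smoothing so an unprecedented value still receives a
        # small nonzero probability rather than exactly zero.
        return (exceed + 1) / (len(history) + 1)

    # Hypothetical daily failed sign-in counts for one machine (repeated
    # purely to simulate a long history).
    history = [0, 0, 1, 0, 2, 0, 0, 3, 1, 0, 0, 2, 0, 1] * 80

    r = 0.001    # tolerated false positive rate
    today = 60   # today's observed count

    p = right_tail_pvalue(history, today)
    if p < r:
        print(f"anomalous: P[Y >= {today}] ~= {p:.5f} < {r}")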

As discussed earlier, applying anomaly detection to 1 or a few signals to detect real attacks can yield too many false positives. To mitigate this, we combined anomaly scores across eight signals we selected to model RDP brute force attack patterns. The details of our solution are included in the Appendix, but in summary, our methodology involves:

  • updating statistical discrete time series models sequentially for each signal, capturing time of day, day of week, and both point and cumulative effects
  • combining anomaly scores using an approach that yields accurate probability estimates, and
  • ranking the top N anomalies per day to control a desired number of false positives

Our approach to time series anomaly detection is computationally efficient and automatically learns how to update probabilities and adapt to changes in the data.

As we describe in the next section, this approach has yielded successful attack detection at high precision.

Protecting customers from real-world RDP brute force attacks through Microsoft Threat Experts

The proposed time series anomaly detection model was deployed and utilized by Microsoft Threat Experts to detect RDP brute force attacks during threat hunting activities. A list that ranks machines across enterprises with the lowest anomaly scores (indicating the likelihood of observing a value at least as large under expected conditions in all signals considered) is updated and reviewed every day. See Table 1 for an example.

Table 1: Sample ranking of detected RDP inbound brute force attacks

For each machine with a detected probable brute force attack, each instance is assigned TP, FP, or unknown. Each TP is then assigned a priority based on the severity of the attack. For high-priority TPs, a targeted attack notification is sent to the associated organization with details about the active brute force attack and recommendations for mitigating the threat; otherwise the machine is closely monitored until more information is available.

We also added an extra capability to our anomaly detection: automatically sending targeted attack notifications about RDP brute force attacks, in many cases before the attack succeeds or before the actor is able to conduct further malicious activities. Looking at the most recent sample of about two weeks of graded detections, the average precision per day (i.e., true positive rate) is approximately 93.7% at a conservative false positive rate of 1%.

In conclusion, based on our careful selection of signals found to be highly associated with RDP brute force attacks, we demonstrated that proper application of time series anomaly detection can be very accurate in identifying real threats. We have filed a patent application for this probabilistic time series model for detecting RDP inbound brute force attacks. In addition, we are working on integrating this capability into Microsoft Defender ATP’s endpoint and detection response capabilities so that the detection logic can raise alerts on RDP brute force attacks in real-time.

Monitoring suspicious activity in failed sign-in and network connections should be taken seriously—a real-time anomaly detection capable of self-updating with the changing dynamics in a network can indeed provide a sustainable solution. While Microsoft Defender ATP already has many anomaly detection capabilities integrated into its EDR capabilities, we will continue to enhance these detections to cover more security scenarios. Through data science, we will continue to combine robust statistical and machine learning approaches with threat expertise and intelligence to deliver industry-leading protection to our customers.

 

 

Cole Sodja, Justin Carroll, Joshua Neil
Microsoft Defender ATP Research Team

 

 

Appendix 1: Models formulation

We utilize hierarchical zero-adjusted negative binomial dynamic models to capture the characteristics of the highly discrete count time series. Specifically, as shown in Figure 2, it’s expected that most of the time there won’t be failed sign-ins for valid credentials on a local machine; hence, there are excess zeros that would not be explained by standard probability distributions such as the negative binomial. In addition, the variance of non-zero counts is often much larger than the mean, where for example, valid scripts connecting via RDP can generate counts in the 20s or more over several minutes because of an outdated password. Moreover, given a combination of multiple users or scripts connecting to shared machines at the same time, this can generate more extreme counts at higher quantiles resulting in heavier tails, as seen in Figure 6.

Figure 6: Daily count of network failed sign-in for a machine with no brute force attack

Parametric discrete location/scale distributions do not generate well-calibrated p-values for rare time series, as seen in Figure 6, and thus, if used to detect anomalies, can result in too many FPs when looking across many machines at high time frequencies. To overcome this challenge with the sparse time series of counts of failed sign-ins and RDP inbound public connections, we specify a mixture model, where, based on our analysis, a zero-inflated two-component negative binomial distribution was adequate.

Our formulation is based on thresholding values that determine when to transition to a distribution with larger location and/or scale as given in Equation 1. Hierarchical priors are given from empirical estimates of the sample moments across machines using about 1 month of data.

Equation 1: Zero-adjusted negative binomial threshold model

Negative binomial distribution (NB):
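
For reference, the standard negative binomial pmf in the size–probability parameterization is

    P(Y = y) = \binom{y + r - 1}{y} \, p^{r} (1 - p)^{y}, \qquad y = 0, 1, 2, \ldots

and a generic zero-adjusted two-component NB mixture of the kind described above takes the form

    P(Y = y) = \pi \, \mathbf{1}\{y = 0\} + (1 - \pi) \left[ w \, \mathrm{NB}(y; r_1, p_1) + (1 - w) \, \mathrm{NB}(y; r_2, p_2) \right].

The threshold values governing the transition between components and the hierarchical priors are those of Equation 1 and specific to the deployed model; the forms above sketch only the distributional family.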

To our knowledge, this formulation does not yield a conjugate prior, and so directly computing probabilities from the posterior predictive density is not feasible. Instead, anomaly scores are generated based on drawing samples from all distributions and then computing the empirical right-tail p-value.

Updating parameters is done by applying exponential smoothing. To avoid outliers skewing estimates, such as machines under brute force or other attacks, trimming is applied to samples from the distribution at a specified false positive rate, which was set to .1% for our study. Algorithm 1 outlines the logic.

The smoothing parameters were learned based on maximum likelihood estimation and then fixed during each new sequential update. To induce further uncertainty, bootstrapping across machines is done to produce a histogram of smoothing weights, and samples are drawn in accordance to their frequency. We found that weights concentrated away from 0 vary between .06% and 8% for over 90% of machines, thus leading to slow changes in the parameters. An extension using adaptive forgetting factors will be considered in future work to automatically learn how to correct smoothing in real time.

Algorithm 1: Updating model parameters in real time
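
A minimal Python sketch of the smoothing-with-trimming update described above (the smoothing weight and trim cap are illustrative, not the model’s learned values):

    def smoothed_update(prev_mean, observation, weight=0.02, cap=None):
        # One exponential-smoothing step for a running location estimate.
        # cap trims extreme observations, such as counts generated during
        # an active brute force attack, so they don't skew the estimate.
        if cap is not None:
            observation = min(observation, cap)
        return (1 - weight) * prev_mean + weight * observation

    # Illustrative use: a burst of 500 failed sign-ins barely moves the
    # estimate when trimmed at a hypothetical .1% right-tail value of 25.
    mean = 2.0
    mean = smoothed_update(mean, 500, weight=0.02, cap=25)
    print(round(mean, 2))  # 2.46 with trimming, versus 11.96 without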

Appendix 2: Fisher Combination

For a given device, a score is computed for each signal that exists, defined as a p-value, where lower values are associated with a higher likelihood of being an anomaly. The p-values are then combined to yield a joint score across all signals using the Fisher p-value combination method as follows:
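
    X = -2 \sum_{i=1}^{k} \ln(p_i)

Under the null hypothesis of no anomaly, X follows a \chi^2 distribution with 2k degrees of freedom, so the joint score is the right-tail probability P(\chi^2_{2k} \geq X).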

The use of Fisher’s test applied to anomaly scores produces a scalable solution that yields interpretable probabilities that can thus be controlled to achieve a desired false positive rate. This approach has previously been applied in a cybersecurity context [3].
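
A minimal Python sketch of the combination step (the per-signal p-values are hypothetical). It exploits the fact that for even degrees of freedom 2k, the chi-square survival function has the closed form exp(-x/2) \sum_{j=0}^{k-1} (x/2)^j / j!:

    import math

    def fisher_combine(pvalues):
        # Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
        # distribution with 2k degrees of freedom under the null; return
        # the joint right-tail probability.
        k = len(pvalues)
        x = -2.0 * sum(math.log(p) for p in pvalues)
        half = x / 2.0
        return math.exp(-half) * sum(
            half ** j / math.factorial(j) for j in range(k)
        )

    # Hypothetical per-signal anomaly scores for one machine on one day
    # (failed sign-ins, distinct usernames, inbound connections, etc.).
    signal_pvalues = [0.004, 0.02, 0.15, 0.009]
    print(fisher_combine(signal_pvalues))  # small joint score ranks high

Machines are then ranked by this joint score, and the top N per day are surfaced for review, which is how the desired number of false positives is controlled.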

 

 

[1] Najafabadi et al, Machine Learning for Detecting Brute Force Attacks at the Network Level, 2014 IEEE 14th International Conference on Bioinformatics and Bioengineering
[2] Sexton et al, Attack chain detection, Statistical Analysis and Data Mining, 2015
[3] Heard, Combining Weak Statistical Evidence in Cyber Security, Intelligent Data Analysis XIV, 2015

The post Data science for cybersecurity: A probabilistic time series model for detecting RDP inbound brute force attacks appeared first on Microsoft Security.