Archive for the ‘Cybersecurity deployment’ Category

Microsoft Security: Use baseline default tools to accelerate your security career

September 14th, 2020

I wrote a series of blogs last year on how gamified learning through cyber ranges can create more realistic and impactful cybersecurity learning experiences and help attract tomorrow’s security workforce. With the global talent shortage, we need to work harder to bring people into the field. This blog is for new cyber professionals, or perhaps younger aspirants considering getting into cyber. From an employee’s perspective, it can seem daunting to know where to start, especially when you’re entering an organization with established technology investments, priorities, and practices. Having come to this field later in my career than others, I say from experience that we collectively need to do a better job of providing realistic and interesting role-based learning, paths toward the right certifications and endorsements, and more definitive opportunities to advance one’s career.

I’m still a big fan of gamified learning, but if gaming isn’t your thing, another way to acquire important baseline knowledge is to look at simpler, more proactive management tools that up-level different tasks and make your work more efficient. Microsoft has recently released two important cloud security posture management tools that can help a newer employee quickly grasp basic yet critically important security concepts AND show immediate value to your employer. They’re intuitive to learn and deserve more attention. I’m talking about Azure Security Defaults and Microsoft Secure Score (including Azure Secure Score). Names like these don’t exactly roll off the tongue, and the experience won’t grab you like an immersive gaming UI, but their purpose-built focus on commonly accepted cyber hygiene best practices reinforces foundations that are no less important than SecOps, incident response, or forensics and hunting. Learning how to use these tools can make you a champion and influencer, and we encourage you to learn more below. These capabilities are also built directly into our larger Azure and Microsoft 365 services, so by using built-in tools, you’ll help your organization maximize its investments in our technologies, save money, and reduce complexity in your environment.

Azure Security Defaults is named for what it does: setting often overlooked defaults. With one click, you automatically enable several foundational security controls that, if left unconfigured, give attackers convenient and time-tested ways into your organization. One question I frequently receive is why Microsoft doesn’t simply pre-configure these settings by default and make customers opt out. Several large, high-threat customers have asked specifically that we do that. It’s tempting, but until or unless we make such a move, this is a great self-service add-on. As explained in this blog, ASD does the following:

  • Requires all users to register for Azure Multi-Factor Authentication.
  • Requires admins to perform MFA.
  • Blocks legacy authentication protocols.
  • Requires users to perform MFA when necessary.
  • Protects privileged activities like access to the Azure portal.
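If you want to verify the state of these defaults programmatically, the policy behind Security Defaults is exposed through the Microsoft Graph API (GET /policies/identitySecurityDefaultsEnforcementPolicy). Below is a minimal sketch in Python; it assumes you already hold a valid OAuth 2.0 access token with the Policy.Read.All permission, and it is illustrative rather than official guidance:

```python
# Sketch: read the tenant's Security Defaults policy via Microsoft Graph.
# Assumes a valid OAuth 2.0 token with the Policy.Read.All permission.
import json
import urllib.request

GRAPH_URL = ("https://graph.microsoft.com/v1.0/"
             "policies/identitySecurityDefaultsEnforcementPolicy")

def build_request(access_token: str) -> urllib.request.Request:
    """Build the GET request that reads the Security Defaults policy."""
    return urllib.request.Request(
        GRAPH_URL,
        headers={"Authorization": f"Bearer {access_token}"},
    )

def security_defaults_enabled(response_body: str) -> bool:
    """The policy object's 'isEnabled' property reports whether
    Security Defaults are turned on for the tenant."""
    return bool(json.loads(response_body).get("isEnabled", False))
```

Calling urllib.request.urlopen on the built request returns the policy object; toggling Security Defaults is a PATCH against the same endpoint.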

A recent important addition: Microsoft announced on August 12th that ASD is now also available through Azure Security Center. This is beneficial because it gives another part of your IT organization, whether identity and access management or security operations, the opportunity to implement the defaults. I’ve noticed on several occasions, when briefing a CISO team or providing a demo of Azure Security Center, that a challenge in effectively using this service may come down to organizational issues; specifically, who OWNS it? Is ASC a CISO tool? Regardless of who owns the responsibility, we want to provide the capability upfront.

Microsoft Secure Score is a relatively new feature designed to quantify your security posture based on how you configure your Microsoft resources. What’s cool and impactful about it is that it shows, in a convenient top-down menu, how your organization’s posture compares (anonymously) with your industry segment’s peers (who in many cases share similar reference architectures), and it provides clear recommendations for what you can do to improve your score. From a Microsoft perspective, this is, as we’d say, all carrot and no stick. Though we provide Azure Security Defaults as covered above, customers still make the proactive decision to implement controls based on their particular work culture, compliance requirements, priorities, and business needs. Take a look at how it works:

This convenient landing page provides an all-up view into the current state of your organization’s security posture, with specific recommendations to improve certain configuration settings based on the art of the possible. In this demo example, if you were to enable every security control at its highest level, your score would be 124, as opposed to the current score of 32, for a percentage of 25.81. Looking to the right of the screen, you get a sense of how you compare against peer organizations. You can further break down your score by categories such as identity, data, device, apps, and infrastructure; this in turn gives a security or compliance team the opportunity to collaborate with the hands-on teams that control those specific resources, teams that might be operating in silos and not necessarily focused on the security posture of their counterparts.
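The percentage shown is simply achieved points over maximum achievable points; a few lines make the arithmetic explicit:

```python
# The Secure Score percentage is achieved points / maximum points.
def secure_score_percentage(current: int, maximum: int) -> float:
    """Return the score as a percentage, rounded to two decimals."""
    return round(100 * current / maximum, 2)

# The demo tenant above: 32 points out of a possible 124.
print(secure_score_percentage(32, 124))  # 25.81
```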

An image of Microsoft Secure Score.

 

Azure Secure Score

You’ll also find Secure Score in the Azure Security Center blade where it provides recommendations front and center, and a color-coded circular graph on important hybrid infrastructure configurations and hygiene.

An image of Secure Score in the Azure Security Center.

Drilling deeper, here we see a variety of recommendations to address specific findings. For example, the top line item is advice to ‘remediate vulnerabilities’, indicating that 35 of the 59 resources that ASC is monitoring are in some way not optimized for security.

An image of variety of recommendations to address specific findings.

Going a level further into the ‘secure management ports’ finding, we see a sub-heading list of actions you can take specific to these resources’ settings. Fortunately, in this case, the administrator has addressed previously-discovered findings, leaving just three to-do’s under the third subheading. For added convenience, the red/green color-coding on the far right draws your attention.

An image of the ‘secure management ports’ finding.

Clicking on the third item above shows you a description of what ASC has found, along with remediation steps. You have two options to remediate: more broadly, enable and require ‘just in time’ (JIT) VM access; or manually enable JIT for each resource. Again, Microsoft wants to incentivize and make it easier for your organization to take more holistic and proactive steps across your resources, such as enabling important settings by default; but we in no way penalize you for the security settings that you implement.

An image of a description of what ASC has found, along with remediation steps.

To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Microsoft Security: Use baseline default tools to accelerate your security career appeared first on Microsoft Security.

Microsoft Security: What cybersecurity skills do I need to become a CISO?

August 31st, 2020

Build the business skills you need to advance to Chief Information Security Officer

For many cybersecurity professionals, the ultimate career goal is to land a chief information security officer (CISO) job. A CISO is an executive-level position responsible for cyber risk management and operations. But cybersecurity is transforming. Today, a good CISO also must have strong communication skills and a deep understanding of the business. To gain the necessary experience to be considered for a CISO job, you need to understand how the role is evolving and the skills required to excel.

Long before I became a Security Advisor at Microsoft, I started my career as an IT system administrator. Over time I learned security, worked my way up to CISO, and have served as a CISO in a variety of companies and industries. I’ve mentored several people interested in accelerating their careers in cybersecurity, and one of the biggest mistakes you can make in an IT and security career is ignoring businesspeople. The more you advance, the more you will need to understand and work with the business. In this blog, I’ll provide tips to help you get more comfortable in that role.

From technologist and guardian to strategist and advisor

As organizations digitize their products, services, and operations to take advantage of the cloud, their ability to effectively leverage technology has become integral to their success. It has also created more opportunities for cybercriminals. Companies of all sizes have been forced to pay fines, suffered reputational harm, and expended significant resources recovering from an attack. A cyber incident isn’t just a technology risk; it’s a business risk. When making decisions, boards and executive teams now need to evaluate the likelihood of a data breach in addition to financial loss or operational risks. A good CISO helps them do this.

According to research by Deloitte, there are four facets of a CISO: the technologist, the guardian, the strategist, and the advisor. You are probably already familiar with the technologist and guardian roles. As a technologist, the CISO is responsible for guiding the deployment and management of security technology and standards. In the guardian role, the CISO monitors and adjusts programs and controls to continuously improve security.

But technical controls and standards will not eliminate cyberattacks and the CISO does not have control over all the decisions that increase the likelihood of a breach. Therefore the roles of strategist and advisor have taken on greater importance. As a strategist, the CISO needs to align security with business strategy to determine how security investments can bring value to the organization. As an advisor, the CISO helps business owners and the executive team understand cybersecurity risks so that they can make informed decisions. To excel at these roles, it’s important to get knowledgeable about the business, understand risk management, and improve your communication skills.

A graphic showing how to understand risk management, and improve your communication skills.

Acquiring the skills to become a good strategist and advisor

If you are already in the cybersecurity profession and interested in growing into a CISO role, you are probably most comfortable with the technologist and guardian roles. You can elevate your technical skills by getting experience and certifications in a variety of areas, so that you understand threat analysis, threat hunting, compliance, ethical hacking, and system auditing; but also find time to work on the following leadership skills.

  • Understand the business: The most important step you can take to prepare yourself for an executive-level role is to learn to think like a businessperson. Who are your customers? What are the big opportunities and challenges in your industry? What makes your company unique? What are its weaknesses? What business strategies drive your organization? Pay attention to corporate communications and annual reports to discover what leadership prioritizes and why they have made certain decisions. Read articles about your industry to get a broader perspective about the business environment and how your company fits in. This research will help you make smarter decisions about how to allocate limited resources to protect company assets. It will also help you frame your arguments in a way the business can hear. For example, if you want to convince your organization to upgrade the firewall, they will be more convinced if you can explain how a security incident will affect the company’s relationship with customers or investors.
  • Learn risk management: Smart companies routinely take strategic risks to advance their goals. Businesses seize opportunities to launch new products or acquire a competitor that will make them more valuable in the market. But these decisions can result in failure or huge losses. They can also put the company at risk of a cyberattack. Risk management is a discipline that seeks to understand the upsides and downsides of action and eliminate or mitigate risks if possible. By comparing the likelihood of various options, the return on investment if the venture is successful, and the potential loss if it fails, managers can make informed decisions. CISOs help identify and quantify the cybersecurity risks that should be considered alongside financial and operational risks.
  • Improve your communication skills: To be a good advisor and strategist, you will need to communicate effectively with people with a variety of agendas and backgrounds. One day you’ll need to coach a very technical member of your team, the next you may need to participate in a business decision at the executive level or even be asked to present to the board of directors. A communication plan can help you refine your messages for your audience. To begin practicing these skills now, try to understand the goals of the people you talk to on a regular basis. What are their obstacles? Can you frame security communications in terms that will help them overcome those challenges? Take a moment to put yourself in someone else’s shoes before meetings, hallway conversations, emails, and chats. It can make a real difference!

A good communication plan delivers targeted security messages: A chart showing a good communication plan.
In recent years, the role of the CISO has been elevated to that of a senior executive the board counts on for strategic security advice. In fact, we should rename the position Chief Influencer Security Officer! Building leadership skills like risk management and communication will help you step into this increasingly important role.

As you embark on the career journey toward CISO, it is always good to get perspective from other CISOs in the industry and the lessons they have learned. Please feel free to listen to the podcast on my journey from system administrator to CISO, and watch our CISO Spotlight episodes, where our Microsoft CISO talks about how to present to the board of directors, along with other tips and lessons learned.

To learn more about Microsoft Security solutions visit our website.  Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

 

The post Microsoft Security: What cybersecurity skills do I need to become a CISO? appeared first on Microsoft Security.

Feeling fatigued? Cloud-based SIEM relieves security team burnout

June 24th, 2020

Most CISOs and CSOs are worried that a growing volume of alerts is causing burnout among their teams, according to new research from IDG. You can learn about additional challenges to security operations teams by reading the IDG report SIEM Shift: How the Cloud is Transforming Security Operations.

In terms of SIEM-related challenges, 42 percent of respondents cited alert fatigue, second only to capacity issues (45 percent). Perhaps more worrisome is the fallout from dealing with voluminous alerts, including longer response times, more requests for additional staffing, and missed threats.

“There are admittedly a lot of dead ends that are being chased,” said the senior principal architect from a financial services firm. “You don’t want to ignore things by clicking them off and I’ve seen that people do that.”

Yet, there’s also evidence that companies with cloud-based SIEM solutions like Azure Sentinel, a cloud-native SIEM that leverages artificial intelligence (AI) and threat intelligence based on decades of Microsoft security experience, are less likely to feel these pains than their on-premises counterparts.

The effects of alert fatigue on IT staff

An image of the effects of alert fatigue on IT staff.
In fact, the CISO of an electronics company cited improved alert management as among the primary motivations for shifting to cloud-based SIEM.

“Common drivers were lack of internal knowledge, overall data volumes, and the need to have correlated, aggregated alerts that boil up to what are the most important things we should be looking at,” he said. “Simply said, we needed a single pane of glass.”

Higher levels of intelligence

Aggregation and correlation with a cloud SIEM solution allow organizations to become more proactive with their security strategies.

“We gained a lot [in terms of] the event aggregation, consolidation, and risk rating of events,” said the CISO, adding that threat correlation enabled a whole new level of SOC intelligence so they could get ahead of triage work.

Another way of putting it: “Aggregated intelligence,” according to the head of architecture, security, and privacy for a digital services provider. He suggests companies can only gain deep analysis of threats and vulnerabilities with the cloud.

“You need the cloud version because the vast amount of data that is required is only available, stored, and processed in the cloud,” he said. “If it’s onsite, you can hit the most targeted use cases, but you cannot have that aggregated intelligence that will help you prevent really big, incremental strategic attacks.”

Furthermore, SIEM solutions born in the cloud take advantage of native integrations to speed these correlations. In addition, they often use automation and AI and machine learning technology to power real-time threat detection, protection, and response—reducing alert fatigue and freeing up security teams for more strategic work.

“Babysitting an on-prem SIEM and addressing the myriad of alerts that it generates is a very tactical issue,” said Bob Bragdon, Senior Vice President and Publisher, CSO.

“One of the challenges that security organizations face is getting actionable intelligence out of all their security investments,” Bragdon said. “With a move to a cloud-based SIEM, enterprises can redirect resources that were invested to support an on-prem SIEM to other more strategic or higher-priority tasks.”

Learn about other areas where on-premises and cloud-based SIEM like Azure Sentinel measure up by reading the IDG report SIEM Shift: How the Cloud is Transforming Security Operations.

An image of a report titled "SIEM Shift: How the Cloud is Transforming Security Operations."

For more information about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on Twitter (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Feeling fatigued? Cloud-based SIEM relieves security team burnout appeared first on Microsoft Security.

Mitigating vulnerabilities in endpoint network stacks

May 4th, 2020

The skyrocketing demand for tools that enable real-time collaboration, remote desktops for accessing company information, and other services that enable remote work underlines the tremendous importance of building and shipping secure products and services. While this is magnified as organizations are forced to adapt to the new environment created by the global crisis, it’s not a new imperative. Microsoft has been investing heavily in security, and over the years our commitment to building proactive security into products and services has only intensified.

To help deliver on this commitment, we continuously find ways to improve and secure Microsoft products. One aspect of our proactive security work is finding vulnerabilities and fixing them before they can be exploited. Our strategy is to take a holistic approach and drive security throughout the engineering lifecycle. We do this by:

  • Building security early into the design of features.
  • Developing tools and processes that proactively find vulnerabilities in code.
  • Introducing mitigations into Windows that make bugs significantly harder to exploit.
  • Having our world-class penetration testing team test the security boundaries of the product so we can fix issues before they can impact customers.

This proactive work ensures we are continuously making Windows safer and finding as many issues as possible before attackers can take advantage of them. In this blog post we will discuss a recent vulnerability that we proactively found and fixed and provide details on tools and techniques we used, including a new set of tools that we built internally at Microsoft. Our penetration testing team is constantly testing the security boundaries of the product to make it more secure, and we are always developing tools that help them scale and be more effective based on the evolving threat landscape. Our investment in fuzzing is the cornerstone of our work, and we are constantly innovating this tech to keep on breaking new ground.

Proactive security to prevent the next WannaCry

In the past few years, much of our team’s efforts have been focused on uncovering remote network vulnerabilities and preventing events like the WannaCry and NotPetya outbreaks. Some bugs we have recently found and fixed include critical vulnerabilities that could be leveraged to exploit common secure remote communication tools like RDP or create ransomware issues like WannaCry: CVE-2019-1181 and CVE-2019-1182 dubbed “DejaBlue“, CVE-2019-1226 (RCE in RDP Server), CVE-2020-0611 (RCE in RDP Client), and CVE-2019-0787 (RCE in RDP client), among others.

One of the biggest challenges we regularly face in these efforts is the sheer volume of code we analyze. Windows is enormous and continuously evolving: 5.7 million source code files, more than 3,500 developers, and 1,100 pull requests per day across 440 official branches. This rapid cadence and evolution allows us to add new features as well as proactively drive security into Windows.

Like many security teams, we frequently turn to fuzzing to help us quickly explore and assess large codebases. Innovations we’ve made in our fuzzing technology have made it possible to get deeper coverage than ever before, resulting in the discovery of new bugs, faster. One such bug is the remote code execution (RCE) vulnerability in Microsoft Server Message Block version 3 (SMBv3) tracked as CVE-2020-0796 and fixed on March 12, 2020.

In the following sections, we will share the tools and techniques we used to fuzz SMB, the root cause of the RCE vulnerability, and relevant mitigations to exploitation.

Fully deterministic person-in-the-middle fuzzing

We use a custom deterministic full system emulator tool we call “TKO” to fuzz and introspect Windows components. TKO provides the capability to perform full system emulation and memory snapshotting, among other innovations. As a result of its unique design, TKO provides several unique benefits for SMB network fuzzing:

  • The ability to snapshot and fuzz forward from any program state.
  • Efficiently restoring to the initial state for fast iteration.
  • Collecting complete code coverage across all processes.
  • Leveraging greater introspection into the system without too much perturbation.

While all of these actions are possible using other tools, our ability to seamlessly leverage them across both user and kernel mode drastically reduces the spin-up time for targets. To learn more, check out David Weston’s recent BlueHat IL presentation “Keeping Windows secure”, which touches on fuzzing, as well as the TKO tool and infrastructure.

Fuzzing SMB

Given the ubiquity of SMB and the impact demonstrated by SMB bugs in the past, assessing this network transfer protocol has been a priority for our team. While there have been past audits and fuzzers thrown against the SMB codebase, some of which postdate the current SMB version, TKO’s new capabilities and functionalities made it worthwhile to revisit the codebase. Additionally, even though the SMB version number has remained static, the code has not! These factors played into our decision to assess the SMB client/server stack.

After performing an initial audit pass of the code to understand its structure and dataflow, as well as to get a grasp of the size of the protocol’s state space, we had the information we needed to start fuzzing.

We used TKO to set up a fully deterministic feedback-based fuzzer with a combination of generated and mutated SMB protocol traffic. Our goal for generating or mutating across multiple packets was to dig deeper into the protocol’s state machine. Normally this would introduce difficulties in reproducing any issues found; however, our use of emulators made this a non-issue. New generated or mutated inputs that triggered new coverage were saved to the input corpus. Our team had a number of basic mutator libraries for different scenarios, but we needed to implement a generator. Additionally, we enabled some of the traditional Windows heap instrumentation using verifier, turning on page heap for SMB-related drivers.

We began work on the SMBv2 protocol generator and took a network capture of an SMB negotiation with the aim of replaying these packets with mutations against a Windows 10, version 1903 client. We added a mutator with basic mutations (e.g., bit flips, insertions, deletions, etc.) to our fuzzer and kicked off an initial run while we continued to improve and develop further.
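TKO and our mutator libraries are internal, but the basic mutations mentioned above (bit flips, insertions, deletions) are easy to picture. A toy sketch over a byte buffer, purely for illustration:

```python
# Toy illustration of the basic mutations used against SMB packets:
# bit flips, byte insertions, and byte deletions.
import random

def mutate(packet: bytes, rng: random.Random) -> bytes:
    buf = bytearray(packet)
    op = rng.choice(["bitflip", "insert", "delete"])
    if op == "bitflip" and buf:
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)      # flip a single bit
    elif op == "insert":
        i = rng.randrange(len(buf) + 1)
        buf.insert(i, rng.randrange(256))    # splice in a random byte
    elif op == "delete" and len(buf) > 1:
        del buf[rng.randrange(len(buf))]     # drop a byte
    return bytes(buf)
```

In a feedback-driven setup like the one described above, any mutated input that triggers new coverage would be saved back into the input corpus for further mutation.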

Figure 1. TKO fuzzing workflow

A short time later, we came back to some compelling results. Replaying the first crashing input with TKO’s kdnet plugin revealed the following stack trace:

> tkofuzz.exe repro inputs\crash_6a492.txt -- kdnet:conn 127.0.0.1:50002

Figure 2. Windbg stack trace of crash

We found an access violation in srv2!Smb2CompressionDecompress.

Finding the root cause of the crash

While the stack trace suggested that a vulnerability exists in the decompression routine, it’s the parsing of length counters and offsets from the network that causes the crash. The last packet in the transaction needed to trigger the crash has ‘\xfcSMB’ set as the first bytes in its header, making it a COMPRESSION_TRANSFORM packet.

Figure 3. COMPRESSION_TRANSFORM packet details

The SMBv2 COMPRESSION_TRANSFORM packet starts with a COMPRESSION_TRANSFORM_HEADER, which defines where in the packet the compressed bytes begin and the length of the compressed buffer.

typedef struct _COMPRESSION_TRANSFORM_HEADER
{
    UCHAR  Protocol[4];   // Contains 0xFC, 'S', 'M', 'B'
    ULONG  OriginalCompressedSegmentSize;
    USHORT AlgorithmId;
    USHORT Flags;
    ULONG  Length;
}
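As a sketch of how those fields come off the wire (SMB2 fields are little-endian), the 16-byte header can be unpacked like this; field names follow the header definition, with the size field being the one the analysis that follows calls OriginalCompressedSegmentSize:

```python
# Sketch parser for the 16-byte COMPRESSION_TRANSFORM_HEADER shown above.
import struct

HEADER_FMT = "<4sIHHI"                    # little-endian, 16 bytes total
HEADER_LEN = struct.calcsize(HEADER_FMT)

def parse_header(data: bytes) -> dict:
    assert len(data) >= HEADER_LEN, "short packet"
    protocol, orig_size, algorithm_id, flags, length = struct.unpack_from(HEADER_FMT, data)
    assert protocol == b"\xfcSMB", "not a COMPRESSION_TRANSFORM packet"
    return {
        "OriginalCompressedSegmentSize": orig_size,
        "AlgorithmId": algorithm_id,
        "Flags": flags,
        "Length": length,
    }
```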

In srv2!Srv2DecompressData, shown in the graph below, we can see this COMPRESSION_TRANSFORM_HEADER struct being parsed out of the network packet and used to determine the pointers passed to srv2!SMBCompressionDecompress.

Figure 4. Srv2DecompressData graph

We can see that at 0x7E94, rax points to our network buffer, and the buffer is copied to the stack before the OriginalCompressedSegmentSize and Length are parsed out and added together at 0x7ED7 to determine the size of the resulting decompressed bytes buffer. Overflowing this value causes the decompression to write its results beyond the bounds of the destination SrvNet buffer, resulting in an out-of-bounds write (OOBW).

Figure 5. Overflow condition
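The arithmetic is easy to model. The Python sketch below mimics the unchecked 32-bit addition described above (the function names are ours, chosen for illustration; srv2.sys does this in C):

```python
ULONG_MAX = 0xFFFFFFFF

def alloc_size_unchecked(original_size: int, length: int) -> int:
    """The vulnerable pattern: a 32-bit sum silently wraps around."""
    return (original_size + length) & ULONG_MAX

def alloc_size_checked(original_size: int, length: int) -> int:
    """The fix: reject sums that do not fit in a ULONG."""
    total = original_size + length
    if total > ULONG_MAX:
        raise OverflowError("OriginalCompressedSegmentSize + Length overflows")
    return total

# Attacker-controlled values wrap to a tiny allocation; decompression then
# writes far past the end of the undersized SrvNet buffer (the OOBW).
tiny = alloc_size_unchecked(0xFFFFFFF0, 0x20)  # 0x10 instead of 0x100000010
```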

Looking further, we can see that the Length field is parsed into esi at 0x7F04, added to the network buffer pointer, and passed to CompressionDecompress as the source pointer. As Length is never checked against the actual number of received bytes, it can cause decompression to read off the end of the received network buffer. Setting this Length to be greater than the packet length also causes the computed source buffer length passed to SmbCompressionDecompress to underflow at 0x7F18, creating an out-of-bounds read (OOBR) vulnerability. Combining this OOBR vulnerability with the previous OOBW vulnerability creates the necessary conditions to leak addresses and create a complete remote code execution exploit.

Figure 6. Underflow condition
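The unsigned subtraction can be modeled the same way. For illustration we assume a 16-byte COMPRESSION_TRANSFORM_HEADER; the helper names are again ours, not srv2.sys symbols:

```python
ULONG_MAX = 0xFFFFFFFF
HEADER_SIZE = 16  # header size assumed for illustration

def source_size_unchecked(received_len: int, length: int) -> int:
    """The vulnerable pattern: an unsigned subtraction that can wrap."""
    return (received_len - HEADER_SIZE - length) & ULONG_MAX

def length_is_valid(received_len: int, length: int) -> bool:
    """The missing bounds check: Length must fit inside the received packet."""
    return HEADER_SIZE + length <= received_len

# A Length larger than the packet wraps the computed source size to ~4 GB,
# so decompression reads far beyond the received network buffer (the OOBR).
huge = source_size_unchecked(received_len=64, length=0x1000)  # 0xFFFFF030
```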

Windows 10 mitigations against remote network vulnerabilities

Our discovery of the SMBv3 vulnerability highlights the importance of revisiting protocol stacks regularly as our tools and techniques continue to improve over time. In addition to the proactive hunting for these types of issues, the investments we made in the last several years to harden Windows 10 through mitigations like address space layout randomization (ASLR), Control Flow Guard (CFG), InitAll, and hypervisor-enforced code integrity (HVCI) hinder trivial exploitation and buy defenders time to patch and protect their networks.

For example, turning vulnerabilities like the ones discovered in SMBv3 into working exploits requires finding writeable kernel pages at reliable addresses, a task that requires heap grooming and corruption, or a separate vulnerability in Windows kernel address space layout randomization (ASLR). Typical heap-based exploits taking advantage of a vulnerability like the one described here would also need to make use of other allocations, but Windows 10 pool hardening helps mitigate this technique. These mitigations work together and have a cumulative effect when combined, increasing the development time and cost of reliable exploitation.

Assuming attackers gain knowledge of our address space, indirect jumps are mitigated by kernel-mode CFG. This forces attackers to either use data-only corruption or bypass Control Flow Guard via stack corruption or yet another bug. If virtualization-based security (VBS) and HVCI are enabled, attackers are further constrained in their ability to map and modify memory permissions.

On Secured-core PCs these mitigations are enabled by default. Secured-core PCs combine virtualization, operating system, and hardware and firmware protection. Along with Microsoft Defender Advanced Threat Protection, Secured-core PCs provide end-to-end protection against advanced threats.

While these mitigations collectively lower the chances of successful exploitation, we continue to deepen our investment in identifying and fixing vulnerabilities before they can get into the hands of adversaries.

 

The post Mitigating vulnerabilities in endpoint network stacks appeared first on Microsoft Security.

Zero Trust Deployment Guide for Microsoft Azure Active Directory

April 30th, 2020 No comments

Microsoft is providing a series of deployment guides for customers who have engaged in a Zero Trust security strategy. In this guide, we cover how to deploy and configure Azure Active Directory (Azure AD) capabilities to support your Zero Trust security strategy.

For simplicity, this document will focus on ideal deployments and configuration. We will call out the integrations that need Microsoft products other than Azure AD and we will note the licensing needed within Azure AD (Premium P1 vs P2), but we will not describe multiple solutions (one with a lower license and one with a higher license).

Azure AD at the heart of your Zero Trust strategy

Azure AD provides critical functionality for your Zero Trust strategy. It enables strong authentication, a point of integration for device security, and the core of your user-centric policies to guarantee least-privileged access. Azure AD’s Conditional Access capabilities are the policy decision point for access to resources based on user identity, environment, device health, and risk—verified explicitly at the point of access. In the following sections, we will showcase how you can implement your Zero Trust strategy with Azure AD.

Establish your identity foundation with Azure AD

A Zero Trust strategy requires that we verify explicitly, use least privileged access principles, and assume breach. Azure Active Directory can act as the policy decision point to enforce your access policies based on insights on the user, device, target resource, and environment. To do this, we need to put Azure Active Directory in the path of every access request—connecting every user and every app or resource through this identity control plane. In addition to productivity gains and improved user experiences from single sign-on (SSO) and consistent policy guardrails, connecting all users and apps provides Azure AD with the signal to make the best possible decisions about the authentication/authorization risk.

  • Connect your users, groups, and devices:
    Maintaining a healthy pipeline of your employees’ identities as well as the necessary security artifacts (groups for authorization and devices for extra access policy controls) puts you in the best place to use consistent identities and controls, which your users already benefit from on-premises and in the cloud:

    1. Start by choosing the right authentication option for your organization. While we strongly prefer to use an authentication method that primarily uses Azure AD (to provide you the best brute force, DDoS, and password spray protection), follow our guidance on making the decision that’s right for your organization and your compliance needs.
    2. Only bring the identities you absolutely need. For example, use going to the cloud as an opportunity to leave behind service accounts that only make sense on-premises; leave on-premises privileged roles behind (more on that under privileged access), etc.
    3. If your enterprise has more than 100,000 users, groups, and devices combined, we recommend you follow our guidance on building a high-performance sync deployment that will keep your identity life cycle up to date.
  • Integrate all your applications with Azure AD:
    As mentioned earlier, SSO is not only a convenient feature for your users; it also improves your security posture, as it prevents users from leaving copies of their credentials in various apps and helps them avoid getting used to surrendering their credentials due to excessive prompting. Make sure you do not have multiple IAM engines in your environment. Running more than one not only diminishes the amount of signal that Azure AD sees and allows bad actors to live in the seams between the two IAM engines, it can also lead to poor user experience and your business partners becoming the first doubters of your Zero Trust strategy. Azure AD supports a variety of ways you can bring apps to authenticate with it:

    1. Integrate modern enterprise applications that speak OAuth2.0 or SAML.
    2. For Kerberos and form-based authentication applications, you can integrate them using Azure AD Application Proxy.
    3. If you publish your legacy applications using application delivery networks/controllers, Azure AD is able to integrate with most of the major ones (such as Citrix, Akamai, F5, etc.).
    4. To help migrate your apps off of existing/older IAM engines, we provide a number of resources—including tools to help you discover and migrate apps off of ADFS.
  • Automate provisioning to applications:
    Once you have your users’ identities in Azure AD, you can now use Azure AD to power pushing those user identities into your various cloud applications. This gives you a tighter identity lifecycle integration within those apps. Use this detailed guide to deploy provisioning into your SaaS applications.
  • Get your logging and reporting in order:
    As you build your estate in Azure AD with authentication, authorization, and provisioning, it's important to have strong operational insights into what is happening in the directory. Follow this guide to learn how to persist and analyze the logs from Azure AD, either in Azure or using a SIEM system of your choice.

Enacting the 1st principle: least privilege

Giving the right access at the right time to only those who need it is at the heart of a Zero Trust philosophy:

  • Plan your Conditional Access deployment:
    Planning your Conditional Access policies in advance and having a set of active and fallback policies is a foundational pillar of your access policy enforcement in a Zero Trust deployment. Take the time to configure your trusted IP locations in your environment. Even if you do not use them in a Conditional Access policy, configuring these IPs informs the risk assessment of Identity Protection. Check out our deployment guidance and best practices for resilient Conditional Access policies.
  • Secure privileged access with privileged identity management:
    With privileged access, you generally take a different track than with standard end users: you typically want to control the devices, conditions, and credentials that users use to access privileged operations/roles. Keep in mind that in a digitally transformed organization, privileged access is not only administrative access, but also application owner or developer access that can change the way your mission-critical apps run and handle data. Check out our detailed guide on how to use Privileged Identity Management (P2) to take control of your privileged identities and secure them.
  • Restrict user consent to applications:
    User consent to applications is a very common way for modern applications to get access to organizational resources. However, we recommend you restrict user consent and manage consent requests to ensure that no unnecessary exposure of your organization’s data to apps occurs. This also means that you need to review prior/existing consent in your organization for any excessive or malicious consent.
  • Manage entitlements (Azure AD Premium P2):
    With applications centrally authenticating and driven from Azure AD, you should streamline your access request, approval, and recertification process to make sure that the right people have the right access and that you have a trail of why users in your organization have the access they have. Using entitlement management, you can create access packages that users can request as they join different teams/projects and that assign them access to the associated resources (applications, SharePoint sites, group memberships). Check out how you can start your first access package. If deploying entitlement management is not possible for your organization at this time, we recommend you at least enable self-service paradigms in your organization by deploying self-service group management and self-service application access.

Enacting the 2nd principle: verify explicitly

Provide Azure AD with a rich set of credentials and controls that it can use to verify the user at all times.

  • Roll out Azure multi-factor authentication (MFA) (P1):
    This is a foundational piece of reducing user session risk. As users appear on new devices and from new locations, being able to respond to an MFA challenge is one of the most direct ways that your users can teach us that these are familiar devices/locations as they move around the world (without having administrators parse individual signals). Check out this deployment guide.
  • Enable Azure AD Hybrid Join or Azure AD Join:
    If you are managing the user's laptop/computer, bring that information into Azure AD and use it to help make better decisions. For example, you may choose to allow rich client access to data (clients that have offline copies on the computer) if you know the user is coming from a machine that your organization controls and manages. If you do not bring this in, you will likely choose to block access from rich clients, which may result in your users working around your security or using shadow IT. Check out our resources for Azure AD Hybrid Join or Azure AD Join.
  • Enable Microsoft Intune for managing your users’ mobile devices (EMS):
    The same can be said about user mobile devices as laptops. The more you know about them (patch level, jailbroken, rooted, etc.) the more you are able to trust or mistrust them and provide a rationale for why you block/allow access. Check out our Intune device enrollment guide to get started.
  • Start rolling out passwordless credentials:
    With Azure AD now supporting FIDO 2.0 and passwordless phone sign-in, you can move the needle on the credentials that your users (especially sensitive/privileged users) are using on a day-to-day basis. These credentials are strong authentication factors that can mitigate risk as well. Our passwordless authentication deployment guide walks you through how to roll out passwordless credentials in your organization.

Enacting the 3rd principle: assume breach

Assume that breaches will happen. Give Azure AD the signals and controls it needs to detect compromise and limit the blast radius when it occurs.

  • Deploy Azure AD Password Protection:
    While enabling other methods to verify users explicitly, you should not forget about weak passwords, password spray, and breach replay attacks. Read this blog to find out why classic complex password policies are not tackling the most prevalent password attacks. Then follow this guidance to enable Azure AD Password Protection for your users, in the cloud first and then on-premises as well.
  • Block legacy authentication:
    One of the most common attack vectors for malicious actors is to use stolen/replayed credentials against legacy protocols, such as SMTP, that cannot perform modern security challenges like MFA. We recommend you block legacy authentication in your organization.
  • Enable identity protection (Azure AD Premium 2):
    Enabling identity protection for your users will provide you with more granular session/user risk signal. You'll be able to investigate risk and confirm compromise or dismiss the signal, which helps the engine better understand what risk looks like in your environment.
  • Enable restricted session to use in access decisions:
    To illustrate, let’s take a look at controls in Exchange Online and SharePoint Online (P1): When a user’s risk is low but they are signing in from an unknown device, you may want to allow them access to critical resources, but not allow them to do things that leave your organization in a non-compliant state. Now you can configure Exchange Online and SharePoint Online to offer the user a restricted session that allows them to read emails or view files, but not download them and save them on an untrusted device. Check out our guides for enabling limited access with SharePoint Online and Exchange Online.
  • Enable Conditional Access integration with Microsoft Cloud App Security (MCAS) (E5):
    Using signals emitted after authentication, and with MCAS proxying requests to applications, you will be able to monitor sessions going to SaaS applications and enforce restrictions. Check out our MCAS and Conditional Access integration guidance and see how this can even be extended to on-premises apps.
  • Enable Microsoft Cloud App Security (MCAS) integration with identity protection (E5):
    Microsoft Cloud App Security is a UEBA product that monitors user behavior inside SaaS and modern applications. This gives Azure AD signal and awareness about what happened to the user after they authenticated and received a token. If the user's pattern starts to look suspicious (e.g., the user starts to download gigabytes of data from OneDrive or send spam emails in Exchange Online), a signal can be fed to Azure AD notifying it that the user seems to be compromised or high risk, so that on the next access request from this user, Azure AD can take the correct action to verify the user or block them. Just enabling MCAS monitoring will enrich the identity protection signal. Check out our integration guidance to get started.
  • Integrate Azure Advanced Threat Protection (ATP) with Microsoft Cloud App Security:
    Once you’ve successfully deployed and configured Azure ATP, enable the integration with Microsoft Cloud App Security to bring on-premises signal into the risk signal we know about the user. This enables Azure AD to know that a user is engaging in risky behavior while accessing on-premises, non-modern resources (like file shares), which can then be factored into the overall user risk to block further access in the cloud. You will be able to see a combined Priority Score for each user at risk to give a holistic view of which ones your SOC should focus on.
  • Enable Microsoft Defender ATP (E5):
    Microsoft Defender ATP allows you to attest to the health of Windows machines, including whether they are undergoing a compromise, and to feed that into mitigating risk at runtime. Whereas domain join gives you a sense of control, Defender ATP allows you to react to a malware attack in near real time by detecting patterns where multiple user devices are hitting untrustworthy sites, and to respond by raising their device/user risk at runtime. See our guidance on configuring Conditional Access in Defender ATP.
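As one concrete illustration of the "block legacy authentication" step above, the sketch below shows roughly what such a Conditional Access policy body looks like when created through the Microsoft Graph conditionalAccessPolicy resource (POST /identity/conditionalAccess/policies). The values are illustrative, and you would typically start in report-only mode:

```python
# Hypothetical policy body; the field names follow the Microsoft Graph
# conditionalAccessPolicy resource, but scope the conditions to your needs.
block_legacy_auth = {
    "displayName": "Block legacy authentication",
    "state": "enabledForReportingButNotEnforced",  # flip to "enabled" after review
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        # "exchangeActiveSync" and "other" cover clients that cannot
        # perform modern authentication challenges.
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```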

Conclusion

We hope the above guides help you deploy the identity pieces central to a successful Zero Trust strategy. Make sure to check out the other deployment guides in the series by following the Microsoft Security blog.

The post Zero Trust Deployment Guide for Microsoft Azure Active Directory appeared first on Microsoft Security.

Defending the power grid against supply chain attacks: Part 3 – Risk management strategies for the utilities industry

April 22nd, 2020 No comments

Over the last fifteen years, attacks against critical infrastructure (Figure 1) have steadily increased in both volume and sophistication. Because of the strategic importance of this industry to national security and economic stability, these organizations are targeted by sophisticated, patient, and well-funded adversaries. Adversaries often target the utility supply chain to insert malware into devices destined for the power grid. As modern infrastructure becomes more reliant on connected devices, the power industry must continue to come together to improve security at every step of the process.


Figure 1: Increased attacks on critical infrastructure

This is the third and final post in the “Defending the power grid against supply chain attacks” series. In the first blog I described the nature of the risk. Last month I outlined how utility suppliers can better secure the devices they manufacture. Today’s advice is directed at the utilities. There are actions you can take as individual companies and as an industry to reduce risk.

Implement operational technology security best practices

According to Verizon’s 2019 Data Breach Investigations Report, 80 percent of hacking-related breaches are the result of weak or compromised passwords. If you haven’t implemented multi-factor authentication (MFA) for all your user accounts, make it a priority. MFA can significantly reduce the likelihood that a user with a stolen password can access your company assets. I also recommend you take these additional steps to protect administrator accounts:

  • Separate administrative accounts from the accounts that IT professionals use to conduct routine business. While administrators are answering emails or conducting other productivity tasks, they may be targeted by a phishing campaign. You don’t want them signed into a privileged account when this happens.
  • Apply just-in-time privileges to your administrator accounts. Just-in-time privileges require that administrators only sign into a privileged account when they need to perform a specific administrative task. These sign-ins go through an approval process and have a time limit. This will reduce the possibility that someone is unnecessarily signed into an administrative account.

 


Figure 2: A “blue” path depicts how a standard user account is used for non-privileged access to resources like email and web browsing and day-to-day work. A “red” path shows how privileged access occurs on a hardened device to reduce the risk of phishing and other web and email attacks. 

  • You also don’t want an occasional security mistake, such as clicking a link when administrators are tired or distracted, to compromise the workstation that has direct access to these critical systems. Set up privileged access workstations for administrative work. A privileged access workstation provides a dedicated operating system with the strongest security controls for sensitive tasks. This protects these activities and accounts from the internet. To encourage administrators to follow security practices, make sure they have easy access to a standard workstation for other, more routine tasks.

The following security best practices will also reduce your risk:

  • Whitelist approved applications. Define the list of software applications and executables that are approved to be on your networks. Block everything else. Your organization should especially target systems that are internet facing, as well as Human-Machine Interface (HMI) systems that play the critical role of managing generation, transmission, or distribution of electricity.
  • Regularly patch software and operating systems. Implement a monthly practice to apply security patches to software on all your systems. This includes applications and operating systems on servers, desktop computers, mobile devices, and network devices (routers, switches, firewalls, etc.), as well as Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices. Attackers frequently target known security vulnerabilities.
  • Protect legacy systems. Segment legacy systems that can no longer be patched by using firewalls to filter out unnecessary traffic. Limit access to only those who need it by using Just-in-Time and Just-Enough-Access principles and requiring MFA. Once you set up these subnets, firewalls, and firewall rules to protect the isolated systems, you must continually audit and test these controls for inadvertent changes, and validate with penetration testing and red teaming to identify rogue bridging endpoints and design/implementation weaknesses.
  • Segment your networks. If you are attacked, it’s important to limit the damage. By segmenting your network, you make it harder for an attacker to compromise more than one critical site. Maintain your corporate network on its own network with limited to no connection to critical sites like generation and transmission networks. Run each generating site on its own network with no connection to other generating sites. This will ensure that should a generating site become compromised, attackers can’t easily traverse to other sites and have a greater impact.
  • Turn off all unnecessary services. Confirm that none of your software has automatically enabled a service you don’t need. You may also discover that there are services running that you no longer use. If the business doesn’t need a service, turn it off.
  • Deploy threat protection solutions. Services like Microsoft Threat Protection help you automatically detect, respond to, and correlate incidents across domains.
  • Implement an incident response plan: When an attack happens, you need to respond quickly to reduce the damage and get your organization back up and running. Refer to Microsoft’s Incident Response Reference Guide for more details.

Speak with one voice

Power grids are interconnected systems of generating plants, wires, transformers, and substations. Regional electrical companies work together to efficiently balance the supply and demand for electricity across the nation. These same organizations have also come together to protect the grid from attack. As an industry, working through organizations like the Edison Electric Institute (EEI), utilities can define security standards and hold manufacturers accountable to those requirements.

It may also be useful to work with the Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Corporation (NERC), or the United States Nuclear Regulatory Commission (U.S. NRC) to better regulate the security requirements of products manufactured for the electrical grid.

Apply extra scrutiny to IoT devices

As you purchase and deploy IoT devices, prioritize security. Be careful about purchasing products from countries that are motivated to infiltrate critical infrastructure. Conduct penetration tests against all new IoT and IIoT devices before you connect them to the network. When you place sensors on the grid, you’ll need to protect them from both cyberattacks and physical attacks. Make them hard to reach and tamper-proof.

Collaborate on solutions

Reducing the risk of a destabilizing power grid attack will require everyone in the utility industry to play a role. By working with manufacturers, trade organizations, and governments, electricity organizations can lead the effort to improve security across the industry. For utilities in the United States, several public-private programs are in place to enhance the utility industry’s capabilities to defend its infrastructure and respond to threats.

Read Part 1 in the series: “Defending the power grid against cyberattacks

Read “Defending the power grid against supply chain attacks: Part 2 – Securing hardware and software

Read how Microsoft Threat Protection can help you better secure your endpoints.

Learn how MSRC developed an incident response plan

Bookmark the Security blog to keep up with our expert coverage on security matters. For more information about our security solutions visit our website. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Defending the power grid against supply chain attacks: Part 3 – Risk management strategies for the utilities industry appeared first on Microsoft Security.

How to avoid getting caught in a “Groundhog Day” loop of security issues

October 8th, 2019 No comments

It’s Cybersecurity Awareness Month, and it made me think about one of my favorite movies, Groundhog Day. Have you ever seen it? Bill Murray is the cynical weatherman, Phil Connors, who gets stuck in an endless loop where he repeats the same day over and over again until he “participates in his own rescue” by becoming a better person.

Sometimes it can feel like we’re caught in our own repetitious loops in cybersecurity—I even did a keynote at RSA APJ on this very topic a few years ago. The good news is that we can get out of the loop. By learning lessons from the past and bringing them forward and applying them to today’s technologies, outcomes can be changed—with “change” being the operative word.

If companies continue to do things the same way—in insecure ways—attackers will come along and BOOM you’re in trouble. You may resolve that breach, but that won’t help in the long run. Unless the source of the problem is determined and changed, just like Phil Connors, you’ll wake up one day and BOOM—you’re attacked again.

How security experts can help organizations protect against cybercrime

We can learn from past mistakes. And to prove it, I’d like to cite a heartening statistic. Ransomware encounters decreased by 60 percent between March 2017 and December 2018. While attackers don’t share the specifics about their choice of approach, when one approach isn’t working, they move to another. After all, it’s a business—in fact, it’s a successful (and criminal) business—bringing in nearly $200 billion in profits each year.1 We do know that ransomware has less of a chance of spreading on fully patched and well-segmented networks, and companies are less likely to pay ransoms when they have up-to-date, clean backups to restore from. In other words, it’s very likely that robust cybersecurity hygiene is an important contributor to the decrease in ransomware encounters. (See Lesson 1: Practice good cybersecurity hygiene below.)

The bad news of course is that attackers began to shift their efforts to crimes like cryptocurrency mining, which hijacks victims’ computing resources to make digital money for the attackers.1 But that’s because cybercriminals are opportunists and they’re always searching for the weakest link.

One of the best ways to thwart cybercrime is to involve security experts before deploying new products and/or services. A decade ago, this wasn’t typically done in many organizations. But with the rise of security awareness as part of the overall corporate risk posture, we’re seeing security involved early on in deployments of modern architectures, container deployments, digital transformations, and DevOps.

When security experts connect the wisdom of the past—such as the importance of protecting data in transit with encryption—to the technology rollouts of today, they can help organizations anticipate what could go wrong. This helps you bake controls and processes into your products and services before deployment. The people who have already learned the lessons you need to know can help so you don’t wake up to the same problems every (well, almost) day. When security experts carry those lessons forward, they can help end your Groundhog Day.

In addition, involving security experts early on doesn’t have to slow things down. They can actually help speed things up and prevent backtracking later in the product development cycle to fix problems missed the first time around.

Security can help anticipate problems and produce solutions before they occur. When Wi-Fi networking was first being deployed in the late 1990s, communications were protected with Wired Equivalent Privacy (WEP). But WEP suffered from significant design problems, such as the initialization vector (IV) being used directly as part of the RC4 encryption key, issues that were already known in the cryptographic community. The result was a lot of WEP crackers and the rapid development of the stronger Wi-Fi Protected Access (WPA) set of protocols. If designers had worked with crypto experts, who had already designed solutions free of these known issues, time, money, and privacy could have been saved.
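The WEP case is also one where the flaw was quantifiable in advance: with only a 24-bit IV, the birthday bound guarantees keystream reuse after a few thousand packets. A back-of-the-envelope calculation (illustrative, assuming uniformly random IVs):

```python
import math

IV_BITS = 24  # WEP's initialization vector size

def packets_until_iv_collision(p: float = 0.5) -> int:
    """Approximate number of packets before some IV repeats with
    probability p, using the standard birthday-bound approximation."""
    n = 2 ** IV_BITS
    return math.ceil(math.sqrt(2 * n * math.log(1 / (1 - p))))

# About 4,800 packets, i.e. seconds of traffic on a busy network, give a
# 50 percent chance that an RC4 keystream is reused.
halfway = packets_until_iv_collision(0.5)
```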

Traditional technology thinks about “use” cases. Security thinks about “misuse” cases. Product people focus on the business and social benefits of a solution. Security people think about the risks and vulnerabilities by asking these questions:

  • What happens if the solutions are attacked or used improperly?
  • How is this product or workload going to behave in a non-perfect environment?
  • Where is your system vulnerable and what happens when it comes under attack?

Security also remembers lessons learned while creating threat models to head off common mistakes from the past.

Rita: I didn’t know you could play like that.

Phil: I’m versatile.

Groundhog Day (1993) starring Bill Murray as Phil and Andie McDowell as Rita. Sony Pictures©

Example: Think about designing a car. Cars are cool because they can go fast—really fast. But if you had some security folks on the team, they’d be thinking about the fact that while going fast can be thrilling—you’re going to have to stop at some point.

Security people are the kind of thinkers who would probably suggest brakes. And they would make sure that those brakes worked in the rain, snow, and on ice just as well as they worked on dry pavement. Furthermore—because security is obsessed (in a good way) with safety—they would be the ones to plan for contingencies, like having a spare tire and jack in the car in case you get a flat tire.

Learning from and planning for known past issues, like the network equivalent of flat tires, is a very important part of secure cyber design. Machine learning can provide intelligence to help avoid repeats of major attacks. For example, machine learning is very useful in detecting and dismantling fileless malware that lives “off the land” like the recent Astaroth campaign.

Top practices inspired by lessons learned by helping organizations be more secure

Thinking about and modeling for the types of problems that have occurred in the past helps keep systems more secure in the future. For example, we take off our shoes in the airport because someone smuggled explosives onto a plane by hiding them in their footwear.

How DO you stop someone who wants to steal, manipulate, or damage the integrity of your data? What can you do to stop them from trying to monetize it and put your company and customers in jeopardy of losing their privacy? I’m glad you asked—here are four lessons that can help your organization be more secure:

Lesson 1: Practice good cybersecurity hygiene—It may not be shiny and new, but cybersecurity hygiene really matters. This is perhaps the most important lesson we can learn from the past—taking steps to ensure the basics are covered can go a very long way for security. That 60 percent decrease in ransomware encounters globally mentioned earlier is most likely due to better cybersecurity hygiene.

Lesson 2: Schedule regular backups—With regular backups (especially cold backups, held offline), you always have an uncompromised version of your data.
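As a sketch of what "regular backups" can look like in practice, here is a minimal Python example (the `make_backup` helper and its file-naming scheme are hypothetical, not part of any Microsoft tooling) that creates a timestamped archive, records a SHA-256 checksum so a restore can be verified against tampering, and rotates out the oldest copies:

```python
import hashlib
import pathlib
import tarfile
import time

def make_backup(src_dir: str, backup_dir: str, keep: int = 7) -> pathlib.Path:
    """Create a timestamped tar.gz of src_dir, write its SHA-256 alongside,
    and keep only the newest `keep` archives."""
    dest = pathlib.Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)

    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=pathlib.Path(src_dir).name)

    # Record a checksum so the integrity of a restore can be verified later.
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    (dest / (archive.name + ".sha256")).write_text(digest + "\n")

    # Simple rotation: delete archives (and checksums) beyond the newest `keep`.
    archives = sorted(dest.glob("backup-*.tar.gz"))
    for old in archives[:-keep]:
        old.unlink()
        (dest / (old.name + ".sha256")).unlink(missing_ok=True)
    return archive
```

In a real deployment this would be scheduled (cron, Task Scheduler) and the archives copied to offline or offsite storage, since a backup that lives on the same machine as the data offers little protection against ransomware.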

Lesson 3: Use licensed software—Licensed software decreases the likelihood that bugs, worms, and other bad things will infiltrate your infrastructure. Deploying necessary patching that makes systems less vulnerable to exploit is part of keeping the integrity of your licensed software intact.

Lesson 4: Lean into humans “being human” while leveraging technological advances—For example, acknowledge that humans aren’t great at remembering strong passwords, especially when they change frequently. Rather than berating people for their very human brains, focus on deploying solutions, such as password wallets and passwordless authentication, that acknowledge how hard strong passwords are to remember without sacrificing security.

Rita: Do you ever have déjà vu?

Phil: Didn’t you just ask me that?

Groundhog Day (1993) Sony Pictures©

Admittedly, we can’t promise there won’t be some share of Groundhog Day repeats. But the point is progress, not perfection. And we are making significant progress in our approach to cybersecurity and resilience. Above are just a couple of examples.

I’d love to hear more from you about examples you may have to share, too! Reach out to me on LinkedIn or Twitter, @DianaKelley14. Also, bookmark the Security blog to keep up with our expert coverage on security matters.

To learn more about how you can protect your time and empower your team, check out the cybersecurity awareness page this month.


The post How to avoid getting caught in a “Groundhog Day” loop of security issues appeared first on Microsoft Security.