Archive for October, 2019

Improve security with a Zero Trust access model

October 29th, 2019

Zero Trust is a security model that I believe can begin to turn the tide in the cybersecurity battle. Traditional perimeter-based network security has proved insufficient because it assumes that if a user is inside the corporate perimeter, they can be trusted. We’ve learned that this isn’t true. Bad actors use methods like password spray and phishing to take advantage of a workforce that must remember too many usernames and passwords. Once behind the corporate firewall, a malicious user can often move freely, gaining higher privileges and access to sensitive data. We simply can’t continue to treat the network as the control plane for trust.

The good news is that there is a solution. Zero Trust is a security strategy that upends the current broad trust model. Instead of assuming trustworthiness, it requires validation at every step of the process. This means that all touchpoints in a system—identities, devices, and services—are verified before they are considered trustworthy. It also means that user access is limited to only the data, systems, and applications required for their role. By moving from a model that assumes trust to one that requires verification, we can reduce the number and severity of security breaches.

You can begin implementing a Zero Trust access model now. Expect this to be a multi-year process, but with every action, you’ll make incremental progress that improves your security posture. Start with implementing Multi-Factor Authentication (MFA) to better protect your identities and then develop a phased plan to address identity access, device access, and network access. This is the approach that Microsoft has taken.
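
As a sketch of what “verify every access request” can mean in practice, the identity, device, and network checks might be composed like this (all names, fields, and policies below are hypothetical illustrations, not a Microsoft API):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool        # identity access: did the user complete MFA?
    device_compliant: bool    # device access: is the device managed and healthy?
    network_known: bool       # network access: is this an expected location?
    resource: str

# Hypothetical least-privilege map: each role sees only what its job requires.
ROLE_SCOPES = {
    "analyst": {"reports"},
    "admin": {"reports", "user-directory"},
}

def evaluate(request: AccessRequest, role: str) -> str:
    """Zero Trust: verify every signal on every request; never assume trust."""
    if not request.mfa_verified:
        return "deny: identity not verified"
    if not request.device_compliant:
        return "deny: device not compliant"
    if request.resource not in ROLE_SCOPES.get(role, set()):
        return "deny: outside least-privilege scope"
    # An unfamiliar network doesn't grant or block access by itself,
    # but it can trigger step-up verification.
    if not request.network_known:
        return "challenge: require additional verification"
    return "allow"
```

The point of the sketch is that no single signal grants trust: each request is re-evaluated against identity, device, and scope every time.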

Take a look at our Zero Trust access model implementation plan for more ideas on how to structure each phase. You can also look at my advice on preparing your organization for passwordless for tips on better securing your identities.

We are on this journey together. I will continue to share insights and advice in the coming months and years.

The post Improve security with a Zero Trust access model appeared first on Microsoft Security.

Categories: CISO series, Zero Trust

Gartner names Microsoft a Leader in the 2019 Cloud Access Security Broker (CASB) Magic Quadrant

October 29th, 2019

In Gartner’s third annual Magic Quadrant for Cloud Access Security Brokers (CASB), Microsoft was named a Leader based on its completeness of vision and ability to execute in the CASB market. Microsoft was also identified as strongest in execution.

Gartner led the industry when it defined the term CASB in 2012. We believe their report points out a key fact for the market: Microsoft currently has the largest customer base of all participating vendors. We believe that this, along with being ranked as a Leader, reflects our continued commitment to building the best possible solution for our customers and our goal to find innovative ways of helping them better protect their Microsoft and third-party cloud apps and platforms.

Image of the Gartner Magic Quadrant, showing Microsoft as a Leader in completeness of vision and ability to execute.

This recognition comes at a great point in our evolution journey. We’re guided by a strong vision to provide a customer-centric, best-in-class CASB solution that easily integrates with our customers’ existing environment, simplifies deployment, and optimizes the experience for admins, SecOps, and end users alike.

In customer conversations, many customers cite a similar set of key product differentiators, some of which are also referenced in the Gartner report, including:

  • The ability to monitor and control any app across cloud, on-premises, and custom apps.
  • Extensive integration across products, while also offering the ability to integrate with third-party solutions.
  • Extensive set of built-in threat-protection policies and a user and entity behavior analytics (UEBA) interface that provides a consolidated risk timeline and score for each user to help prioritize investigations across hybrid identities.
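
To illustrate the idea behind a consolidated UEBA risk score, per-event risk can be aggregated over each user’s activity timeline (the event types and weights below are invented for illustration; this is not how Microsoft’s actual scoring works):

```python
from collections import defaultdict

# Hypothetical per-event risk weights, purely for illustration.
EVENT_RISK = {
    "impossible_travel": 40,
    "mass_download": 30,
    "new_device_login": 10,
    "failed_login": 5,
}

def risk_scores(events):
    """Consolidate a per-user risk score from a timeline of (user, event_type)."""
    scores = defaultdict(int)
    for user, event_type in events:
        scores[user] += EVENT_RISK.get(event_type, 0)
    return dict(scores)

def prioritize(events):
    """Order users for investigation, highest consolidated risk first."""
    scores = risk_scores(events)
    return sorted(scores, key=scores.get, reverse=True)
```

A single consolidated score per user, rather than a stream of raw alerts, is what lets an analyst decide whom to investigate first.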

As we continue to build powerful, new capabilities for our CASB offering, we’re leveraging the unique ability to natively integrate with other best-in-class solutions from Microsoft’s Security and Identity portfolio including Azure Active Directory, Microsoft Defender Advanced Threat Protection, Microsoft Intune, and more. This allows us to deliver unique CASB capabilities, provide customers with fully integrated solutions across their portfolio, and achieve single-click deployments.

CASBs are essential to any modern cloud security strategy, providing a central point of monitoring and control. They enable IT departments to ensure secure access and protect the flow of critical data with a consistent set of controls across the increasing number of apps and cloud workloads.

With Microsoft Ignite around the corner, we look forward to more exciting announcements in November. As you continue to plan for the needs of your organization, please let us know how we can support the work you’re doing with Microsoft 365 by reaching out to your account team.

Learn more

Read the complimentary report for the analysis behind Microsoft’s position as a Leader.

For more information about our CASB solution, visit our website and stay up to date with our blog. Want to see our CASB in action? Start a free trial today.

Gartner Magic Quadrant for Cloud Access Security Brokers, Steve Riley, Craig Lawson, 22 October 2019.

Experts on demand: Your direct line to Microsoft security insight, guidance, and expertise

October 28th, 2019

Microsoft Threat Experts is the managed threat hunting service within Microsoft Defender Advanced Threat Protection (ATP) that includes two capabilities: targeted attack notifications and experts on demand.

Today, we are extremely excited to share that experts on demand is now generally available and gives customers direct access to real-life Microsoft threat analysts to help with their security investigations.

With experts on demand, Microsoft Defender ATP customers can engage directly with Microsoft security analysts to get guidance and insights needed to better understand, prevent, and respond to complex threats in their environments. This capability was shaped through partnership with multiple customers across various verticals by investigating and helping mitigate real-world attacks. From deep investigation of machines that customers had a security concern about, to threat intelligence questions related to anticipated adversaries, experts on demand extends and supports security operations teams.

The other Microsoft Threat Experts capability, targeted attack notifications, delivers alerts that are tailored to organizations and provides as much information as can be quickly delivered to bring attention to critical threats in their network, including the timeline, scope of breach, and the methods of intrusion. Together, the two capabilities make Microsoft Threat Experts a comprehensive managed threat hunting solution that provides an additional layer of expertise and optics for security operations teams.

Experts on the case

By design, the Microsoft Threat Experts service has as many use cases as there are unique organizations with unique security scenarios and requirements. One particular case showed how an alert in Microsoft Defender ATP led to informed customer response, aided by a targeted attack notification that progressed to an experts on demand inquiry, resulting in the customer fully remediating the incident and improving their security posture.

In this case, Microsoft Defender ATP endpoint protection capabilities recognized a new malicious file on a single machine within an organization. The organization’s security operations center (SOC) promptly investigated the alert and began to suspect that it indicated a new campaign from an advanced adversary specifically targeting them.

Microsoft Threat Experts, who are constantly hunting on behalf of this customer, had independently spotted and investigated the malicious behaviors associated with the attack. With knowledge about the adversaries behind the attack and their motivation, Microsoft Threat Experts sent the organization a bespoke targeted attack notification, which provided additional information and context, including the fact that the file was related to an app that was targeted in a documented cyberattack.

To create a fully informed path to mitigation, experts pointed to information about the scope of compromise, relevant indicators of compromise, and a timeline of observed events, which showed that the file executed on the affected machine and proceeded to drop additional files. One of these files attempted to connect to a command-and-control server, which could have given the attackers direct access to the organization’s network and sensitive data. Microsoft Threat Experts recommended full investigation of the compromised machine, as well as the rest of the network for related indicators of attack.

Based on the targeted attack notification, the organization opened an experts on demand investigation, which allowed the SOC to have a line of communication and consultation with Microsoft Threat Experts. Microsoft Threat Experts were able to immediately confirm the attacker attribution the SOC had suspected. Using Microsoft Defender ATP’s rich optics and capabilities, coupled with intelligence on the threat actor, experts on demand validated that there were no signs of second-stage malware or further compromise within the organization. Since, over time, Microsoft Threat Experts had developed an understanding of this organization’s security posture, they were able to share that the initial malware infection was the result of a weak security control: allowing users to exercise unrestricted local administrator privilege.

Experts on demand in the current cybersecurity climate

On a daily basis, organizations have to fend off an onslaught of increasingly sophisticated attacks that present unique security challenges: supply chain attacks, highly targeted campaigns, and hands-on-keyboard attacks. With Microsoft Threat Experts, customers can work with Microsoft to augment their security operations capabilities and increase confidence in investigating and responding to security incidents.

Now that experts on demand is generally available, Microsoft Defender ATP customers have an even richer way of tapping into Microsoft’s security experts and get access to skills, experience, and intelligence necessary to face adversaries.

Experts on demand provides insights into attacks, technical guidance on next steps, and advice on risk and protection. Experts can be engaged directly from within the Microsoft Defender Security Center, so they are part of the existing security operations experience.

We are happy to bring experts on demand within reach of all Microsoft Defender ATP customers. Start your 90-day free trial via the Microsoft Defender Security Center today.

Learn more about Microsoft Defender ATP’s managed threat hunting service here: Announcing Microsoft Threat Experts.

IoT security will set innovation free: Azure Sphere general availability scheduled for February 2020

October 28th, 2019

Today, at the IoT Solutions World Congress, we announced that Azure Sphere will be generally available in February of 2020. General availability will mark our readiness to fulfill our security promise at scale and to put the power of Microsoft’s expertise to work for our customers every day, with more than a decade of ongoing security improvements and OS updates delivered directly to each device.

Since we first introduced Azure Sphere in 2018, the IoT landscape has quickly expanded. Today, there are more connected things than people in the world: 14.2 billion in 2019, according to Gartner, and this number is expected to hit 20 billion by 2020. Although this number appears large, we expect IoT adoption to accelerate to provide connectivity to hundreds of billions of devices. This massive growth will only increase the stakes for devices that are not secured.

Recent research by Bain & Co. lists security as the leading barrier to IoT adoption. In fact, enterprise customers would buy at least 70 percent more IoT devices if a product addresses their concerns about cybersecurity. According to Bain & Co., enterprise executives, with an innate understanding of the risk that connectivity opens their brands and customers to, are willing to pay a 22 percent premium for secured devices.

Azure Sphere’s mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. We believe that for innovation to deliver durable value, it must be built on a foundation of security. Our customers need and expect reliable, consistent security that will set innovation free. To deliver on this, we’ve made several strategic investments and partnerships that make it possible to meet our customers wherever they are on their IoT journey.

Delivering silicon choice to enable heterogeneity at the edge

By partnering with silicon leaders, we can combine our expertise in security with their unique capabilities to best serve a diverse set of customer needs.

MediaTek’s MT3620, the first Azure Sphere certified chip produced, is designed to meet the needs of the more traditional MCU space, including Wi-Fi-enabled scenarios. Today, our customers across industries are adopting the MT3620 to design and produce everything from consumer appliances to retail and manufacturing equipment—these chips are also being used to power a series of guardian modules to securely connect and protect mission-critical equipment.

In June, we announced our collaboration with NXP to deliver a new Azure Sphere certified chip. This new chip will be an extension of their popular i.MX 8 high-performance applications processor series and be optimized for performance and power. This will bring greater compute capabilities to our line-up to support advanced workloads, including artificial intelligence (AI), graphics, and richer UI experiences.

Earlier this month, we announced our collaboration with Qualcomm to deliver the first cellular-enabled Azure Sphere chip. With ultra-low-power capabilities, this new chip will light up a broad new set of scenarios and give our customers the freedom to securely connect anytime, anywhere.

Streamlining prototyping and production with a diverse hardware ecosystem

Manufacturers are looking for ways to reduce cost, complexity, and time to market when designing new devices and equipment. Azure Sphere development kits from our partners at Seeed Studios and Avnet are designed to streamline the prototyping and planning when building Azure Sphere devices. When you’re ready to shift gears into production mode, there are a variety of modules by partners including AI-Link, USI, and Avnet to help you reduce costs and accelerate production so you can get to market faster.

Adding secured connectivity to existing mission-critical equipment

Many enterprises are looking to unlock new value from existing equipment through connectivity. Guardian modules are designed to help our customers quickly bring their existing investments online without taking on risk and jeopardizing mission-critical equipment. Guardian modules plug into existing physical interfaces on equipment, can be easily deployed with common technical skillsets, and require no device redesign. The deployment is fast, does not require equipment to be replaced before its end of life, and quickly pays for itself. The first guardian modules are available today from Avnet and AI-Link, with more expected soon.

Empowering developers with the right tools

Developers need tools that are as modern as the experiences they aspire to deliver. In September of 2018, we released our SDK preview for Visual Studio. Since then, we’ve continued to iterate rapidly, making it quicker and simpler to develop, deploy, and debug Azure Sphere apps. We also built out a set of samples and solutions on GitHub, providing easy building blocks for developers to get started. And, as we shared recently, we’ll soon have an SDK for Linux and support for Visual Studio Code. By empowering their developers, we help manufacturers bring innovation to market faster.

Creating a secure environment for running an RTOS or bare-metal code

As manufacturers transform MCU-powered devices by adding connectivity, they want to leverage existing code running on an RTOS or bare-metal. Earlier this year, we provided a secured environment for this code by enabling the M4 core processors embedded in the MediaTek MT3620 chip. Code running on these real-time cores is programmed and debugged using Visual Studio. Using these tools, such code can easily be enhanced to send and receive data via the protection of a partner app running on the Azure Sphere OS, and it can be updated seamlessly in the field to add features or to address issues. Now, manufacturers can confidently secure and service their connected devices, while leveraging existing code for real-time processing operations.

Delivering customer success

Deep partnerships with early customers have helped us understand how IoT can be implemented to propel business, and the critical role security plays in protecting their bottom line, brand, and end users. Today, we’re working with hundreds of customers who are planning Azure Sphere deployments. Here are a few highlights from across retail, healthcare, and energy:

  • Starbucks—In-store equipment is the backbone of not just commerce, but their entire customer experience. To reduce disruptions and maintain a quality experience, Starbucks is partnering with Microsoft to deploy Azure Sphere across its existing mission-critical equipment in stores globally using guardian modules.
  • Gojo—Gojo Industries, the inventor of PURELL Hand Sanitizer, has been driving innovation to improve hygiene compliance in health organizations. Deploying motion detectors and connected PURELL dispensers in healthcare facilities made it possible to quantify hand-cleaning behavior in a way that enabled better practices. Now, PURELL SMARTLINK Technology is being upgraded with Azure Sphere to deploy secure and connected dispensers in hospitals.
  • Leoni—Leoni develops cable systems that are central components within critical application fields that manage energy and data for the automotive sector and other industries. To make cable systems safer, more reliable, and smarter, Leoni uses Azure Sphere with integrated sensors to actively monitor cable conditions, creating intelligent and connected cable systems.

Looking forward

We want to empower every organization on the planet to connect and create secure and trustworthy IoT devices. While Azure Sphere leverages deep and extensive Microsoft heritage that spans hardware, software, cloud, and security, IoT is our opportunity to prove we can deliver in a new space. Our work, our collaborations, and our partnerships are evidence of the commitment we’ve made to our customers—to give them the tools and confidence to transform the world with new experiences. As we close in on the milestone achievement of Azure Sphere general availability, we are already focused on how to give our customers greater opportunities to securely shape the future.

Time for day 2 of briefings at BlueHat Seattle!

October 25th, 2019

We hope you enjoyed the first day of our BlueHat briefings and the Bytes of BlueHat reception in our glamping tent (complete with toasted marshmallows). Yesterday, we learned a lot about how Xbox One hardware security has advanced the state of hardware security elsewhere, and we heard some surprising correlations between vuln severity, age, and time to …

The post Time for day 2 of briefings at BlueHat Seattle! appeared first on Microsoft Security Response Center.

Security baseline (DRAFT) for Chromium-based Microsoft Edge, version 78

October 24th, 2019

Microsoft is pleased to announce the draft release of the recommended security configuration baseline settings for the next version of Microsoft Edge based on Chromium, version 78. Please evaluate this proposed baseline and send us your feedback through the Baselines Discussion site.

Like all our baseline packages, the downloadable draft baseline package (attached to this blog post) includes importable GPOs, a script to apply the GPOs to local policy, a script to import the GPOs into Active Directory Group Policy, and all the recommended settings in spreadsheet form, as Policy Analyzer rules, and as GP Reports.

Microsoft Edge is being rebuilt with the open-source Chromium project, and many of its security configuration options are inherited from that project. These Group Policy settings are entirely distinct from those for the original version of Microsoft Edge built into Windows 10: they are in different folders in the Group Policy editor and they reference different registry keys. The Group Policy settings that control the new version of Microsoft Edge are located under “Administrative Templates\Microsoft Edge,” while those that control the current version of Microsoft Edge remain located under “Administrative Templates\Windows Components\Microsoft Edge.” You can download the latest policy templates for the new version of Microsoft Edge from the Microsoft Edge Enterprise landing page. To learn more about managing the new version of Microsoft Edge, see Configure Microsoft Edge for Windows.

As with our current Windows and Office security baselines, our recommendations for Microsoft Edge configuration follow a streamlined and efficient approach to baseline definition when compared with the baselines we published before Windows 10. The foundation of that approach is essentially this:

  • The baselines are designed for well-managed, security-conscious organizations in which standard end users do not have administrative rights.
  • A baseline enforces a setting only if it mitigates a contemporary security threat and does not cause operational issues worse than the risk it mitigates.
  • A baseline enforces a default only if it is otherwise likely to be set to an insecure state by an authorized user:
    • If a non-administrator can set an insecure state, enforce the default.
    • If setting an insecure state requires administrative rights, enforce the default only if it is likely that a misinformed administrator will otherwise choose poorly.
(For further explanation, see the “Why aren’t we enforcing more defaults?” section in this blog post.)
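
The enforcement rules above amount to a small decision procedure, which can be sketched as follows (a restatement of the stated policy, not Microsoft tooling):

```python
def should_enforce(is_default: bool,
                   mitigates_contemporary_threat: bool,
                   causes_worse_operational_issues: bool,
                   non_admin_can_set_insecure: bool,
                   misinformed_admin_likely: bool) -> bool:
    """Decide whether a baseline should enforce a setting, per the rules above."""
    if not is_default:
        # Enforce a non-default setting only if it mitigates a contemporary
        # threat without causing operational issues worse than that risk.
        return mitigates_contemporary_threat and not causes_worse_operational_issues
    # For defaults: enforce only if an authorized user is otherwise likely
    # to set an insecure state.
    if non_admin_can_set_insecure:
        return True
    return misinformed_admin_likely
```

Applying this filter to hundreds of available policies is what shrinks the baseline down to the handful of settings actually worth enforcing.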

Version 78 of the Chromium-based version of Microsoft Edge has 205 enforceable Computer Configuration policy settings and another 190 User Configuration policy settings. Following our streamlined approach, our recommended baseline configures a grand total of twelve Group Policy settings. You can find full documentation in the download package’s Documentation subdirectory.


Welcome to the second stage of BlueHat!

October 24th, 2019

We’ve finished two incredible days of security trainings at the Living Computer Museum in Seattle. Now it’s time for the second part of BlueHat: the briefings at ShowBox SoDo. We’ve got a big day planned, so head on down. Please join us for breakfast (we have doughnuts! and bacon! and cereal!) when the doors open …

Traditional perimeter-based network defense is obsolete—transform to a Zero Trust model

October 23rd, 2019

Digital transformation has made the traditional perimeter-based network defense obsolete. Your employees and partners expect to be able to collaborate and access organizational resources from anywhere, on virtually any device, without impacting their productivity. Customers expect personalized experiences that demonstrate you understand them and can adapt quickly to their evolving interests. Companies need to be able to move with agility, adapting quickly to changing market conditions and taking advantage of new opportunities. Companies embracing this change are thriving, leaving those who don’t in their wake.

As organizations drive their digital transformation efforts, it quickly becomes clear that the approach to securing the enterprise needs to be adapted to the new reality. The security perimeter is no longer just around the on-premises network. It now extends to SaaS applications used for business critical workloads, hotel and coffee shop networks your employees are using to access corporate resources while traveling, unmanaged devices your partners and customers are using to collaborate and interact with, and IoT devices installed throughout your corporate network and inside customer locations. The traditional perimeter-based security model is no longer enough.

The traditional firewall and VPN security model assumed you could establish a strong perimeter and then trust that activities within that perimeter were “safe.” The problem is that today’s digital estates typically consist of services and endpoints managed by public cloud providers, devices owned by employees, partners, and customers, and web-enabled smart devices that the traditional perimeter-based model was never built to protect. We’ve learned from both our own experience and the customers we’ve supported in their journeys that this model is too cumbersome, too expensive, and too vulnerable to remain viable.

We can’t assume there are “threat free” environments. As we digitally transform our companies, we need to transform our security model to one that assumes breach and, as a result, explicitly verifies activities, automatically enforces security controls using all available signals, and employs the principle of least-privilege access. This model is commonly referred to as “Zero Trust.”

Today, we’re publishing a new white paper to help you understand the core principles of Zero Trust along with a maturity model, which breaks down requirements across the six foundational elements, to help guide your digital transformation journey.

Download the Microsoft Zero Trust Maturity Model today!

Learn more about Zero Trust and Microsoft Security.

Also, bookmark the Security blog to keep up with our expert coverage on security matters. And follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

To learn more about how you can protect your time and empower your team, check out the cybersecurity awareness page this month.


Microsoft Identity Bounty Improvements

October 23rd, 2019

Microsoft and partners design new device security requirements to protect against targeted firmware attacks

October 21st, 2019

Recent developments in security research and real-world attacks demonstrate that as more protections are proactively built into the OS and in connected services, attackers are looking for other avenues of exploitation with firmware emerging as a top target. In the last three years alone, NIST’s National Vulnerability Database has shown nearly a five-fold increase in the number of firmware vulnerabilities discovered.

To combat threats specifically targeted at the firmware and operating system levels, we’re announcing a new initiative we’ve been working on with partners to design what we call Secured-core PCs. These devices, created in partnership with our PC manufacturing and silicon partners, meet a specific set of device requirements that apply the security best practices of isolation and minimal trust to the firmware layer, or the device core, that underpins the Windows operating system. These devices are designed specifically for industries like financial services, government, and healthcare, and for workers who handle highly sensitive IP, customer, or personal data, including PII, as these workers are higher-value targets for nation-state attackers.


In late 2018, security researchers discovered that the hacking group Strontium had been using firmware vulnerabilities to target systems in the wild with malware delivered through a firmware attack. As a result, the malicious code was hard to detect and difficult to remove; it could persist even across common cleanup procedures like an OS reinstall or a hard drive replacement.

Why attackers and researchers are devoting more effort toward firmware

Firmware is used to initialize the hardware and other software on the device and has a higher level of access and privilege than the hypervisor and operating system kernel, making it an attractive target for attackers. Attacks targeting firmware can undermine mechanisms like secure boot and other security functionality implemented by the hypervisor or operating system, making it more difficult to identify when a system or user has been compromised. Compounding the problem, endpoint protection and detection solutions have limited visibility into the firmware layer, which runs beneath the operating system they depend on, making evasion easier for attackers going after firmware.

What makes a Secured-core PC?

Secured-core PCs combine identity, virtualization, operating system, hardware, and firmware protection to add another layer of security underneath the operating system. Unlike software-only security solutions, Secured-core PCs are designed to prevent these kinds of attacks rather than simply detect them. Our investments in Windows Defender System Guard and Secured-core PC devices are designed to provide the rich ecosystem of Windows 10 devices with uniform assurances around the integrity of the launched operating system and verifiable measurements of the operating system launch, helping mitigate threats aimed at the firmware layer. These requirements enable customers to boot securely, protect the device from firmware vulnerabilities, shield the operating system from attacks, prevent unauthorized access to devices and data, and ensure that identity and domain credentials are protected.

The built-in measurements can be used by SecOps and IT admins to remotely monitor the health of their systems using System Guard runtime attestation and implement a zero-trust network rooted in hardware. This advanced firmware security works in concert with other Windows features to ensure that Secured-core PCs provide comprehensive protections against modern threats.
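
To illustrate how attestation measurements might feed a conditional access decision, here is a minimal sketch (the report fields below are hypothetical; the real System Guard attestation report format differs):

```python
# Sketch of consuming a runtime attestation report to gate network access.
# Field names are illustrative assumptions, not the actual report schema.
def device_is_healthy(report: dict) -> bool:
    required = {
        "secure_boot_enabled": True,
        "secure_launch_succeeded": True,  # DRTM-based System Guard Secure Launch
        "vbs_enabled": True,              # virtualization-based security active
        "code_integrity_enforced": True,  # e.g., HVCI policy in effect
    }
    return all(report.get(key) == value for key, value in required.items())

def conditional_access(report: dict) -> str:
    """Hardware-rooted zero trust: grant access only to attested-healthy devices."""
    return "grant" if device_is_healthy(report) else "block"
```

The design point is that the access decision is rooted in hardware-backed measurements rather than the device’s self-reported state.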


Removing trust from the firmware

Starting with Windows 8, we introduced Secure Boot, which relies on Unified Extensible Firmware Interface (UEFI) firmware to allow only properly signed bootloaders like the Windows boot manager to execute, mitigating the risk posed by malicious bootloaders and rootkits. This was a significant step forward in protecting against these specific types of attacks. However, since the firmware itself is trusted to verify the bootloaders, Secure Boot on its own does not protect from threats that exploit vulnerabilities in that trusted firmware. That’s why we worked with our partners to ensure these new Secured-core capabilities ship in devices right out of the box.

Using new hardware capabilities from AMD, Intel, and Qualcomm, Windows 10 now implements System Guard Secure Launch as a key Secured-core PC device requirement to protect the boot process from firmware attacks. System Guard uses the Dynamic Root of Trust for Measurement (DRTM) capabilities built into the latest silicon from these partners: the system lets firmware start the hardware, then shortly afterwards re-initializes itself into a trusted state, using the OS boot loader and processor capabilities to send the system down a well-known and verifiable code path. This mechanism limits the trust placed in firmware and provides powerful mitigation against cutting-edge, targeted threats against firmware. It also helps protect the integrity of the virtualization-based security (VBS) functionality implemented by the hypervisor from firmware compromise. VBS relies on the hypervisor to isolate sensitive functionality from the rest of the OS, which protects that functionality from malware that may have infected the normal OS, even with elevated privileges. Protecting VBS is critical because it is a building block for important OS security capabilities, such as Windows Defender Credential Guard, which prevents malware from misusing OS credentials, and Hypervisor-protected Code Integrity (HVCI), which enforces a strict code integrity policy ensuring that all kernel code is signed and verified.

Being able to measure that the device booted securely is another critical piece of this additional layer of protection from firmware compromise, giving admins added confidence that their endpoints are safe. That’s why we made the Trusted Platform Module 2.0 (TPM 2.0) one of the device requirements for Secured-core PCs. By using the TPM 2.0 to measure the components used during the secure launch process, we help customers enable zero trust networks using System Guard runtime attestation. Conditional access policies can then be implemented based on the reports provided by the System Guard attestation client running in the isolated VBS environment.
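The measurement chain described above can be illustrated in miniature. This is a generic sketch of the standard TPM PCR-extend operation, not Microsoft's implementation, and the component names are invented:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Standard TPM-style extend: PCR' = SHA-256(old PCR value || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start zeroed; each boot stage extends the register in order.
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)

# The final value depends on every measurement and its order, so tampering
# with any stage yields a different value for a remote attester to flag.
print(pcr.hex())
```

Because the extend operation is one-way and order-sensitive, an attestation service that knows the expected final value can detect any modified or reordered boot component.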

In addition to the Secure Launch functionality, Windows implements safeguards that operate while the OS is running to monitor and restrict potentially dangerous firmware functionality accessible through System Management Mode (SMM).

Beyond the hardware protection of firmware featured in Secured-core PCs, Microsoft recommends a defense-in-depth approach including security review of code, automatic updates, and attack surface reduction. Microsoft has provided an open-source firmware project called Project-Mu that PC manufacturers can use as a starting point for secure firmware.

How to get a Secured-core PC

Our ecosystem partnerships have enabled us to add this additional layer of security, right out of the box, in devices designed for highly targeted industries and for end users who handle mission-critical data in some of the most data-sensitive sectors, such as government, financial services, and healthcare. These innovations build on the value of Windows 10 Pro, which comes with built-in protections like firewall, Secure Boot, and file-level information-loss protection as standard on every device.

More information on devices that are verified as Secured-core PCs, including those from Dell, Dynabook, HP, Lenovo, Panasonic, and Surface, can be found on our web page.

David Weston (@dwizzzleMSFT)
Partner Director, OS Security

The post Microsoft and partners design new device security requirements to protect against targeted firmware attacks appeared first on Microsoft Security.

Introducing the ElectionGuard Bounty program

October 18th, 2019 No comments

Best practices for adding layered security to Azure security with Check Point’s CloudGuard IaaS

October 18th, 2019 No comments

The cloud is changing the way we build and deploy applications. Most enterprises will benefit from the cloud’s many advantages through hybrid, multi, or standalone cloud architectures. A recent report showed that 42 percent of companies have a multi-cloud deployment strategy.

The advantages of the cloud include flexibility, agility, scalability, the capability to run applications and workloads at high speed, high levels of reliability and availability, and the conversion of large upfront infrastructure investments into smaller monthly bills (the CAPEX-to-OPEX shift).

However, cloud security is often an afterthought in this process. Some worry that it may slow the momentum of organizations that are migrating workloads into the cloud. Traditional IT security teams may be hesitant to implement new cloud security processes, because to them the cloud may be daunting or confusing, or just new and unknown.

Although the concepts may seem similar, cloud security differs from traditional enterprise security. Additionally, there may be industry-specific compliance and security standards to meet.

Public cloud vendors have defined the Shared Responsibility Model where the vendor is responsible for the security “of” their cloud, while their customers are responsible for the security “in” the cloud.

Image showing the Responsibility Zones for Microsoft Azure.

The Shared Responsibility Model (Source: Microsoft Azure).

Cloud deployments include multi-layered components, and the security requirements are often different per layer and per component. Often, the ownership of security is blurred when it comes to the application, infrastructure, and sometimes even the cloud platform—especially in multi-cloud deployments.

Cloud vendors, including Microsoft, offer fundamental network-layer, data-layer, and other security tools for use by their customers. Security analysts, managed security service providers, and advanced cloud customers recommend layering on advanced threat prevention and network-layer security solutions to protect against modern-day attacks. These specialized tools evolve at the pace of industry threats to secure the organization’s cloud perimeters and connection points.

Check Point is a leader in cloud security and the trusted security advisor to customers migrating workloads into the cloud.

Check Point’s CloudGuard IaaS helps protect assets in the cloud with dynamic scalability, intelligent provisioning, and consistent control across public, private, and hybrid cloud deployments. CloudGuard IaaS supports Azure and Azure Stack. Customers using CloudGuard IaaS can securely migrate sensitive workloads, applications, and data into Azure and thereby improve their security.

But how well does CloudGuard IaaS conform to Microsoft’s best practices?

Earlier this year, Dr. Reshmi Yandapalli (DAOM), Principal Program Manager for Azure Networking, published a blog post titled Best practices to consider before deploying a network virtual appliance, which outlined considerations for building or choosing Azure security and networking services. Dr. Yandapalli defined four best practices for networking and security ISVs, like Check Point, to improve the cloud experience for Azure customers.

I discussed Dr. Yandapalli’s four best practices with Amir Kaushansky, Check Point’s Head of Cloud Network Security Product Management. Amir’s responsibilities include the CloudGuard IaaS roadmap and coordination with the R&D/development team.

1. Azure accelerated networking support

Dr. Yandapalli’s first best practice is that an ISV’s Azure security solution should be available on one or more Azure virtual machine (VM) types that support Azure’s accelerated networking capability, which improves networking performance. Dr. Yandapalli recommends that you “consider a virtual appliance that is available on one of the supported VM types with Azure’s accelerated networking capability.”

The diagram below shows communication between VMs, with and without Azure’s accelerated networking:

Image showing accelerated networking to improve performance of Azure security.

Accelerated networking to improve performance of Azure security (Source: Microsoft Azure).

Kaushansky says, “Check Point was the first certified compliant vendor with Azure accelerated networking. Accelerated networking can improve performance and reduce jitter, latency, and CPU utilization.”

According to Kaushansky—and depending on workload and VM size—Check Point and customers have observed at least a 2-3 times increase in throughput due to Azure accelerated networking.

2. Multi-Network Interface Controller (NIC) support

Dr. Yandapalli’s blog’s next best practice is to use VMs with multiple NICs to improve network traffic management via traffic isolation. For example, you can use one NIC for data plane traffic and one NIC for management plane traffic. Dr. Yandapalli states, “With multiple NICs you can better manage your network traffic by isolating various types of traffic across the different NICs.”

The diagram below shows the Azure Dv2-series with maximum NICs per VM size:

Image showing Azure Dv2-series VMs with number of NICs per size.

Azure Dv2-series VMs with number of NICs per size.

CloudGuard IaaS supports multi-NIC VMs, with no upper limit on the number of NICs. Check Point recommends using VMs with at least two NICs; VMs with a single NIC are supported but not recommended.

Depending on the customer’s deployment architecture, the customer may use one NIC for internal East-West traffic and the second for outbound/inbound North-South traffic.

3. High Availability (HA) port with Azure load balancer

Dr. Yandapalli’s third best practice is that Azure security and networking services should be reliable and highly available.

Dr. Yandapalli suggests the use of a High Availability (HA) port load balancing rule. “You would want your NVA to be reliable and highly available, to achieve these goals simply by adding network virtual appliance instances to the backend pool of your internal load balancer and configuring a HA ports load-balancer rule,” says Dr. Yandapalli.

The diagram below shows an example usage of a HA port:

Flowchart example of a HA port with Azure load balancer.

Kaushansky says, “CloudGuard IaaS supports this functionality with a standard load balancer via Azure Resource Manager deployment templates, which customers can use to deploy CloudGuard IaaS easily in HA mode.”

4. Support for Virtual Machine Scale Sets (VMSS)

Dr. Yandapalli’s last best practice is to use Azure VMSS to provide HA. Scale sets also provide the management and automation layers for Azure security, networking, and other applications. This cloud-native functionality provides the right amount of IaaS resources at any given time, depending on application needs. Dr. Yandapalli points out that “scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update a large number of VMs.”

In a similar way to the previous best practice, customers can use an Azure Resource Manager deployment template to deploy CloudGuard in VMSS mode. Check Point recommends the use of VMSS for traffic inspection of North-South (inbound/outbound) and East-West (lateral movement) traffic.

Learn more and get a free trial

As you can see from the above, CloudGuard IaaS is compliant with all four of Microsoft’s common best practices for how to build and deploy Azure network security solutions.

Visit Check Point to understand how CloudGuard IaaS can help protect your data and infrastructure in Microsoft Azure and hybrid clouds and improve Azure network security. If you’re evaluating Azure security solutions, you can get a free 30-day evaluation license of CloudGuard IaaS on Azure Marketplace!

(Based on a blog published on June 4, 2019 in the Check Point Cloud Security blog.)

The post Best practices for adding layered security to Azure security with Check Point’s CloudGuard IaaS appeared first on Microsoft Security.

Announcing the Security Researcher Quarterly Leaderboard

October 17th, 2019 No comments

Right before Black Hat USA 2019, we announced our new researcher recognition program, and at Black Hat we announced the top researchers from the previous twelve months. Since it’s easier to track your progress with regular updates than with just an annual report, we are excited to announce the MSRC Q3 2019 Security Researcher Leaderboard, …

Announcing the Security Researcher Quarterly Leaderboard Read More »

The post Announcing the Security Researcher Quarterly Leaderboard appeared first on Microsoft Security Response Center.

An intern’s experience with Rust

October 16th, 2019 No comments

Over the course of my internship at the Microsoft Security Response Center (MSRC), I worked on the safe systems programming languages (SSPL) team to promote safer languages for systems programming where runtime overhead is important, as outlined in this blog. My job was to port a security critical network processing agent into Rust to eliminate …

An intern’s experience with Rust Read More »

The post An intern’s experience with Rust appeared first on Microsoft Security Response Center.

Top 6 email security best practices to protect against phishing attacks and business email compromise

October 16th, 2019 No comments

Most cyberattacks start over email—a user is tricked into opening a malicious attachment, or into clicking a malicious link and divulging credentials, or into responding with confidential data. Attackers dupe victims by using carefully crafted emails to build a false sense of trust and/or urgency. And they use a variety of techniques to do this—spoofing trusted domains or brands, impersonating known users, using previously compromised contacts to launch campaigns and/or using compelling but malicious content in the email. In the context of an organization or business, every user is a target and, if compromised, a conduit for a potential breach that could prove very costly.

Whether it’s a sophisticated nation-state attack, a targeted phishing scheme, business email compromise, or a ransomware attack, such attacks are on the rise at an alarming rate and are also increasing in sophistication. It is therefore imperative that every organization’s security strategy include a robust email security solution.

So, what should IT and security teams be looking for in a solution to protect all their users, from frontline workers to the C-suite? Here are 6 tips to ensure your organization has a strong email security posture:

You need a rich, adaptive protection solution.

As security solutions evolve, bad actors quickly adapt their methodologies to go undetected. Polymorphic attacks designed to evade common protection solutions are becoming increasingly common. Organizations therefore need solutions that focus on zero-day and targeted attacks in addition to known vectors. Checks that are purely standards-based, or based only on known signatures and reputation, will not cut it.

Solutions that include rich detonation capabilities for files and URLs are necessary to catch payload-based attacks. Advanced machine learning models that look at the content and headers of emails as well as sending patterns and communication graphs are important to thwart a wide range of attack vectors including payload-less vectors such as business email compromise. Machine learning capabilities are greatly enhanced when the signal source feeding it is broad and rich; so, solutions that boast of a massive security signal base should be preferred. This also allows the solution to learn and adapt to changing attack strategies quickly which is especially important for a rapidly changing threat landscape.
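As a deliberately tiny illustration of the idea (real services learn thousands of features from massive signal bases; the signals, weights, and phrases below are invented), a scoring function might combine header and content signals like this:

```python
import re

# Toy phishing-score sketch: each heuristic contributes evidence, the way a
# real classifier combines many learned features. All values are illustrative.
SUSPICIOUS_PHRASES = ("verify your account", "urgent wire transfer", "password expired")

def score_email(from_domain: str, reply_to_domain: str, body: str) -> float:
    score = 0.0
    if from_domain != reply_to_domain:                      # spoofing / reply-to mismatch
        score += 0.4
    if any(p in body.lower() for p in SUSPICIOUS_PHRASES):  # urgency lures
        score += 0.4
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):  # raw-IP links
        score += 0.3
    return min(score, 1.0)

print(score_email("contoso.com", "mail.ru",
                  "Urgent wire transfer needed: http://203.0.113.7/pay"))  # → 1.0
```

The point of the sketch is the combination: no single signal is conclusive, but several weak signals together push a message over a quarantine threshold.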

Complexity breeds challenges. An easy-to-configure-and-maintain system reduces the chances of a breach.

Complicated email flows can introduce moving parts that are difficult to sustain. As an example, complex mail-routing flows to enable protections for internal email configurations can cause compliance and security challenges. Products that require unnecessary configuration bypasses to work can also cause security gaps. For example, configurations put in place to guarantee delivery of certain types of emails (e.g., phishing simulation emails) are often poorly crafted and exploited by attackers.

Solutions that protect both external and internal emails, and that offer value without complicated configurations or email flows, are a great benefit to organizations. In addition, look for solutions that offer easy ways to bridge the gap between security teams and messaging teams. Messaging teams, motivated by the desire to guarantee mail delivery, might create overly permissive bypass rules that impact security. The sooner these issues are caught, the better for overall security. Solutions that surface insights to security teams when this happens can greatly reduce the time taken to rectify such flaws, thereby reducing the chances of a costly breach.

A breach isn’t an “If”, it’s a “When.” Make sure you have post-delivery detection and remediation.

No solution is 100% effective on the prevention vector because attackers are always changing their techniques. Be skeptical of any claims that suggest otherwise. Taking an ‘assume breach’ mentality will ensure that the focus is not only on prevention, but on efficient detection and response as well. When an attack does go through the defenses it is important for security teams to quickly detect the breach, comprehensively identify any potential impact and effectively remediate the threat.

Solutions that offer playbooks to automatically investigate alerts, analyze the threat, assess the impact, and take (or recommend) actions for remediation are critical for effective and efficient response. In addition, security teams need a rich investigation and hunting experience to easily search the email corpus for specific indicators of compromise or other entities. Ensure that the solution allows security teams to hunt for threats and remove them easily.

Another critical component of effective response is ensuring that security teams have a strong signal source into what end users are seeing come through to their inboxes. Having an effortless way for end users to report issues that automatically triggers security playbooks is key.
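A hunting pass over a stored email corpus can be sketched as follows; the corpus layout and indicator feed here are hypothetical:

```python
# Minimal hunting sketch: scan stored messages for indicators of compromise
# (IOCs) such as known-bad URLs or attachment hashes. Data shapes are invented.
iocs = {"urls": {"http://evil.example/login"}, "sha256": {"ab12..."}}

corpus = [
    {"id": 1, "urls": ["http://evil.example/login"], "attachment_sha256": []},
    {"id": 2, "urls": ["https://contoso.com"], "attachment_sha256": ["ab12..."]},
    {"id": 3, "urls": [], "attachment_sha256": []},
]

def hunt(corpus, iocs):
    hits = []
    for msg in corpus:
        if iocs["urls"].intersection(msg["urls"]) or \
           iocs["sha256"].intersection(msg["attachment_sha256"]):
            hits.append(msg["id"])
    return hits

print(hunt(corpus, iocs))  # messages to remediate → [1, 2]
```

In practice the same query would run against a searchable mail store, and the resulting message IDs would feed a removal or quarantine playbook.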

Your users are the target. You need a continuous model for improving user awareness and readiness.

An informed and aware workforce can dramatically reduce the number of occurrences of compromise from email-based attacks. Any protection strategy is incomplete without a focus on improving the level of awareness of end users.

A core component of this strategy is raising user awareness through phishing simulations, training users on what to look out for in suspicious emails so they don’t fall prey to actual attacks. Another often overlooked but equally critical component is ensuring that the everyday applications end users rely on help raise their awareness. Capabilities that give users relevant cues, effortless ways to verify the validity of URLs, and easy in-application reporting of suspicious emails, all without compromising productivity, are very important.

Solutions that offer phishing simulation capabilities are key. Look for deep email-client-application integrations that allow users to view the original URL behind any link, regardless of any protection being applied. This helps users make informed decisions. In addition, the ability to offer hints or tips that raise user awareness on a given email or site is important. And effortless ways to report suspicious emails that in turn trigger automated response workflows are critical as well.

Attackers meet users where they are. So must your security.

While email is the dominant attack vector, attackers and phishing attacks will go wherever users collaborate, communicate, and keep sensitive information. As forms of sharing, collaboration, and communication other than email have become popular, attacks targeting these vectors are increasing as well. For this reason, it is important that an organization’s anti-phishing strategy not focus on email alone.

Ensure that the solution offers targeted protection capabilities for collaboration services that your organization uses. Capabilities like detonation that scan suspicious documents and links when shared are critical to protect users from targeted attacks. The ability in client applications to verify links at time-of-click offers additional protection regardless of how the content is shared with them. Look for solutions that support this capability.
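Time-of-click verification can be sketched in a few lines; the blocklist below stands in for a continuously refreshed threat-intelligence feed:

```python
# Time-of-click sketch: instead of checking a URL only at delivery, re-check
# its reputation at the moment the user clicks. This catches links that are
# weaponized after the message arrives. The blocklist is a stand-in for a feed.
blocklist = set()

def on_click(url: str) -> str:
    return "BLOCK" if url in blocklist else "ALLOW"

url = "https://example.com/invoice"
print(on_click(url))   # safe at delivery time → ALLOW
blocklist.add(url)     # later flagged by threat intelligence
print(on_click(url))   # the same link is now stopped at click time → BLOCK
```

This is why click-time protection matters regardless of how content is shared: the verdict is rendered when the user acts, not when the content was scanned.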

Attackers don’t think in silos. Neither can the defenses.

Attackers target the weakest link in an organization’s defenses. They look for an initial compromise to get in, and once inside will look for a variety of ways to increase the scope and impact of the breach. They typically achieve this by trying to compromise other users, moving laterally within the organization, elevating privileges when possible, and finally reaching a system or data repository of critical value. As they proliferate through the organization, they touch different endpoints, identities, mailboxes, and services.

Reducing the impact of such attacks requires quick detection and response. And that can only be achieved when the defenses across these systems do not act in silos. This is why it is critical to have an integrated view into security solutions. Look for an email security solution that integrates well across other security solutions such as endpoint protection, CASB, identity protection, etc. Look for richness in integration that goes beyond signal integration, but also in terms of detection and response flows.
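A minimal sketch of cross-silo correlation, using invented alert shapes, groups alerts by user so that signals from separate products surface as a single incident:

```python
from collections import defaultdict

# Correlation sketch: alerts from separate products are grouped by user so one
# incident shows the whole attack path instead of three disconnected alerts in
# three consoles. All alert shapes here are invented for illustration.
alerts = [
    {"source": "email",    "user": "alice", "event": "phishing link clicked"},
    {"source": "endpoint", "user": "alice", "event": "suspicious process"},
    {"source": "identity", "user": "alice", "event": "impossible travel sign-in"},
    {"source": "endpoint", "user": "bob",   "event": "malware quarantined"},
]

def correlate(alerts):
    incidents = defaultdict(list)
    for a in alerts:
        incidents[a["user"]].append((a["source"], a["event"]))
    # Multiple products firing on the same user is a stronger signal than any one alone.
    return {user: evts for user, evts in incidents.items()
            if len({src for src, _ in evts}) > 1}

print(correlate(alerts))  # only alice spans multiple products
```

The design point is the join key: correlating on shared entities (user, device, mailbox) is what lets defenses stop acting in silos.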


The post Top 6 email security best practices to protect against phishing attacks and business email compromise appeared first on Microsoft Security.

Guarding against supply chain attacks—Part 1: The big picture

October 16th, 2019 No comments

Every day, somewhere in the world, governments, businesses, educational organizations, and individuals are hacked. Precious data is stolen or held for ransom, and the wheels of “business-as-usual” grind to a halt. These criminal acts are expected to cost more than $2 trillion in 2019, a four-fold increase in just four years. The seeds that bloom into these business disasters are often planted in both hardware and software systems created in various steps of your supply chain, propagated by bad actors and out-of-date business practices.

These compromises in the safety and integrity of your supply chain can threaten the success of your business, no matter the size of your operation. But typically, the longer your supply chain, the higher the risk for attack, because of all the supply sources in play.

In this blog series, “Guarding against supply chain attacks,” we examine various components of the supply chain, the vulnerabilities they present, and how to protect yourself from them.

Defining the problem

Supply chain attacks are not new. The National Institute of Standards and Technology (NIST) has been focused on driving awareness in this space since 2008. And this problem is not going away. In 2017 and 2018, according to Symantec, supply chain attacks rose 78 percent. Mitigating this type of third-party risk has become a major board issue as executives now understand that partner and supplier relationships pose fundamental challenges to businesses of all sizes and verticals.

Moreover, for compliance reasons, third-party risk also continues to be a focus. In New York State, Nebraska, and elsewhere in the U.S., third-party risk has emerged as a significant compliance issue.

Throughout the supply chain, hackers look for weaknesses that they can exploit. Hardware, software, people, processes, vendors—all of it is fair game. At its core, attackers are looking to break trust mechanisms, including the trust that businesses naturally have for their suppliers. Hackers hide their bad intentions behind the shield of trust a supplier has built with their customers over time and look for the weakest, most vulnerable place to gain entry, so they can do their worst.

According to NIST, cyber supply chain risks include:

  • Insertion of counterfeits.
  • Unauthorized production of components.
  • Tampering with production parts and processes.
  • Theft of components.
  • Insertion of malicious hardware and software.
  • Poor manufacturing and development practices that compromise quality.

Cyber Supply Chain Risk Management (C-SCRM) identifies what the risks are and where they come from, assesses past damage and ongoing and future risk, and mitigates these risks across the entire lifetime of every system.

This process examines:

  • Product design and development.
  • How parts of the supply chain are distributed and deployed.
  • Where and how they are acquired.
  • How they are maintained.
  • How, at end-of-life, they are destroyed.

The NIST approach to C-SCRM considers how foundational practices and risk are managed across the whole organization.
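As a purely illustrative sketch of rolling lifecycle-stage risk into a single supplier score (the stages, weights, and scores below are invented, not prescribed by NIST):

```python
# Illustrative-only C-SCRM scoring sketch: risk is assessed per lifecycle
# stage and combined into one supplier score. Weights and values are invented.
WEIGHTS = {"design": 0.2, "acquisition": 0.25, "distribution": 0.2,
           "maintenance": 0.25, "disposal": 0.1}

def supplier_risk(scores: dict) -> float:
    """Weighted average of per-stage risk scores (0 = low risk, 1 = high risk)."""
    return round(sum(WEIGHTS[stage] * scores[stage] for stage in WEIGHTS), 3)

vendor = {"design": 0.2, "acquisition": 0.6, "distribution": 0.4,
          "maintenance": 0.8, "disposal": 0.1}
print(supplier_risk(vendor))  # → 0.48
```

A real program would score each supplier against each of the lifecycle questions above and re-assess the weights as the threat landscape changes.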

Examples of past supply chain attacks

The following are examples of sources of recent supply chain attacks:

Hardware component attacks—When you think about it, OEMs are among the most logical places in a supply chain at which an adversary is likely to try to insert vulnerabilities. These vulnerabilities can be introduced into the end product through physical access while a component is being transported or delivered, during pre-production access in a factory or manufacturing facility, via a technical insertion point, or by other means.

Software component attacks—In 2016, Chinese hackers purportedly attacked TeamViewer software, a potential virtual invitation to view and access information on the computers of the millions of people all over the world who use the program.

People-perpetrated attacks—People are a common connector between the various steps and entities in any supply chain and are subject to the influence of corrupting forces. Nation-states and other “cause-related” organizations prey on people susceptible to bribery and blackmail. In 2016, the Indian tech giant Wipro had three employees arrested in a suspected security breach of customer records for the U.K. company TalkTalk.

Business processes—Business practices (including services), both upstream and downstream, are also vulnerable sources of infiltration. For example, Monster.com suffered an exposed database when one of its customers failed to adequately protect a web server storing resumes, which contain email and physical addresses along with other personal information, including immigration records. Issues like this can be avoided if typical business practices, such as risk profiling and assessment services, are in place and are regularly reviewed for compliance with changing security and privacy requirements. This includes policies for “bring your own” IoT devices, another fast-growing vulnerability.

Big picture practical advice

Here’s some practical advice to take into consideration:

Watch out for copycat attacks—If a data heist worked with one corporate victim, it’s likely to work with another. This means once a new weapon is introduced into the supply chain, it is likely to be re-used—in some cases, for years.

To prove the point, here are some of the many examples of cybercrimes that reuse code stolen from legal hackers and deployed by criminals.

  • The Conficker botnet, built on the MS08-067 exploit, is over a decade old and is still found on millions of PCs every month.
  • The criminal group known as the Shadow Brokers leaked the EternalBlue exploit code developed by the U.S. National Security Agency. After the illegal leak, North Korea used the code to execute WannaCry in 2017, which spread to 150 countries and infected over 200,000 computers.
  • Turla, a purportedly Russian group, has been active since 2008, infecting computers in the U.S. and Europe with spyware that establishes a hidden foothold in infected networks and searches for and steals data.

Crafting a successful cyberattack from scratch is not a simple undertaking. It requires technical know-how, resources to create or acquire new working exploits, and the technique to then deliver the exploit, to ensure that it operates as intended, and then to successfully remove information or data from a target.

It’s much easier to take a successful exploit and simply recycle it, saving development and testing costs as well as the cost of probing hardened targets, since attackers can aim at known soft targets and avoid defenses that might detect the exploit. We advise you to stay in the know about past attacks, as any one of them may come your way. Just ask yourself: Would your company survive a similar attack? If the answer is no, or even maybe, then fix your vulnerabilities or at the very least make sure you have mitigation in place.
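Because exploits are recycled for years, even a simple known-hash lookup against shared indicators catches a surprising amount of reused tooling. This sketch assumes a hypothetical feed of known-bad SHA-256 hashes:

```python
import hashlib

# Known-hash check sketch: recycling cuts both ways, since defenders who share
# indicators can detect reused tooling cheaply. The "feed" here is invented.
known_bad = {hashlib.sha256(b"old-worm-payload").hexdigest()}

def is_known_threat(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() in known_bad

print(is_known_threat(b"old-worm-payload"))  # recycled payload → True
print(is_known_threat(b"benign-file"))       # → False
```

Exact-hash matching is trivially evaded by repacking, which is why real defenses layer it with behavioral detection, but it remains a cheap first filter against wholesale reuse.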

Know your supply chain—Like many information and operational technology businesses, you probably depend on a global system of suppliers. But do you know where the various technology components of your business come from? Who makes the hardware you use—and where do the parts to make that hardware come from? Your software? Have you examined how your business practices and those of your suppliers keep you safe from bad actors with a financial interest in undermining the most basic components of your business? Take some time to look at these questions and see how you’d score yourself and your suppliers.

Looking ahead

Hopefully, the above information will encourage (if not convince) you to take a big picture look at who and what your supply chain consists of and make sure that you have defenses in place that will protect you from all the known attacks that play out in cyberspace each day.

In the remainder of the “Guarding against supply chain attacks” series, we’ll drill down into supply chain components to help make you aware of potential vulnerabilities and supply advice to help you protect your company from attack.

Stay tuned for these upcoming posts:

  • Part 2—Explores the risks of hardware attacks.
  • Part 3—Examines ways in which software can become compromised.
  • Part 4—Looks at how people and processes can expose companies to risk.
  • Part 5—Summarizes our advice with a look to the future.

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

To learn more about how you can protect your time and empower your team, check out the cybersecurity awareness page this month.

The post Guarding against supply chain attacks—Part 1: The big picture appeared first on Microsoft Security.

Microsoft’s 4 principles for an effective security operations center

October 15th, 2019 No comments

The Microsoft Cyber Defense Operations Center (CDOC) fields trillions of security signals every day. How do we identify and respond to the right threats? One thing that won’t surprise you: we leverage artificial intelligence (AI), machine learning, and automation to narrow the focus. But technology is not enough. Our people, culture, and process are just as critical.

You may not have trillions of signals to manage, but I bet you will still get a lot of value from a behind-the-scenes look at the CDOC. Even the small companies that I’ve worked with have improved the effectiveness of their security operations centers (SOCs) based on learnings from Microsoft.

Watch the operations episode of the CISO Spotlight Series—The people behind the cloud to get my take and a sneak peek at our team in action. In the video, I walk you through four principles:

  1. It starts with assessment.
  2. Invest in the right technology.
  3. Hire a diverse group of people.
  4. Foster an innovative culture.

It starts with assessment

Before you make any changes, it helps to identify the gaps in your current security system. Take a look at your most recent attacks to see if you have the right detections in place. Offense should drive your defenses. For example:

  • Has your organization been victim to password spray attacks?
  • Have there been brute force attacks against endpoints exposed to the internet?
  • Have you uncovered advanced persistent threats?

Understanding where your organization is vulnerable will help you determine what technology you need. If you need further help, I would suggest using the MITRE ATT&CK Framework.
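As a purely illustrative sketch (not an official assessment tool), a gap review can be as simple as tabulating the ATT&CK techniques observed in recent attacks against your current detection inventory. The technique IDs below are real ATT&CK entries, but the incident and detection data are hypothetical:

```python
# Hypothetical gap assessment: compare ATT&CK techniques observed in
# recent attacks against the techniques you currently detect.

# Techniques seen in recent incidents (IDs are real ATT&CK entries;
# the incident history itself is invented for illustration).
observed = {
    "T1110.003": "Password Spraying",
    "T1110": "Brute Force",
    "T1566": "Phishing",
}

# Techniques your SOC currently has detections for (hypothetical inventory).
detections = {"T1566", "T1110"}

def detection_gaps(observed, detections):
    """Return techniques observed in attacks but lacking a detection."""
    return {tid: name for tid, name in observed.items()
            if tid not in detections}

for tid, name in sorted(detection_gaps(observed, detections).items()):
    print(f"No detection for {tid} ({name})")
```

The output of a pass like this becomes the shopping list for the next step: investing in technology that closes the specific gaps your own attack history reveals.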

Invest in the right technology

As you evaluate technology solutions, think of your security operations as a funnel. At the very top are countless threat signals. There is no way your team can address all of them. This leads to employee burnout and puts the organization at risk. Aim for automation to handle 20-25 percent of incoming events. AI and machine learning can correlate signals, enrich them with other data, and resolve known incidents.

Invest in good endpoint detection, network telemetry, a flexible security information and event management (SIEM) system like Azure Sentinel, and cloud workload protection solutions. The right technology will reduce the volume of signals that filter down to your people, empowering them to focus on the problems that machines can’t solve.
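To make the funnel concrete, here is a minimal, hypothetical sketch of that triage logic: correlate duplicate signals into single events, auto-resolve incident types that have a known playbook, and pass only the remainder to analysts. The signal fields and playbook names are invented for illustration, not any product’s API:

```python
from collections import Counter

# Incident types with an automated response playbook (hypothetical).
AUTO_RESOLVABLE = {"known_malware_hash", "blocked_phish_url"}

def triage(signals):
    """Collapse duplicate signals, auto-resolve known incident types,
    and return only the events that need a human analyst."""
    # Correlate: collapse identical (type, source) pairs into one event.
    correlated = Counter((s["type"], s["source"]) for s in signals)
    needs_analyst = []
    auto_resolved = 0
    for (sig_type, source), count in correlated.items():
        if sig_type in AUTO_RESOLVABLE:
            auto_resolved += 1  # handled by automation, no analyst needed
        else:
            needs_analyst.append(
                {"type": sig_type, "source": source, "count": count})
    return needs_analyst, auto_resolved

signals = [
    {"type": "known_malware_hash", "source": "host-1"},
    {"type": "known_malware_hash", "source": "host-1"},
    {"type": "anomalous_login", "source": "user-7"},
]
queue, resolved = triage(signals)
print(f"{resolved} auto-resolved, {len(queue)} for analysts")
```

In a real SOC the correlation and enrichment happens inside the SIEM, but the shape is the same: automation narrows the top of the funnel so people see only what machines can’t resolve.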

Hire a diverse group of people

The people you hire matter. I attribute much of our success to the fact that we hire people who love to solve problems. You can model this approach in your SOC. Look for computer scientists, security professionals, and data scientists—but also try to find people with nontraditional backgrounds like military intelligence, law enforcement, and liberal arts. People with a different perspective can introduce creative ways of looking at a problem. For example, Microsoft has had a lot of success with veterans from the military.

I also recommend organizing your SOC into specialized, tiered teams. It gives employees a growth path and allows them to focus on areas of expertise. Microsoft uses a three-tiered approach:

  • Tier 1 analysts—These analysts are the front line. They manage the alerts generated by our SIEM and focus on high-speed remediation over a large number of events.
  • Tier 2 analysts—This team tackles alerts that require a deeper level of analysis. Many of these events have been escalated up from Tier 1, but Tier 2 analysts also monitor alerts to identify and triage the complex cases.
  • Tier 3 analysts—These are the threat hunters. They use sophisticated tools to proactively uncover advanced threats and hidden adversaries.
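A hypothetical sketch of how alerts might be routed across tiers like these — the severity thresholds and field names are invented for illustration, not Microsoft’s actual routing rules:

```python
def route_alert(alert):
    """Route an alert to a SOC tier (illustrative rules only)."""
    if alert.get("proactive_hunt"):
        return 3          # Tier 3: proactive threat hunting
    if alert["severity"] >= 7 or alert.get("multi_stage"):
        return 2          # Tier 2: deeper analysis needed
    return 1              # Tier 1: high-speed, high-volume remediation

assert route_alert({"severity": 3}) == 1
assert route_alert({"severity": 8}) == 2
assert route_alert({"severity": 2, "proactive_hunt": True}) == 3
```

The point of the tiers is the same regardless of the exact rules: route the bulk of events to fast remediation, reserve deep analysis for the complex cases, and free your most experienced people to hunt.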

For a more detailed look at how Microsoft has structured our team, read Lessons learned from the Microsoft SOC—Part 2a: Organizing people.

Foster an innovative culture

Culture influences SOC performance by guiding how people treat each other and approach their work. Well-defined career paths and roles are one way to influence your culture. People want to know how their work matters and contributes to the organization. As you build your processes and team, consider how you can encourage innovation, diversity, and teamwork.

Read how the CDOC creates culture in Lessons learned from the Microsoft SOC—Part 1.

Learn more

To learn more about how to run an effective SOC, see the Lessons learned from the Microsoft SOC series referenced above.

The post Microsoft’s 4 principles for an effective security operations center appeared first on Microsoft Security.

Patching as a social responsibility

October 9th, 2019 No comments

In the wake of the devastating (Not)Petya attack, Microsoft set out to understand why some customers weren’t applying cybersecurity hygiene, such as security patches, which would have helped mitigate this threat. We were particularly concerned with why patches hadn’t been applied, as they had been available for months and had already been used in the WannaCrypt worm—which clearly established a “real and present danger.”

We learned a lot from this journey, including how important it is to build clearer industry guidance and standards on enterprise patch management. To help make it easier for organizations to plan, implement, and improve an enterprise patch management strategy, Microsoft is partnering with the U.S. National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE).

NIST and Microsoft are extending an invitation for you to join this effort if you’re a:

  • Vendor—Any vendor who has technology offerings to help with patch management (scan, report, deploy, measure risk, etc.).
  • Organization or individual—All those who have tips and lessons learned from a successful enterprise patch management program (or lessons learned from failures, challenges, or other situations).

If you have pertinent learnings that you can share, please reach out to cyberhygiene@nist.gov.

During this journey, we also worked closely with additional partners and learned from their experience in this space, including the:

  • Center for Internet Security (CIS)
  • U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) (formerly US-CERT / DHS NCCIC)

A key part of this learning journey was to sit down and listen directly to our customers’ challenges. Microsoft visited a significant number of customers in person (I personally joined several of these visits) to share what we learned—which became part of the jointly endorsed mitigation roadmap—and to have frank, open discussions about why organizations really aren’t applying security patches.

While the discussions mostly went in expected directions, we were surprised at how many challenges organizations had on processes and standards, including:

  • “What sort of testing should we actually be doing for patch testing?”
  • “How fast should I be patching my systems?”

This articulated need for good reference processes was further validated by a common observation: “testing” a patch before deployment often consisted solely of asking in an online forum whether anyone else had issues with it.
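One common reference process that real testing can feed into is a ring-based rollout: deploy a patch to successively larger groups of machines, promoting it only when the previous ring stays healthy. A minimal, hypothetical sketch (the ring names, patch ID, and health checks are invented for illustration):

```python
# Hypothetical ring-based patch rollout: promote a patch through
# successively larger deployment rings, halting on the first failure.

RINGS = ["canary", "early_adopters", "broad", "critical_servers"]

def roll_out(patch_id, deploy, health_check):
    """Deploy patch_id ring by ring; stop and report on first failure.
    `deploy` and `health_check` are callables supplied by the caller."""
    for ring in RINGS:
        deploy(patch_id, ring)
        if not health_check(ring):
            return f"halted at {ring}"
    return "fully deployed"

# Simulated environment where the 'broad' ring reports a failure.
log = []
result = roll_out(
    "example-patch",
    deploy=lambda p, r: log.append(r),
    health_check=lambda r: r != "broad",
)
print(result)  # -> "halted at broad"
```

The design choice that matters here is the early halt: a bad patch stops at a small blast radius instead of reaching critical servers, which answers both of the questions above — test by observing each ring, and patch as fast as each ring’s health allows.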

This realization guided the discussions with our partners towards creating an initiative in the NIST NCCoE in collaboration with other industry vendors. This project—kicking off soon—will build common enterprise patch management reference architectures and processes, have relevant vendors build and validate implementation instructions in the NCCoE lab, and share the results in the NIST Special Publication 1800 practice guide for all to benefit.

Applying patches is a critical part of protecting your system, and we learned that while it isn’t as easy as security departments think, it isn’t as hard as IT organizations think.

In many ways, patching is a social responsibility because of how much society has come to depend on technology systems that businesses and other organizations provide. This situation is exacerbated today as almost all organizations undergo digital transformations, placing even more social responsibility on technology.

Ultimately, we want to make it easier for everyone to do the right thing and are issuing this call to action. If you’re a vendor that can help or if you have relevant learnings that may help other organizations, please reach out to cyberhygiene@nist.gov. Now!


The post Patching as a social responsibility appeared first on Microsoft Security.

Designing a COM library for Rust

October 8th, 2019 No comments

I interned with Microsoft as a Software Engineering Intern in the MSRC UK team in Cheltenham this past summer. I worked in the Safe Systems Programming Language (SSPL) group, which explores safe programming languages as a proactive measure against memory-safety related vulnerabilities. This blog post describes the project that I have been working on under …


The post Designing a COM library for Rust appeared first on Microsoft Security Response Center.

October 2019 security updates are available!

October 8th, 2019 No comments

We have released the October security updates to provide additional protections against malicious attackers. As a best practice, we encourage customers to turn on automatic updates. More information about this month’s security updates can be found in the Security Update Guide. As a reminder, Windows 7 and Windows Server 2008 R2 will be out of …


The post October 2019 security updates are available! appeared first on Microsoft Security Response Center.