Archive for the ‘Secured-core PCs’ Category

Force firmware code to be measured and attested by Secure Launch on Windows 10

September 1st, 2020

You cannot build something great on a weak foundation – and security is no exception.

Windows includes important security features like Hypervisor-protected code integrity (HVCI) and Windows Defender Credential Guard that protect users from advanced hardware and firmware attacks. For these features to do their jobs properly, the platform's firmware and hardware must be trustworthy and healthy. The integrity of the system is verified by a chain of trust that validates that every component in the boot process is cryptographically signed by a trusted source; if that chain is maliciously tampered with, the security of every operating system feature that uses the firmware and hardware as a fundamental building block is compromised.

Without these detection and prevention capabilities, the system cannot detect and block malicious software that runs before the operating system initializes, or during the boot process itself. The malicious software could then grant itself elevated privileges, expand its foothold, and persist on the system undetected. On Secured-core PCs, Secure Launch, a technology that leverages the principle of Dynamic Root of Trust for Measurement (DRTM), is built in and enabled by default to greatly increase protection from these sophisticated boot attacks. By leveraging built-in silicon instructions or firmware enclaves, Secure Launch allows a system to freely boot untrusted code initially, but shortly afterwards launches the system into a trusted state by taking control of all CPUs and forcing any untrusted code down a well-known and measured code path to verify it isn't malicious. This removes early Unified Extensible Firmware Interface (UEFI) code from the trust boundary, meaning systems are better protected against bugs or exploits in UEFI after the Secure Launch, combating an entire class of threats.

For some time, Windows devices have been able to leverage a hardware-based root of trust to help ensure unauthorized firmware or software does not take root before the Windows bootloader launches. This root of trust comes from a UEFI feature called Secure Boot. Secure Boot leverages a Trusted Platform Module (TPM) to take cryptographic measurements of each piece of firmware or software during the early boot process. This technique of measuring these static early boot UEFI components is called the Static Root of Trust for Measurement (SRTM).

Because thousands of PC vendors produce numerous models with different UEFI BIOS versions, there is an incredibly large number of possible SRTM measurements at startup. Two techniques can be used to establish trust here: maintain a list of known 'bad' SRTM measurements (a block list), or a list of known 'good' SRTM measurements (an allow list). Each option has a drawback:

  1. A list of known 'bad' SRTM measurements allows a hacker to change just one bit in a component to produce an entirely new SRTM hash that has never been listed (see the sketch after this list). This means the SRTM flow is inherently brittle – a minor change can invalidate the chain of trust.
  2. A list of known ‘good’ SRTM measurements requires each new BIOS/PC combination measurement to be carefully added, which can be slow. In addition, a bug fix for UEFI code can take a long time to design, build, retest, validate, and redeploy.
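
To see why a block list of measurements is brittle, consider how hashing behaves: flipping a single bit of the measured component yields a completely unrelated digest. The toy program below is a minimal sketch of this effect; FNV-1a stands in for the TPM's real measurement algorithms (such as SHA-256), and the firmware bytes are made up.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy 64-bit FNV-1a hash, standing in for a real measurement digest. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 14695981039346656037ULL;       /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;                  /* FNV prime */
    }
    return h;
}

int main(void)
{
    uint8_t firmware[64];
    memset(firmware, 0xAB, sizeof(firmware));   /* pretend UEFI component */

    uint64_t original = fnv1a(firmware, sizeof(firmware));
    firmware[10] ^= 0x01;                       /* attacker flips one bit */
    uint64_t modified = fnv1a(firmware, sizeof(firmware));

    /* The two measurements share no structure, so a block list entry for
       the original hash says nothing about the modified component. */
    printf("original: %016llx\nmodified: %016llx\n",
           (unsigned long long)original, (unsigned long long)modified);
    return 0;
}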

As Windows relies on the Hypervisor being secure, trusted, and unmodified to implement numerous security technologies, it is important to protect it from any potential threats that can arise from these issues. System Guard Secure Launch was designed and introduced in Windows 10 version 1809 to address these drawbacks.

Leveraging a Dynamic Root of Trust to measure code integrity

Secure Launch is the first line of defense against exploits and vulnerabilities that try to take advantage of early-boot flaws or bugs. Firmware enclaves and built-in silicon instructions allow systems to boot into a trusted state by forcing untrusted, exploitable code down a specific and measured path before launching into a trusted state.

To achieve a security boundary between the UEFI firmware and later OS code, the Windows boot environment is divided into two phases. The first phase runs with UEFI and leverages boot services that are considered untrusted for Secure Launch, and the second phase is the trusted portion that runs without firmware services after the DRTM event. This trusted phase is referred to as the Trusted Computing Base (TCB) launch phase. The Trusted Computing Base includes the minimally scoped firmware enclave and hardware necessary to perform a DRTM event.

The phase with firmware support utilizes the traditional boot binaries Boot Manager and Winload. In this model, Winload no longer prepares the OS and its data structures but instead prepares enough data in memory for the TCB phase of the boot environment to operate without firmware. This includes loading all unexpanded binaries needed for the OS into memory, as well as staging other firmware- or disk-sourced information. All data, binaries, and associated storage structures are validated by the TCB before use.

The TCB phase of the boot environment is started by the new TCB Launch application. This binary is measured into the DRTM TPM registers and starts the chain of trust for the launched OS. TCB Launch ensures the security of the system, and then prepares the OS for execution by loading and validating all binaries as well as building data structures for OS launch.

Although all OS data is sourced from disk by Winload and firmware, the TCB phase validates all signatures and code integrity before use. TCB Launch itself is not directly code integrity checked by this phase, but the root of trust measurement provided by the DRTM event is used to attest the authenticity of the binary. For the TCB phase of boot to continue to be secure, the following state must be verified by the DRTM event and TCB Launch:

  1. Continuous protection against Direct Memory Access (DMA) of TCB Launch and OS memory
  2. Hardware description of RAM is accurate
  3. Security-critical hardware descriptions, such as IOMMU structures, must be validated
  4. Memory will be cleared upon an unexpected reset from the TCB

After TCB Launch, control of the DRTM environment and associated controls are transferred to the Hypervisor. The Hypervisor is then responsible for managing DMA protections, memory clearing protections, and other DRTM-related state control.

DRTM allows the platform to mitigate real-world attacks that attempt to modify the hypervisor or perform other malicious actions during early boot or hibernate. For example, an S3 boot script exploit that attempts to tamper with the hypervisor across suspend/resume would be mitigated by DRTM.

PCILeech is another common tool, frequently leveraged by attackers, used to perform DMA-style reads and writes over PCIe. Kernel DMA protections help ensure that malicious, unauthorized peripherals cannot access memory, and even if an attacker does gain a foothold in early-boot, pre-DRTM firmware, the DRTM event insulates the Windows environment from these exploits.

System Management Mode isolation protections can help enforce conditional access

Another dimension of protection that comes with Secured-core PCs is System Management Mode (SMM) protection. SMM is a special-purpose CPU mode in x86 processors that handles power management, hardware configuration, thermal monitoring, and anything else the manufacturer deems useful. If attackers can exploit SMM, they could attempt to bypass some of the checks in Secure Launch or exploit the runtime operating system. By leveraging new hardware-based supervision and attestation, Secured-core PCs can measure and detect when SMM is trying to access a platform resource (like memory, IO, or certain CPU registers) in violation of policy. This adds an additional layer of hardening to the Secure Launch event and to Secured-core PCs overall.

SMM execution takes place in the form of System Management Interrupt (SMI) handlers. During the DRTM event, SMIs are suspended to allow the DRTM event and the beginning of the TCB to execute without SMM interference. A system's SMM isolation is based on an access policy, provided by the platform firmware, stating what SMM does or does not require access to. This policy is enforced on SMM by a silicon vendor-specific mechanism, and a copy of the policy is provided to the boot loader for evaluation. TCB Launch checks that the isolation policy being enforced on the system meets the minimum Windows requirements. If the policy is not compliant, for example because SMM is able to access OS memory, then TCB Launch may destroy DRTM state and clear OS secrets. TCB Launch resumes SMIs after it has completed its evaluation and taken any necessary precautions.
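
For illustration, the sketch below models the kind of policy evaluation just described. It is not Microsoft's implementation: the structure layout, function names, and the single "minimum requirement" shown are all hypothetical.

#include <stdbool.h>

/* Hypothetical summary of a firmware-provided SMM access policy. */
typedef struct {
    bool CanAccessOsMemory;   /* does SMM retain access to OS memory?   */
    bool CanWriteIoPorts;     /* can SMI handlers write arbitrary IO?   */
    bool CanWriteMsrs;        /* can SMI handlers write arbitrary MSRs? */
} SMM_ACCESS_POLICY;

/* Illustrative stand-ins for the actions described in the text. */
static void DestroyDrtmState(void) { /* invalidate DRTM measurements */ }
static void ClearOsSecrets(void)   { /* render sealed secrets unusable */ }

/* A sketched "minimum Windows requirement": SMM must not be able to
   touch OS memory. Real policy evaluation covers far more state. */
static bool PolicyMeetsMinimumRequirements(const SMM_ACCESS_POLICY *p)
{
    return !p->CanAccessOsMemory;
}

void EvaluateSmmIsolationPolicy(const SMM_ACCESS_POLICY *policy)
{
    if (!PolicyMeetsMinimumRequirements(policy)) {
        DestroyDrtmState();   /* as described above: destroy DRTM state */
        ClearOsSecrets();     /* and clear OS secrets */
    }
    /* SMIs are resumed once evaluation completes. */
}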

Exploits that previously looked to leverage SMM vulnerabilities to read or write these sensitive resources (memory, IO, or certain CPU registers) to access secrets, or to potentially modify the Hypervisor, are no longer permitted access as part of the policy evaluation. A violation detected at boot destroys the DRTM state and prevents access to previously sealed OS secrets and keys. Microsoft has worked with silicon partners and OEMs to ensure that capable Secured-core devices have SMM authored in a way that meets the SMM policy described, hardening them against this class of attacks. The strength of the ecosystem partnership between Microsoft, silicon vendors, and OEMs helps take the security burden of protecting SMM off of security operations teams. Recent attacks leveraging SMI handler vulnerabilities are examples of the types of scenarios mitigated by the described SMM protections: when an exploit attempts to leverage a bug in an SMI handler to gain code execution privileges in SMM and modify OS memory, the attempted OS memory access falls outside the policy boundary and is flagged in the attestation report. The state of DRTM and the SMM protections can be used to help strengthen conditional access strategies in organizations by gating access to sensitive resources based on the health of these hardware and firmware security features.

AMD's SMM protection component leverages an SMM supervisor running at a higher processor privilege level (CPL0) to execute SMI handler logic at a lower processor privilege level (CPL3), isolating and protecting resources from SMI handler access and protecting even the supervisor itself from tampering. Fault handlers are used to protect IO ports and MSRs, and CR3 lockdown is enforced to protect memory and MMIO components. The SMM supervisor is cryptographically signed and authenticated, and is measured into PCR[17] during SKINIT launch. OEMs include support for SKINIT and AMD's SMM protections by including the necessary packages in the OS images that are applied to Secured-core PCs.

Getting started with Secure Launch and SMM Protections

Enabling System Guard Secure Launch on a platform may be achieved when the following support is present:

  • Intel, AMD, or ARM virtualization extensions
  • Trusted Platform Module (TPM) 2.0
  • On Intel: TXT support in the BIOS
  • On AMD: SKINIT package must be integrated in the Windows system image
  • On Qualcomm: a DRTM TrustZone application and support for SMC memory protections
  • Kernel DMA Protection

Further configuration information and requirements can be found here. On Secured-core PCs, virtualization-based security is supported and hardware-backed security features like System Guard Secure Launch with SMM protections are enabled by default. Learn more about the line of Secured-core PCs available today.


Nazmus Sakib

Enterprise and OS Security


Introducing Kernel Data Protection, a new platform security technology for preventing data corruption

July 8th, 2020

Attackers, confronted by security technologies that prevent memory corruption, like Code Integrity (CI) and Control Flow Guard (CFG), are predictably shifting their techniques towards data corruption. Attackers use data corruption techniques to target system security policy, escalate privileges, tamper with security attestation, modify "initialize once" data structures, and more.

Kernel Data Protection (KDP) is a new technology that prevents data corruption attacks by protecting parts of the Windows kernel and drivers through virtualization-based security (VBS). KDP is a set of APIs that provide the ability to mark some kernel memory as read-only, preventing attackers from ever modifying protected memory. For example, we’ve seen attackers use signed but vulnerable drivers to attack policy data structures and install a malicious, unsigned driver. KDP mitigates such attacks by ensuring that policy data structures cannot be tampered with.

The concept of protecting kernel memory as read-only has valuable applications for the Windows kernel, inbox components, security products, and even third-party drivers like anti-cheat and digital rights management (DRM) software. On top of the important security and tamper protection applications of this technology, other benefits include:

  • Performance improvements – KDP lessens the burden on attestation components, which would no longer need to periodically verify data variables that have been write-protected
  • Reliability improvements – KDP makes it easier to diagnose memory corruption bugs that don’t necessarily represent security vulnerabilities
  • Providing an incentive for driver developers and vendors to improve compatibility with virtualization-based security, improving adoption of these technologies in the ecosystem

KDP uses technologies that are supported by default on Secured-core PCs, which implement a specific set of device requirements that apply the security best practices of isolation and minimal trust to the technologies that underpin the Windows operating system. KDP enhances the security provided by the features that make up Secured-core PCs by adding another layer of protection for sensitive system configuration data.

In this blog we’ll share technical details about how Kernel Data Protection works and how it’s implemented on Windows 10, with the goal of inspiring and empowering driver developers and vendors to take full advantage of this technology designed to tackle data corruption attacks.

Kernel Data Protection: An overview

In VBS environments, the normal NT kernel runs in a virtualized environment called VTL0, while the secure kernel runs in a more secure and isolated environment called VTL1. More details on VBS and the secure kernel are available on Channel 9 here and here. KDP is intended to protect drivers and software running in the Windows kernel (i.e., the OS code itself) against data-driven attacks. It is implemented in two parts:

  • Static KDP enables software running in kernel mode to statically protect a section of its own image from being tampered with from any other entity in VTL0.
  • Dynamic KDP helps kernel-mode software to allocate and release read-only memory from a “secure pool”. The memory returned from the pool can be initialized only once.

The memory managed by KDP is always verified by the secure kernel (VTL1) and protected using second level address translation (SLAT) tables by the hypervisor. As a result, no software running in the NT kernel (VTL0) will ever be able to modify the content of the protected memory.

Both dynamic and static KDP are already available in the latest Windows 10 Insider Build and work with any kind of memory except executable pages. Protection for executable pages is already provided by hypervisor-protected code integrity (HVCI), which prevents unsigned memory from ever being executable, enforcing the W^X condition (a page may be writable or executable, but never both). HVCI and the W^X condition are not explained in this article (refer to the upcoming Windows Internals book for further details).

Static KDP

A driver that wants a section of its image protected through static KDP should call the MmProtectDriverSection API, which has the following prototype:

NTSTATUS MmProtectDriverSection (PVOID AddressWithinSection, SIZE_T Size, ULONG Flags)

A driver specifies an address located inside a data section and, optionally, the size of the protected area and some flags. As of this writing, the “size” parameter is reserved for future use: the entire data section where the address resides will always be protected by the API.

If the function succeeds, the memory backing the static section becomes read-only for VTL0 and is protected through the SLAT. Unloading a driver that has a protected section is not allowed; attempting to do so results, by design, in a blue screen error. However, sometimes a driver needs to be unloadable, so we have introduced the MM_PROTECT_DRIVER_SECTION_ALLOW_UNLOAD flag (1). If the caller specifies it, the system will be able to unload the target driver; in this case, the protected section is first unprotected and then released by NtUnloadDriver.
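
As a concrete illustration, a driver might place its policy data in a dedicated data section and seal that section after one-time initialization. This is a minimal sketch assuming a WDK recent enough to declare MmProtectDriverSection; the section name, variable, and fallback flag definition are illustrative, not prescriptive.

#include <ntddk.h>

#ifndef MM_PROTECT_DRIVER_SECTION_ALLOW_UNLOAD
#define MM_PROTECT_DRIVER_SECTION_ALLOW_UNLOAD 0x00000001 /* value (1) per the text */
#endif

/* Put the data to protect in its own section: the entire section that
   contains the passed address is what static KDP makes read-only. */
#pragma section("KDPDATA", read, write)
__declspec(allocate("KDPDATA")) static ULONG g_SecurityPolicy = 0;

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    /* Initialize once, before sealing the section. */
    g_SecurityPolicy = 0x1;

    /* Size is reserved for future use (pass 0); ALLOW_UNLOAD keeps the
       driver unloadable, with the section unprotected and released by
       NtUnloadDriver when the driver is unloaded. */
    return MmProtectDriverSection(&g_SecurityPolicy, 0,
                                  MM_PROTECT_DRIVER_SECTION_ALLOW_UNLOAD);
}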

Dynamic KDP

Dynamic KDP allows a driver to allocate and initialize read-only memory using services provided by a secure pool, which is managed by the secure kernel. The consumer first creates a secure pool context associated with a tag. All of the consumer’s future memory allocations will be associated with the created secure pool context. After the context is created, read-only allocations can be performed through a new extended parameter to the ExAllocatePool3 API:

PVOID ExAllocatePool3 (POOL_FLAGS Flags, SIZE_T NumberOfBytes, ULONG Tag, PCPOOL_EXTENDED_PARAMETER ExtendedParameters, ULONG Count);

The caller can then specify the size of the allocation and the initial buffer from where to copy the memory in a POOL_EXTENDED_PARAMS_SECURE_POOL data structure. The returned memory region can’t be modified by any entity running in VTL0. In addition, at allocation time, the caller supplies a tag and a cookie value, which are encoded and embedded into the allocation. The consumer can, at any time, validate that an address is within the memory range reserved for dynamic KDP allocations and that the expected cookie and tag are in fact encoded into a given allocation. This allows the caller to check that their pointer to a secure pool allocation has not been switched with a different allocation.

As with static KDP, by default the memory region can't be freed or modified. The caller can specify at allocation time that the allocation should be freeable or modifiable using the SECURE_POOL_FLAGS_FREEABLE (1) and SECURE_POOL_FLAG_MODIFIABLE (2) flags. Using these flags reduces the security of the allocation, but allows dynamic KDP memory to be used in scenarios where never freeing allocations would be infeasible, such as allocations that are made per process on the machine.
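
Putting the pieces together, a dynamic KDP allocation might look like the hedged sketch below. The POOL_EXTENDED_PARAMS_SECURE_POOL field names, the PoolExtendedParameterSecurePool type value, and the SecurePoolParams union member are assumptions inferred from the description above rather than a documented contract, and the secure pool context handle is assumed to have been created beforehand along with its tag.

#include <ntddk.h>

static ULONG64 g_PolicyTemplate = 0x0123456789ABCDEFULL; /* data to seal read-only */

/* SecurePoolCtx: the secure pool context assumed to have been created
   earlier and associated with a tag. */
PVOID AllocateReadOnlyPolicy(HANDLE SecurePoolCtx)
{
    POOL_EXTENDED_PARAMS_SECURE_POOL secureParams = {0};
    POOL_EXTENDED_PARAMETER extParam = {0};

    secureParams.SecurePoolHandle = SecurePoolCtx;   /* assumed field name */
    secureParams.Buffer = &g_PolicyTemplate;         /* initial contents copied in */
    secureParams.Cookie = 0x4B647021;                /* encoded into the allocation */
    secureParams.SecurePoolFlags = 0;                /* default: not freeable/modifiable */

    extParam.Type = PoolExtendedParameterSecurePool; /* assumed type value */
    extParam.SecurePoolParams = &secureParams;       /* assumed union member */

    /* One extended parameter; the returned region is read-only in VTL0. */
    return ExAllocatePool3(POOL_FLAG_NON_PAGED, sizeof(g_PolicyTemplate),
                           'PDKd', &extParam, 1);
}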

Implementing KDP on Windows 10

As mentioned, both static KDP and dynamic KDP rely on the physical memory being protected by the SLAT in the hypervisor. When a processor supports the SLAT, it uses another layer for memory address translation. (Note: AMD implements the SLAT through “nested page tables”, while Intel uses the term “extended page tables”.)

The second-level address translation (SLAT)

When the hypervisor enables SLAT support and a VM is executing in VMX non-root mode, the processor translates an initial virtual address, called a Guest Virtual Address (GVA, or stage 1 virtual address on ARM64), into an intermediate physical address, called a Guest Physical Address (GPA, or IPA on ARM64). This translation is still managed by the page tables, addressed by the CR3 control register managed by the guest OS. The final result of the translation yields back to the processor a GPA, with the access protection specified in the guest page tables. Note that only software operating in kernel mode can interact with page tables; a rootkit usually operates in kernel mode and can indeed modify the protection of the intermediate physical pages.

The hypervisor helps the processor to translate the GPA using the extended (or nested) page tables. On non-SLAT systems, when a virtual address is not present in the TLB, the processor needs to consult all the page tables in the hierarchy to reconstruct the final physical address. As shown in Figure 1, the virtual address is split into four parts (on LA48 systems). Each part represents the index into a page table of the hierarchy. The physical address of the initial PML4 table is specified by the CR3 register. This explains why the processor is always able to translate the address and get the next physical address of the next table in the hierarchy. It’s important to note that in each page table entry of the hierarchy, the NT kernel specifies a page protection through a set of attributes. The final physical address is accessible only if the sum of the protections specified in each page table entry allows it.


Figure 1. The X64 Stage 1 address translation (Virtual address to guest physical address)
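
To make the stage 1 walk concrete, the small self-contained sketch below decomposes a 48-bit (LA48) x64 virtual address into the four 9-bit table indexes plus the 12-bit page offset; the sample address is arbitrary.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t va = 0xFFFFF78000001234ULL;   /* arbitrary kernel-mode address */

    /* Each paging level consumes 9 bits of the address (512 entries per
       table); the low 12 bits are the offset within the 4KB page. */
    unsigned pml4_index = (va >> 39) & 0x1FF;
    unsigned pdpt_index = (va >> 30) & 0x1FF;
    unsigned pd_index   = (va >> 21) & 0x1FF;
    unsigned pt_index   = (va >> 12) & 0x1FF;
    unsigned offset     = va & 0xFFF;

    printf("PML4[%u] -> PDPT[%u] -> PD[%u] -> PT[%u] + offset 0x%03X\n",
           pml4_index, pdpt_index, pd_index, pt_index, offset);
    return 0;
}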

When the SLAT is on, the intermediate physical address specified in the guest's CR3 register needs to be translated to a real system physical address (SPA). The mechanism is similar: the hypervisor sets the nCR3 field of the active virtual machine control block (VMCB), which represents the currently executing VM, to the physical address of the nested (or extended) page tables (the equivalent field is called the "EPT pointer" in the Intel architecture). The nested page tables are built in a similar way to standard page tables, so the processor needs to scan the entire hierarchy to find the correct physical address, as illustrated in Figure 2. In the figure, "n" indicates nested page tables in the hierarchy, which are managed by the hypervisor, while "g" indicates guest page tables, which are managed by the NT kernel.


Figure 2. The X64 Stage 2 physical address translation (GPA to SPA)

As shown, the final translation of a guest virtual address to a system physical address goes through two translation types: GVA to GPA, configured by the guest VM's kernel, and GPA to SPA, configured by the hypervisor. Note that in the worst case the translation involves all four page hierarchy levels: each of the four guest page table reads, plus the final data access, is itself a GPA that requires a full four-level nested walk, resulting in 20 nested table lookups. This mechanism could be slow, and is mitigated by processor support for an enhanced TLB. In the TLB entries, another ID that identifies the currently executing VM is included (called the virtual processor identifier, or VPID, on Intel systems, and the address space ID, or ASID, on AMD systems), so the processor can cache the translation results of a virtual address belonging to two different VMs without any collision.


Figure 3. Nested entry of an NPT page table in the hierarchy

As highlighted in Figure 3, an NPT entry specifies multiple access protection attributes. This allows the hypervisor to further protect the system physical address (the NPT cannot be accessed by any entity except the hypervisor itself). When the processor attempts to read, write, or execute an address to which the NPTs disallow access, an NPT violation (EPT violation in the Intel architecture) is raised, and a VM exit is generated. A VM exit generated by an NPT violation does not happen frequently; in general, it is produced in nested configurations or when software MBEC is in use for HVCI. If the NPT violation happens for other reasons, the Microsoft Hypervisor injects an access violation exception into the current virtual processor (VP), which is managed by the guest OS in different ways, typically through a bug check if no exception handler elects to handle the exception.

Static KDP implementation

The SLAT protection is the main principle that allows KDP to exist. In Windows, dynamic and static KDP implementations are similar and are managed by the secure kernel. The secure kernel is the only entity that is able to emit the ModifyVtlProtectionMask hypercall to the hypervisor with the goal of modifying the SLAT access protection for a physical page mapped in the lower VTL0.

For static KDP, the NT kernel verifies that the driver is not a session driver and is not mapped with large pages. If either of these conditions exists, or if the section is discardable, static KDP can't be applied. If the entity that called the MmProtectDriverSection API did not request the target image to be unloadable, the NT kernel performs the first call into the secure kernel, which pins the normal address range (NAR) associated with the driver. The "pinning" operation prevents the address space of the driver from being reused, making the driver not unloadable. The NT kernel then brings all the pages that belong to the section into memory and makes them private (i.e., not addressed by the prototype PTEs). The pages are then marked as read-only in the leaf PTE structures (highlighted as "gPTE" in Figure 2). At this stage, the NT kernel can finally call the secure kernel to protect the underlying physical pages through the SLAT. The secure kernel applies the protection in two phases:

  1. Register all the physical pages that belong to the section and mark them as "owned by VTL0" by adding the proper NTEs (normal table entries) to the database and updating the underlying secure PFNs, which belong to VTL1. This allows the secure kernel to track the physical pages, which still belong to the NT kernel.
  2. Apply the read-only protection to the VTL0 SLAT table. The hypervisor uses one SLAT table and VMCB per VTL.

The target image’s section is now protected. No entity in VTL0 will be able to write to any of the pages belonging to the section. As highlighted, the secure kernel in this scenario has protected some memory pages that were initially allocated by the NT kernel in VTL0.

Dynamic KDP implementation

Dynamic KDP uses services provided by the new segment heap for allocating memory from a secure pool, which is almost entirely managed by the secure kernel.

In the early phases of its boot process, the NT memory manager calculates the randomized virtual base address of a 512GB region used for the secure pool, which spans exactly one of the 256 kernel PML4 entries. Later, in phase 1, the NT memory manager emits a secure call, internally named INITIALIZE_SECURE_POOL, which includes the calculated memory region and allows the secure kernel to initialize the secure pool.

The secure kernel creates a NAR representing the entire 512GB virtual region belonging to the unsecure NT kernel and initializes all the relative NTEs belonging to the NAR. The secure pool virtual address space in the secure kernel is 256GB wide, which means that the PML4 entry mapping it is shared with other content and is not at the same base address as the NT one. So, while initializing the secure pool descriptor, the secure kernel also calculates a delta value: the difference between the secure pool base address in the secure kernel and the one reserved in the NT kernel (as shown in Figure 4). This is important because it allows the secure kernel to specify to the NT kernel where to map a physical page belonging to the secure pool.


Figure 4. The secure pool VTL1-to-VTL0 delta value
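
As a toy illustration of this delta (all addresses below are made up): because the root partition's physical pages are identity-mapped across VTLs, only the virtual bases differ, and converting between the two views is simple pointer arithmetic.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical, randomized-at-boot base addresses (made-up values). */
    uint64_t ntPoolBase = 0xFFFFA80000000000ULL; /* VTL0 secure pool base */
    uint64_t skPoolBase = 0xFFFF918000000000ULL; /* VTL1 secure pool base */
    uint64_t delta      = skPoolBase - ntPoolBase;

    /* An allocation as seen from the secure kernel (VTL1)... */
    uint64_t skVa = skPoolBase + 0x5000;
    /* ...tells NT exactly where to map the same physical page in VTL0. */
    uint64_t ntVa = skVa - delta;

    printf("delta = %#llx, VTL0 VA = %#llx\n",
           (unsigned long long)delta, (unsigned long long)ntVa);
    return 0;
}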

When software running in the VTL0 kernel requests memory to be allocated from the secure pool, a secure call is made to the secure kernel, which invokes the internal RtlpHpAllocateHeap heap function (exposed in both VTLs). If the segment heap calculates that there are no free memory segments left in the secure pool, it calls the SkmmAllocatePoolMemory routine, which allocates new memory pages for the pool. The heap always tries to avoid committing new memory pages unless it really needs to.

Like the NtAllocateVirtualMemory API, which is exposed by the NT kernel, the SkmmAllocatePoolMemory API supports two kinds of operations: reserve and commit. A reserve operation allows the secure kernel’s memory manager to reserve some PTEs needed for the pool allocation. A commit operation actually allocates free physical pages.

Physical pages are allocated from a bundle of free pages that belong to the secure kernel, whose secure PFNs are in the secure state, and are mapped in VTL1's page tables, which means that the entire VTL1 paging table hierarchy is allocated. As with static KDP, the secure kernel sends the ModifyVtlProtectionMask hypercall to the hypervisor, with the goal of mapping the physical pages as read-only in the VTL0 SLAT table. After the pages become accessible to VTL0, the secure kernel copies the data specified by the caller and calls back into NT.

The NT kernel uses services provided by the memory manager to map the guest physical pages in VTL0. Remember that the entire root partition physical address space of both VTL0 and VTL1 is identity-mapped, meaning that a guest physical page number valid in VTL0 is also valid in VTL1. Thanks to the delta value calculated previously in phase 1 (Figure 4), the secure kernel can ask the NT memory manager to map a page belonging to the secure pool while knowing exactly which virtual address the page should be mapped to.

The allocation is then returned to the caller in VTL0. The underlying pages, as with static KDP, are no longer writable by any entity in VTL0.

Astute readers will note that the above description of KDP deals only with establishing SLAT protections for the guest physical address(es) backing a given protected memory region. KDP does not enforce how the virtual address range mapping a protected region is translated. Today, the secure kernel verifies only on a periodic basis that protected memory regions translate to the appropriate, SLAT-protected GPA. The design of KDP permits the possibility of future extensions to assert more direct control over the address translation hierarchy of protected memory regions.

Applications of KDP on inbox components

To demonstrate how KDP can provide value for inbox components, we're highlighting how it's implemented in two of them: CI.dll, the code integrity engine in Windows, and the Windows Defender System Guard runtime attestation engine.

First, CI.dll. The goal of using KDP is to protect internal policy state after it has been initialized (i.e., read from the registry or generated at boot time). These data structures are critical to protect: if they were tampered with, a driver that is properly signed but vulnerable could attack the policy data structures and then install an unsigned driver on the system. With KDP, this attack is mitigated by ensuring that the policy data structures cannot be tampered with.

Second, Windows Defender System Guard. To provide runtime attestation, the attestation broker is only allowed to connect to the attestation driver one time, because the state is stored in VTL1 memory. The driver stores the connection state in its memory, and this needs to be protected to prevent an attack that tries to reset the connection with a potentially tampered-with broker agent. KDP can lock these variables and ensure that only a single connection between the broker and the driver can be established.

Code integrity and Windows Defender System Guard are two of the critical features of Secured-core PCs. KDP enhances protection for these vital security systems and raises the bar that attackers need to overcome to compromise Secured-core PCs.

These are just a few examples of how useful protecting kernel and driver memory as read-only can be for the security and integrity of the system. As KDP is adopted, we expect to expand the scope of protection as we look to defend against data corruption attacks more broadly.

Getting started with KDP

Dynamic and static KDP have no requirements beyond those needed for running virtualization-based security. In ideal conditions, VBS can be started on any computer that supports:

  • Intel, AMD or ARM virtualization extensions
  • Second-level address translation: NPT for AMD, EPT for Intel, Stage 2 address translation for ARM
  • Optionally, hardware MBEC, which reduces the performance cost associated with HVCI

More info on the requirements for VBS can be found here. On Secured-core PCs, virtualization-based security is supported and hardware-backed security features are enabled by default. Customers can find Secured-core PCs from a variety of partner vendors, offering the comprehensive Secured-core security capabilities that are now enhanced by KDP.


Andrea Allievi

Security Kernel Core Team


Secured-core PCs help customers stay ahead of advanced data theft

May 13th, 2020

Researchers at the Eindhoven University of Technology recently revealed information around “Thunderspy,” an attack that relies on leveraging direct memory access (DMA) functionality to compromise devices. An attacker with physical access to a system can use Thunderspy to read and copy data even from systems that have encryption with password protection enabled.

Secured-core PCs provide customers with Windows 10 systems that come configured from OEMs with a set of hardware, firmware, and OS features enabled by default, mitigating Thunderspy and any similar attacks that rely on malicious DMA.

How Thunderspy works

Like any other modern attack, Thunderspy relies on not one but multiple building blocks being chained together. Below is a summary of how Thunderspy can be used to access a system where the attacker does not have the password needed to sign in. A video from the Thunderspy research team showing the attack is available here.

Step 1: A serial peripheral interface (SPI) flash programmer called Bus Pirate is plugged into the SPI flash of the device being attacked. This gives access to the Thunderbolt controller firmware and allows an attacker to copy it over to the attacker’s device

Step 2: The Thunderbolt Controller Firmware Patcher (tcfp), which is developed as part of Thunderspy, is used to disable the security mode enforced in the Thunderbolt firmware copied over using the Bus Pirate device in Step 1

Step 3: The modified insecure Thunderbolt firmware is written back to the SPI flash of the device being attacked

Step 4: A Thunderbolt-based attack device is connected to the device being attacked, leveraging the PCILeech tool to load a kernel module that bypasses the Windows sign-in screen

Diagram showing how the Thunderspy attack works

The result is that an attacker can access a device without knowing the sign-in password for the device. This means that even if a device was powered off or locked by the user, someone that could get physical access to the device in the time it takes to run the Thunderspy process could sign in and exfiltrate data from the system or install malicious software.

Secured-core PC protections

To counteract these targeted, modern attacks, Secured-core PCs use a defense-in-depth strategy that leverages features like Windows Defender System Guard and virtualization-based security (VBS) to mitigate risk across multiple areas, delivering comprehensive protection against attacks like Thunderspy.

Mitigating Steps 1 to 4 of the Thunderspy attack with Kernel DMA protection

Secured-core PCs ship with hardware and firmware that support Kernel DMA protection, which is enabled by default in the Windows OS. Kernel DMA protection relies on the Input/Output Memory Management Unit (IOMMU) to block external peripherals from starting and performing DMA unless an authorized user is signed in and the screen is unlocked. Watch this video from Microsoft Ignite 2019 to see how Windows mitigates DMA attacks.

This means that even if an attacker was able to copy a malicious Thunderbolt firmware to a device, the Kernel DMA protection on a Secured-core PC would prevent any accesses over the Thunderbolt port unless the attacker gains the user’s password in addition to being in physical possession of the device, significantly raising the degree of difficulty for the attacker.

Hardening protection for Step 4 with Hypervisor-protected code integrity (HVCI)

Secured-core PCs ship with hypervisor-protected code integrity (HVCI) enabled by default. HVCI utilizes the hypervisor to enable VBS and to isolate the code integrity subsystem, which verifies that all kernel code in Windows is signed, from the normal kernel. In addition to isolating the checks, HVCI also ensures that kernel code cannot be both writable and executable, so that unverified code never executes.

HVCI helps ensure that malicious kernel modules like the one used in Step 4 of the Thunderspy attack cannot execute easily, as the kernel module would need to be validly signed, not revoked, and not rely on overwriting executable kernel code.

Modern hardware to combat modern threats

A growing portfolio of Secured-core PC devices from the Windows OEM ecosystem is available for customers. They provide a consistent guarantee against modern threats like Thunderspy, with the variety of choices that customers expect when acquiring Windows hardware. You can learn more here:


Nazmus Sakib

Enterprise and OS Security 


Secured-core PCs: A brief showcase of chip-to-cloud security against kernel attacks

March 17th, 2020

Gaining kernel privileges by taking advantage of legitimate but vulnerable kernel drivers has become an established tool of choice for advanced adversaries. Multiple malware attacks, including RobbinHood, Uroburos, Derusbi, GrayFish, and Sauron, and campaigns by the threat actor STRONTIUM, have leveraged driver vulnerabilities (for example, CVE-2008-3431, CVE-2013-3956, CVE-2009-0824, CVE-2010-1592, etc.) to gain kernel privileges and, in some cases, effectively disable security agents on compromised machines.

Defending against these types of threats—whether those that live off the land by using what’s already on the machine or those that bring in vulnerable drivers as part of their attack chain—requires a fresh approach to security, one that combines threat defense on multiple levels: silicon, operating system, and cloud. Microsoft brought this chip-to-cloud approach with Azure Sphere, the integrated security solution for IoT devices and equipment. We brought the same approach to securing endpoint devices through Secured-core PCs.

Secured-core PCs combine virtualization, operating system, and hardware and firmware protection. Along with Microsoft Defender Advanced Threat Protection, Secured-core PCs provide end-to-end protection against advanced threats.

Hardware profile guaranteed to support the latest hardware-backed security features

Microsoft worked internally and externally with OEM partners Lenovo, HP, Dell, Panasonic, Dynabook, and Getac to introduce a new class of devices: Secured-core PCs. Secured-core PCs remove the need for customers to perform the complex decision flow of mapping which security features (e.g., hypervisor-protected code integrity (HVCI), virtualization-based security (VBS), Windows Defender Credential Guard) are supported by which hardware (e.g., TPM 1.2, 2.0, etc.).

With Secured-core PCs, customers no longer need to make this complex decision; they’re assured that these devices support the latest hardware-backed security features.

Hardware-backed security features enabled by default

Secured-core PCs have these hardware-backed security features enabled by default, removing the need for customers to test and enable them, which would otherwise require a combination of BIOS and OS settings changes.

Because both BIOS settings and OS settings are enabled out of the box with these devices, the burden to enable these features onsite is removed for customers. The following hardware-backed security features are enabled by default on any Secured-core PC:


Each security promise maps to the technical features that deliver it:

  • Protect with hardware root of trust: TPM 2.0 or higher, TPM support enabled by default, and virtualization-based security (VBS) enabled
  • Defend against firmware attack: Windows Defender System Guard enabled
  • Defend against vulnerable and malicious drivers: hypervisor-protected code integrity (HVCI) enabled
  • Defend against unverified code execution: arbitrary code generation and control flow hijacking protection (CFG, XFG, CET, ACG, CIG, KDP) enabled
  • Defend against limited physical access, data attacks: Kernel DMA protection enabled
  • Protect identities and secrets from external threats: Credential Guard enabled

While some of these features have previously existed, customers had the burden of (1) choosing the right hardware profile that supported all of these features and (2) enabling these features on their devices. With Secured-core PCs, these hardware-backed security features are assured to work on the hardware and are enabled by default.

Advanced security features: Secure device risk, anti-tampering, driver control, firmware control, supply-chain interdiction, and more

The hardware-backed security features that are enabled by default, along with a combination of Secured-core services, seamlessly integrate with Microsoft Defender ATP, lighting up additional security scenarios and providing unified protection against the entire attack chain.

In this blog, we showcase how Secured-core PC features deliver strong driver controls that protect against threats that use vulnerable drivers to elevate privilege, using the RobbinHood ransomware as an example.

Case study: Secured-core PCs vs. RobbinHood ransomware

RobbinHood ransomware is distributed as a packed executable that contains multiple binaries. One of these files is a Gigabyte driver (GDRV.sys), which has a vulnerability that could allow elevation of privilege, enabling an adversary to gain kernel privileges. In RobbinHood campaigns, adversaries use these kernel privileges to disable kernel-mode code signing to facilitate the loading of an unsigned driver. The unsigned malicious driver is then used to disable security products from the kernel.

RobbinHood is not an isolated threat that leverages a vulnerable driver to achieve elevation of privilege. In the last two years, the Microsoft Defender ATP Research Team has seen a rise in the use of vulnerable drivers by adversaries, ranging from commodity malware to nation-state-level attacks. In addition to vulnerable drivers, there are also drivers that are vulnerable by design (also referred to as "wormhole drivers"), which can break the security promise of the platform by opening up direct access to arbitrary kernel-level memory reads/writes and MSRs.

In our research, we identified over 50 vendors that have published many such wormhole drivers. We actively work with these vendors to determine an action plan for remediating these drivers. To further help customers identify these drivers and take necessary measures, we built an automated way to block vulnerable drivers, updated through Windows Update. Customers can also manage their own blocklist, as outlined in the sections below.

Preventive defenses

Two of the security promises of Secured-core PCs are directly applicable to preventing RobbinHood attacks:

  • Defending against vulnerable and malicious drivers
  • Defending against unverified code execution

Defending against vulnerable and malicious drivers

Secured-core PCs are the latest hardware to provide driver control out of the box, with a baseline configuration already set. Driver control is provided by a combination of HVCI and Windows Defender Application Control (WDAC) technologies.

Every driver loaded into the kernel is verified by HVCI before it’s allowed to run. HVCI runs in a hardware-protected execution environment isolated from the kernel space and cannot be tampered with by other code running in the kernel, including drivers.

Driver control uses HVCI & WDAC technologies to perform the following operations:

  1. Validity and memory integrity enforcement at load-time and runtime

HVCI uses hardware-based virtualization and the hypervisor (the same hypervisor also used in Azure) to protect Windows kernel mode processes from injection and execution of malicious or unverified code. The integrity of code that runs in the Windows kernel is validated by HVCI according to the kernel signing policy applied to the device. Additionally, kernel memory pages are never simultaneously writable and executable. This makes Secured-core PCs highly resistant to malicious software attempting to gain code execution in the kernel.

In the case of GDRV.sys, the driver used by the RobbinHood malware, even if the vulnerable driver is successfully loaded and then exploited, the runtime memory integrity check protects the critical components. Thus, an attack attempting to change ci!g_CiOptions and nt!g_CiEnabled would be ineffective, as the kernel ignores changes to these variables coming from the general kernel space. And, as code integrity is enabled by default, the malicious driver RBNL.sys wouldn't load.

An event log from a Secured-core PC shows the runtime memory integrity check preventing the CI options from being tampered with by RobbinHood and, subsequently, preventing the malicious driver RBNL.sys from being loaded.

Because runtime memory integrity check is enabled by default on Secured-core PCs, RobbinHood wouldn’t be able to disable code integrity on these machines.

  2. Blocklist check

While the most ideal scenario is for enterprises to set customer-specific allow lists, doing so can be a complex undertaking. To help customers, HVCI uses a blocklist of drivers that are blocked from loading. This blocklist is supplied in two ways:

    • Microsoft-supplied blocklist

Microsoft threat research teams continuously monitor the threat ecosystem and update the list of drivers in the Microsoft-supplied blocklist. This blocklist is pushed down to devices via Windows Update.

We've heard from customers that they'd like to propose drivers for the generic Microsoft-supplied blocklist. We're working on a new feature that allows customers to submit drivers they'd like us to review and add to the Microsoft-supplied blocklist.

    • Customer-specific blocklist

We recognize that there are situations where customers want a blocklist specific to their organization. By default, any validly signed driver is accepted, but customers can choose to reduce the set of accepted drivers to only WHQL-signed drivers. These are drivers that are submitted to Microsoft for signing and are run through a number of tests before being signed.

Devices can apply a custom code integrity policy that customers can use to define their own specific blocklist. This article has more information on how to create such a customer-specific blocklist. Below is an example of a customer-specific blocklist rule that blocks the vulnerable driver GDRV.sys.
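
The original example is shown as an image in the source post; as a stand-in, the fragment below sketches what such a WDAC deny rule can look like. The rule ID and FriendlyName are placeholders, a complete policy would also need to reference this rule from its signing scenarios, and the linked article covers full authoring and deployment steps.

<!-- Illustrative WDAC policy fragment (placeholders, not a complete policy). -->
<FileRules>
  <Deny ID="ID_DENY_GDRV" FriendlyName="gdrv.sys vulnerable driver"
        FileName="gdrv.sys" />
</FileRules>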

Defending against unverified code execution and kernel data corruption attacks

Several unverified code execution mitigations are built into Windows. These are readily available on Secured-core PCs.

The RobbinHood attack utilized the vulnerable GDRV.sys driver to change a crucial variable within the system memory. Although HVCI already protects against the attack on g_CiOptions, other areas of memory may still be susceptible, and we need broader defense against kernel data corruption attacks.

In addition to existing mitigations, Windows is introducing a new feature called Kernel Data Protection (KDP), which gives driver developers and software running in the Windows kernel (and the OS code itself) the ability to mark some kernel memory containing sensitive information as read-only. The memory is protected through the second level address translation (SLAT) tables by the hypervisor, such that no software running in VTL0 can modify the protected memory. KDP does not protect executable pages, as those are already protected with HVCI.

Many kernel components have data that is set only once during boot and remains unchanged for the rest of the boot cycle. The first release of KDP protects the static data sections of a driver. In the future, we’re also planning to provide APIs to dynamically allocate and release protected initialized pool memory.

Secured-core PCs have KDP enabled by default.

Detection defenses

As observed in RobbinHood attacks, once the threat gains kernel-level privilege, it turns off system defenses, including the endpoint protection agent. Secured-core PCs provide a monitoring agent that utilizes virtualization-based security and runs in this protected environment.

The monitoring agent performs several functions. The ones relevant for this case study are:

  • Secure anti-tampering for security agents
  • Secure monitoring of Windows

Secure anti-tampering for security agents

This monitoring agent watches for attempts to tamper with the security agents. For Microsoft Defender ATP customers, these are integrated into alerts that are surfaced in Microsoft Defender Security Center.

Secure monitoring of Windows

The agent also monitors several areas of Windows, including checking for kernel exploit behaviors that are often used to elevate privileges. In this particular case, the monitoring agent detected a token tampering assertion.

Secured-core PCs have both VBS and this secure monitoring agent turned on by default.


As this case study demonstrates, more and more threats are becoming so advanced that they can bypass software-only defenses. Secured-core PCs are protected from RobbinHood and similar threats by default.

Customers can also get similar protection on traditional devices, as long as those devices have the necessary hardware and are configured correctly. Specifically, the following features need to be enabled: Secure Boot, HVCI (enables VBS), KDP (automatically turned on when VBS is on), Kernel DMA protection (Thunderbolt only), and Windows Defender System Guard.

With Secured-core PCs, however, customers get a seamless chip-to-cloud security pattern that starts from a strong hardware root of trust and works with cloud services and Microsoft Defender ATP to aggregate and normalize alerts from hardware elements, providing end-to-end endpoint security.

Overall, improved endpoint protection accrues to the broader Microsoft Threat Protection, which combines and orchestrates into a single solution the capabilities of Microsoft Defender ATP, Office 365 ATP, Azure ATP, and Microsoft Cloud App Security to provide comprehensive, cross-domain protection for endpoints, email and data, identities, and apps.

