Archive for the ‘SQL Server’ Category

Virtualization-based security (VBS) memory enclaves: Data protection through isolation

The escalating sophistication of cyberattacks is marked by the increased use of kernel-level exploits that attempt to run malware with the highest privileges and evade security solutions and software sandboxes. Kernel exploits famously gave the WannaCry and Petya ransomware remote code execution capability, resulting in widespread global outbreaks.

Windows 10 remained resilient to these attacks, with Microsoft constantly raising the bar in platform security to stay ahead of threat actors. Virtualization-based security (VBS) hardens Windows 10 against attacks by using the Windows hypervisor to create an environment that isolates secure regions of memory known as secure memory enclaves.

Figure 1. VBS secure memory enclaves

An enclave is an isolated region of memory within the address space of a user-mode process. This region of memory is controlled entirely by the Windows hypervisor. The hypervisor creates a logical separation between the normal world and the secure world, designated by Virtual Trust Levels VTL0 and VTL1, respectively. VBS secure memory enclaves create a means for secure, attestable computation in an otherwise untrusted environment.
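To make this concrete, here is a minimal sketch of how a host process creates a VBS enclave through the Win32 enclave API (enclaveapi.h). The image name MyEnclave.dll is a hypothetical placeholder, and error handling is trimmed for brevity.

    // Minimal sketch: creating a VBS enclave from a normal (VTL0) process.
    // "MyEnclave.dll" is a hypothetical, properly signed enclave image.
    #include <windows.h>
    #include <enclaveapi.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        if (!IsEnclaveTypeSupported(ENCLAVE_TYPE_VBS)) {
            printf("VBS enclaves are not supported on this machine.\n");
            return 1;
        }

        // Describe the enclave; OwnerID identifies the enclave's owner.
        ENCLAVE_CREATE_INFO_VBS createInfo = {};
        createInfo.Flags = 0;  // no debug flag: memory isolation stays in force
        memset(createInfo.OwnerID, 0xA5, sizeof(createInfo.OwnerID));

        // Reserve the enclave's region inside this process's address space.
        PVOID enclave = CreateEnclave(GetCurrentProcess(),
                                      nullptr,        // let the system choose a base
                                      0x10000000,     // 256 MB enclave size
                                      0,              // initial commitment
                                      ENCLAVE_TYPE_VBS,
                                      &createInfo, sizeof(createInfo),
                                      nullptr);
        if (!enclave) return 1;

        // Load the signed enclave image; its code and data will live in VTL1.
        if (!LoadEnclaveImageW(enclave, L"MyEnclave.dll")) return 1;

        // Seal the enclave; from here on VTL0 can no longer read its memory.
        ENCLAVE_INIT_INFO_VBS initInfo = {};
        initInfo.Length = sizeof(initInfo);
        initInfo.ThreadCount = 1;
        if (!InitializeEnclave(GetCurrentProcess(), enclave,
                               &initInfo, sizeof(initInfo), nullptr)) return 1;

        printf("VBS enclave created at %p\n", enclave);
        return 0;
    }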

VBS enclaves in Microsoft SQL Server

A key product that will leverage VBS secure memory enclaves is Microsoft SQL Server. The upcoming SQL Server secure enclave feature ensures that sensitive data stored in a SQL Server database is only decrypted and processed inside an enclave. SQL Server's use of secure enclaves allows the processing of sensitive data without exposing the data to database administrators or malware. This reduces the risk of unauthorized access and achieves separation between those who own the data (and can view it) and those who manage the data (but should have no access). To learn more about the use of secure enclaves in SQL Server, see the blog post Enabling confidential computing with Always Encrypted using enclaves.
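As a rough illustration of what this looks like from a client application, the sketch below opens a connection with Always Encrypted turned on through the ODBC Driver 17 for SQL Server. The server name, database, and attestation URL are placeholders, and the exact ColumnEncryption syntax for enclave attestation should be checked against the driver documentation.

    // Sketch: a client connection with Always Encrypted enabled (ODBC).
    // Server, database, and attestation URL below are placeholders.
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    int main() {
        SQLHENV env;
        SQLHDBC dbc;
        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        // ColumnEncryption=Enabled switches Always Encrypted on; with secure
        // enclaves the value instead names an attestation protocol and URL.
        SQLWCHAR connStr[] =
            L"Driver={ODBC Driver 17 for SQL Server};Server=myserver;"
            L"Database=mydb;Trusted_Connection=yes;"
            L"ColumnEncryption=VBS-HGS,http://myhgs/Attestation;";

        SQLRETURN rc = SQLDriverConnectW(dbc, nullptr, connStr, SQL_NTS,
                                         nullptr, 0, nullptr,
                                         SQL_DRIVER_NOPROMPT);
        // With the connection attested, queries against encrypted columns can
        // be evaluated server-side, inside the enclave, without exposing
        // plaintext to the DBA or to the rest of the server process.
        return SQL_SUCCEEDED(rc) ? 0 : 1;
    }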

Data protection

One of the major benefits of secure memory enclaves is data protection. Data resident in an enclave is only accessible by code running inside that enclave, because there is a security boundary between VTL0 and VTL1. If a process tries to read memory that is within the secure memory enclave, an invalid access exception is thrown. This happens even when a kernel-mode debugger is attached to the normal process: the debugger will fail when trying to step into the enclave.
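Here is a small sketch of that behavior, assuming enclaveBase points at an initialized VBS enclave as in the earlier example:

    // Sketch: a VTL0 read of enclave memory faults, even though the address
    // lies inside this process's own address space.
    #include <windows.h>
    #include <cstdio>

    void TryReadEnclaveMemory(void* enclaveBase) {
        __try {
            volatile BYTE b = *(volatile BYTE*)enclaveBase;  // VTL1-owned memory
            printf("Read succeeded: %u\n", b);  // never reached for enclave memory
        }
        __except (EXCEPTION_EXECUTE_HANDLER) {
            // The hypervisor denies VTL0 access, surfacing an access violation.
            printf("Access violation: enclave memory is isolated from VTL0.\n");
        }
    }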

Code integrity

Code integrity is another major benefit provided by enclaves. Code loaded into an enclave must be securely signed, so guarantees can be made about the integrity of code running within a secure memory enclave. The code running inside an enclave is heavily restricted, but a secure memory enclave can still perform meaningful work. This includes performing computations on data that is encrypted outside the enclave but can be decrypted and evaluated in plaintext inside the enclave, without exposing the plaintext to anything other than the enclave itself. A great example of why this is useful in a multi-tenant cloud computing scenario is described in the Azure confidential computing blog post.
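The sketch below shows roughly how a host hands encrypted data to the enclave for evaluation. DecryptAndEvaluate is a hypothetical routine exported by the enclave image, not a real API.

    // Sketch: calling into the enclave from VTL0. "DecryptAndEvaluate" is a
    // hypothetical export of the enclave image; ciphertext goes in, and only
    // the computed result comes back out.
    #include <windows.h>
    #include <enclaveapi.h>

    bool EvaluateInsideEnclave(void* enclaveBase, void* ciphertext) {
        // Enclave exports are resolved like ordinary DLL exports.
        PENCLAVE_ROUTINE routine = (PENCLAVE_ROUTINE)
            GetProcAddress((HMODULE)enclaveBase, "DecryptAndEvaluate");
        if (!routine) return false;

        void* result = nullptr;
        // The hypervisor mediates the transition into VTL1; the plaintext
        // produced by decryption never leaves the enclave.
        return CallEnclave(routine, ciphertext, TRUE /*wait*/, &result) != FALSE;
    }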

Attestation

Attestation is also a critical aspect of secure memory enclaves. Sensitive information, such as plaintext data or encryption keys, must only be sent to an enclave that is both the intended recipient and trustworthy. VBS enclaves can be put into debug mode for testing, but doing so removes memory isolation; this is convenient during development, but in production it would undermine the security guarantees of the enclave. To ensure that a production secure enclave is never in debug mode, an attestation report is generated that states what mode the enclave is in (among various other configuration and identity parameters). This report is then verified through a trust relationship between the consumer and producer of the report.

To establish this trust, VBS enclaves can expose an enclave attestation report that is fully signed by the VBS-unique key. This can prove the relationship between the enclave and host, as well as the exact configuration of the enclave. This attestation report can be used to establish a secure channel of communication between two enclaves. In Windows this is possible simply by exchanging the report. For remote scenarios, an attestation service can use this report to establish a trust relationship between a remote enclave and a client application.
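Inside the enclave, producing such a report is a single call. The sketch below assumes it runs in VTL1 and that the caller supplies a 64-byte nonce to bind the report to a particular exchange.

    // Sketch: code running inside the enclave (VTL1) producing an attestation
    // report. The report is signed with the VBS-unique key and records the
    // enclave's identity and configuration, including whether debug mode is on.
    #include <windows.h>
    #include <winenclaveapi.h>

    HRESULT GetReport(const UINT8 nonce[ENCLAVE_REPORT_DATA_LENGTH],
                      void* buffer, UINT32 bufferSize, UINT32* written) {
        // The verifier checks the signature and the debug/configuration fields
        // before trusting the enclave with keys or plaintext.
        return EnclaveGetAttestationReport(nonce, buffer, bufferSize, written);
    }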

One feature that relies on secure memory enclave attestation is Windows Defender System Guard runtime attestation, which allows users to measure and attest to all interactions from the enclave to other capabilities, including areas of runtime and boot integrity.

Figure 2. Windows Defender System Guard runtime attestation

Elevating data security

There are many secure memory enclave technologies in the industry today, each with its own pros and cons. The benefit of using a VBS secure memory enclave is that there are no special hardware requirements beyond a processor that supports hypervisor virtualization extensions, such as Intel VT-x or AMD-V.

Additionally, VBS enclaves do not have the same memory constraints as hardware-based enclaves, which are usually quite limited in size.

VBS secure memory enclaves provide hardware-rooted, virtualization-based data protection and code integrity. They are already being leveraged for new data security capabilities, as demonstrated by Azure confidential computing and the Always Encrypted feature of Microsoft SQL Server. These are examples of the rapid innovation happening all throughout Microsoft to elevate security. This isn't the last you'll hear of secure memory enclaves. As Microsoft security technologies continue to advance, we can expect secure memory enclaves to stand out in many more protection scenarios.


Maxwell Renke, Program Manager, Windows

Chris Riggs, Principal Program Manager, Microsoft Offensive Security Research


BI solution on SQL Server earns Boxer loyal customers

Competition in the TV market is constantly increasing. In the past two years alone, it has been intensified by new film and streaming services on the internet. Winning new customers is important, but keeping the ones you already have is perhaps even more so. To meet the competition, Boxer TV-Access is improving its customer communication with the help of artificial intelligence and a business intelligence solution built by Random Forest on Microsoft SQL Server Enterprise 2012.

– This solution has been a great success within the company. It helps us bring the right offers to the right customers at the right time. As a result, we have become significantly better at retaining our customers, says Martin Carlsson, CIO at Boxer.

The technique of applying artificial intelligence to business information is called data mining. By statistically combining and analyzing different types of customer information, the probability of various behaviors can be calculated. The calculation can, for example, draw on where a customer lives, what subscription the customer has, and how the customer has behaved and is behaving right now, combined with historical churn statistics.

A solution of this type learns automatically and improves itself over time, and it can often be built on a platform that most companies already have.

– Random Forest has been creative and innovative in building Boxer's new BI solution, making use of predictive analytics. That is the kind of thing that really makes a difference for customers, says Tommy Flink, marketing manager for business intelligence at Microsoft Sweden.

– We decided on a Microsoft environment long ago, so it was natural for us to use Microsoft products when we built this solution, says Martin Carlsson.

– For companies that already have access to a BI solution based on Microsoft's tools, this is considerably simpler and much less costly than using external analytics tools. The technology needed to build a solution like Boxer's is already in place, says Gustav Rengby, head of customer analytics at Random Forest, the consulting firm helping Boxer develop the solution.

Image: Tommy Flink, marketing manager for business intelligence at Microsoft Sweden.

Categories: decision support, BI, Boxer, Random Forest, SQL Server

FEP data collection job fails periodically

January 24th, 2011

We wanted to update you about an issue with FEP that you may have seen in your organization. This is a known issue, and we’ll keep you up to date with developments.

Symptoms:

Periodically, the FEP data collection job (FEP_GetNewData_FEPDW_xyz) fails. When the job fails, the FEP Health Management Pack for Operations Manager and the FEP BPA report an error indicating that the FEP data warehouse job is either failing or not running. The failure occurs in one of the following job steps:

  • Step 6: End raise error section on DW, raise errors that were thrown from DW DB
  • Step 7: ssisFEP_GetErrorsDuringUpload_FEPDW_xyz

Cause:

This happens because of the following scenario (illustrated by the sketch after the list):

  1. The antimalware client occasionally sends a malformed malware detection data item to the FEP server.
  2. The server tries to process this data item as part of the data collection job (FEP_GetNewData_FEPDW_xyz).
  3. During data item processing, the job sees that this data item is malformed and ignores it.
  4. After processing completes, the data collection job (FEP_GetNewData_FEPDW_xyz) looks to see if any data items were malformed, and if so, it fails the job.
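The following sketch, an illustration of the pattern rather than FEP's actual code, shows why the job ends up failing even though the malformed items are merely skipped:

    // Illustration only (not FEP's actual code): malformed items are skipped
    // during the upload, but the errors recorded along the way are re-raised
    // at the end, which marks the whole job as failed.
    #include <vector>
    #include <stdexcept>

    struct DataItem { bool wellFormed; /* ... detection payload ... */ };

    void RunDataCollection(const std::vector<DataItem>& items) {
        int malformed = 0;
        for (const auto& item : items) {
            if (!item.wellFormed) { ++malformed; continue; }  // skip, don't abort
            // ... upload the well-formed item into the data warehouse ...
        }
        // The equivalent of job steps 6 and 7: raise errors thrown during upload.
        if (malformed > 0)
            throw std::runtime_error("malformed data items were skipped");
    }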

Impact:

  • Malformed data items are lost (they don't get processed); all properly formed data items are processed.
  • You may experience a small performance impact during the data collection job (FEP_GetNewData_FEPDW_xyz) due to the handling of malformed data items.
  • The data collection job (FEP_GetNewData_FEPDW_xyz) appears as failed in the job history.
  • If the SQL Server Monitoring Management Pack is installed on your Operations Manager server, the data collection job (FEP_GetNewData_FEPDW_xyz) appears with an error.
  • If the Forefront Endpoint Protection Server Health Monitoring Management Pack is installed on your Operations Manager server, the FEP deployment appears as critical and an alert is issued.

FEP Capacity Planning Worksheet

January 19th, 2011

Greetings!

Attached to this blog post is the FEP Datawarehouse Space Capacity Planning worksheet. You can use this worksheet to help estimate the amount of disk space needed based on the following values:

  • Number of client computers in your FEP 2010 deployment
  • The number of days to retain data (the retention period)
  • The average number of Configuration Manager collections to which each client computer belongs
  • The average number of detections per client computer, per day

After you enter your values in the yellow area, the calculated results appear in the next set of rows. Each row contains information about average record sizes, the number of records per computer per day, the total size of that record type in the database, and the percentage of the total space used by that record type.

The final row in the spreadsheet, in green, gives you the total estimated size of the FEP Datawarehouse, given the values you supplied.
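To see the arithmetic spelled out, here is a rough sketch that approximates the worksheet's structure. Every record size and per-day rate below is a made-up placeholder, so substitute the averages from the worksheet itself.

    // Rough sketch of the worksheet's arithmetic. The per-record sizes and
    // records-per-day figures below are hypothetical placeholders; use the
    // averages from the actual worksheet for a real estimate.
    #include <cstdio>

    int main() {
        // Inputs (the yellow cells in the worksheet).
        const double clients       = 10000;  // client computers in the FEP deployment
        const double retentionDays = 30;     // data retention period, in days
        const double collections   = 5;      // avg. ConfigMgr collections per client
        const double detectionsDay = 0.1;    // avg. detections per client per day

        // Hypothetical average record sizes in bytes (placeholders only).
        const double healthRecordBytes    = 512;   // one state record per client per day
        const double detectionRecordBytes = 1024;  // one record per detection
        const double collectionRowBytes   = 128;   // one membership row per collection

        // Space that grows with the retention period.
        double dailyBytes =
            clients * (healthRecordBytes + detectionsDay * detectionRecordBytes);
        // Space that stays roughly constant (one membership row per client).
        double staticBytes = clients * collections * collectionRowBytes;

        double totalGB = (dailyBytes * retentionDays + staticBytes)
                         / (1024.0 * 1024.0 * 1024.0);
        printf("Estimated FEP data warehouse size: %.2f GB\n", totalGB);
        return 0;
    }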

Enjoy!