
Archive for the ‘security’ Category

Microsoft Joins Open Source Security Foundation

August 3rd, 2020

Microsoft has invested in the security of open source software for many years and today I’m excited to share that Microsoft is joining industry partners to create the Open Source Security Foundation (OpenSSF), a new cross-industry collaboration hosted at the Linux Foundation. The OpenSSF brings together work from the Linux Foundation-initiated Core Infrastructure Initiative (CII), …


The post Microsoft Joins Open Source Security Foundation appeared first on Microsoft Security Response Center.

Categories: Linux, Open Source, OpenSSF, security

Updates to the Windows Insider Preview Bounty Program

July 24th, 2020

Partnering with the research community is an important part of Microsoft’s holistic approach to defending against security threats. Bounty programs are one part of this partnership, designed to encourage and reward vulnerability research focused on the highest impact to customer security. The Windows Insider Preview (WIP) Bounty Program is a key program for Microsoft and …


The post Updates to the Windows Insider Preview Bounty Program appeared first on Microsoft Security Response Center.

Seeing the big picture: Deep learning-based fusion of behavior signals for threat detection

July 23rd, 2020

The application of deep learning and other machine learning methods to threat detection on endpoints, email and docs, apps, and identities drives a significant piece of the coordinated defense delivered by Microsoft Threat Protection. Within each domain as well as across domains, machine learning plays a critical role in analyzing and correlating massive amounts of data to detect increasingly evasive threats and build a complete picture of attacks.

On endpoints, Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) detects malware and malicious activities using various types of signals that span endpoint and network behaviors. Signals are aggregated and processed by heuristics and machine learning models in the cloud. In many cases, the detection of a particular type of behavior, such as registry modification or a PowerShell command, by a single heuristic or machine learning model is sufficient to create an alert.

Detecting more sophisticated threats and malicious behaviors considers a broader view and is significantly enhanced by fusion of signals occurring at different times. For example, an isolated event of file creation is generally not a very good indication of malicious activity, but when augmented with an observation that a scheduled task is created with the same dropped file, and combined with other signals, the file creation event becomes a significant indicator of malicious activity. To build a layer for these kinds of abstractions, Microsoft researchers instrumented new types of signals that aggregate individual signals and create behavior-based detections that can expose more advanced malicious behavior.

In this blog, we describe an application of deep learning, a category of machine learning algorithms, to the fusion of various behavior detections into a decision-making model. Since its deployment, this deep learning model has contributed to the detection of many sophisticated attacks and malware campaigns. As an example, the model uncovered a new variant of the Bondat worm that attempts to turn affected machines into zombies for a botnet. Bondat is known for using its network of zombie machines to hack websites or even perform cryptocurrency mining. This new version spreads using USB devices and then, once on a machine, achieves fileless persistence. We share more technical details about this attack in later sections, but first we describe the detection technology that caught it.

Powerful, high-precision classification model for wide-ranging data

Identifying and detecting malicious activities within massive amounts of data processed by Microsoft Defender ATP require smart automation methods and AI. Machine learning classifiers digest large volumes of historical data and apply automatically extracted insights to score each new data point as malicious or benign. Machine learning-based models may look at, for example, registry activity and produce a probability score, which indicates the probability of the registry write being associated with malicious activity. To tie everything together, behaviors are structured into virtual process trees, and all signals associated with each process tree are aggregated and used for detecting malicious activity.

With virtual process trees and signals of different types associated with these trees, there are still large amounts of data and noisy signals to sift through. Since each signal occurs in the context of a process tree, it's necessary to fuse these signals in the chronological order of execution within the process tree. Data ordered this way requires a powerful model to classify malicious vs. benign trees.

Our solution comprises several deep learning building blocks such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN). The neural network can take behavior signals that occur chronologically in the process tree and treat each batch of signals as a sequence of events. These sequences can be collected and classified by the neural network with high precision and detection coverage.

Behavior-based and machine learning-based signals

Microsoft Defender ATP researchers instrument a wide range of behavior-based signals. For example, a signal can be for creating an entry in the following registry key:

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run

A folder and executable file name added to this location runs automatically after the machine starts. This establishes persistence on the machine and hence can be considered an indicator of compromise (IoC). Nevertheless, this IoC alone is generally not enough to generate a detection because legitimate programs also use this mechanism.
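To make this signal concrete, here is a minimal sketch (illustrative only, not part of Microsoft Defender ATP) of how a defender might enumerate the autorun entries under this Run key using Python's standard winreg module on a Windows machine. Each entry it prints is a candidate persistence mechanism to correlate with other signals, not an alert by itself.

# Minimal sketch: list autorun entries under the HKLM Run key (Windows only).
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    entries = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
        value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
        for i in range(value_count):
            name, command, _type = winreg.EnumValue(key, i)
            entries.append((name, command))
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        # Most entries are legitimate; each one is a single behavior signal among many.
        print(f"{name}: {command}")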

Another example of a behavior-based signal is service start activity. A program that starts a service through the command line using legitimate tools like net.exe is not, by itself, considered suspicious. However, starting a service that was created earlier by the same process tree to obtain persistence is an IoC.

On the other hand, machine learning-based models look at and produce signals on different pivots of a possible attack vector. For example, a machine learning model trained on historical data to discern between benign and malicious command lines will produce a score for each processed command line.

Consider the following command line:

 cmd /c taskkill /f /im someprocess.exe

This line indicates that taskkill.exe is invoked by cmd.exe to terminate a process with a particular name. While the command itself is not necessarily malicious, the machine learning model may be able to recognize suspicious patterns in the name of the process being terminated and provide a maliciousness probability, which is aggregated with other signals in the process tree. The result is a sequence of events during a certain period of time for each virtual process tree.
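As an illustration of the kind of command-line scoring model described here (a toy stand-in, not Microsoft's production classifier), the sketch below trains a character n-gram logistic regression with scikit-learn on a handful of invented, labeled command lines and scores a new one.

# Toy command-line scorer using scikit-learn; training data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

command_lines = [
    "cmd /c taskkill /f /im securityservice.exe",   # terminating a security product: suspicious
    "cmd /c taskkill /f /im notepad.exe",           # routine housekeeping: benign
    "powershell -nop -w hidden -enc SQBFAFgA",      # hidden, encoded PowerShell: suspicious
    "ipconfig /all",                                # benign diagnostics
]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign

# Character n-grams capture suspicious substrings regardless of token boundaries.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(command_lines, labels)

# The resulting probability becomes one signal among many aggregated over the virtual process tree.
score = model.predict_proba(["cmd /c taskkill /f /im someprocess.exe"])[0, 1]
print(f"maliciousness probability: {score:.2f}")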

The next step is to use a machine learning model to classify this sequence of events.

Data modeling

The sequences of events described in the previous sections can be represented in several different ways to then be fed into machine learning models.

The first and simplest way is to construct a "dictionary" of all possible events and to assign a unique identifier (index) to each event in the dictionary. This way, a sequence of events is represented by a vector, where each slot holds the number of occurrences (or another related measure) of an event type in the sequence.

For example, if all possible events in the system are X, Y, and Z, a sequence of events "X,Z,X,X" is represented by the vector [3, 0, 1], implying that it contains three events of type X, no events of type Y, and a single event of type Z. This representation scheme, widely known as "bag-of-words", is suitable for traditional machine learning models and has long been used by machine learning practitioners. A limitation of the bag-of-words representation is that any information about the order of events in the sequence is lost.

The second representation scheme is chronological. Figure 1 shows a typical process tree: Process A raises an event X at time t1, Process B raises an event Z at time t2, D raises X at time t3, and E raises X at time t4. Now the entire sequence "X,Z,X,X" (or [1, 3, 1, 1], replacing events with their dictionary indices) is given to the machine learning model.

Diagram showing process tree

Figure 1. Sample process tree
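A small sketch showing the two encodings of the example sequence above (illustrative only):

# Two ways to encode the event sequence "X,Z,X,X" from the example above.
event_index = {"X": 1, "Y": 2, "Z": 3}   # the "dictionary" of all known event types
sequence = ["X", "Z", "X", "X"]

# 1) Bag-of-words: count occurrences per event type; ordering information is lost.
bag_of_words = [sequence.count(e) for e in sorted(event_index, key=event_index.get)]
print(bag_of_words)   # [3, 0, 1]

# 2) Chronological: map each event to its index, preserving order for sequence models.
ordered = [event_index[e] for e in sequence]
print(ordered)        # [1, 3, 1, 1]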

In threat detection, the order of occurrence of different events is important information for the accurate detection of malicious activity. Therefore, it’s desirable to employ a representation scheme that preserves the order of events, as well as machine learning models that are capable of consuming such ordered data. This capability can be found in the deep learning models described in the next section.

Deep CNN-BiLSTM

Deep learning has shown great promise in sequential tasks in natural language processing like sentiment analysis and speech recognition. Microsoft Defender ATP uses deep learning for detecting various attacker techniques, including malicious PowerShell.

For the classification of signal sequences, we use a Deep Neural Network that combines two types of building blocks (layers): Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory Recurrent Neural Networks (BiLSTM-RNN).

CNNs are used in many tasks relating to spatial inputs such as images, audio, and natural language. A key property of CNNs is the ability to compress a wide-field view of the input into high-level features.  When using CNNs in image classification, high-level features mean parts of or entire objects that the network can recognize. In our use case, we want to model long sequences of signals within the process tree to create high-level and localized features for the next layer of the network. These features could represent sequences of signals that appear together within the data, for example, create and run a file, or save a file and create a registry entry to run the file the next time the machine starts. Features created by the CNN layers are easier to digest for the ensuing LSTM layer because of this compression and featurization.

LSTM deep learning layers are famous for results in sentence classification, translation, speech recognition, sentiment analysis, and other sequence modeling tasks. Bidirectional LSTMs combine two layers of LSTMs that process the sequence in opposite directions.

The combination of the two types of neural networks stacked one on top of the other has been shown to be very effective and can classify long sequences of hundreds of items or more. The final model is a combination of several layers: one embedding layer, two CNNs, and a single BiLSTM. The input to this model is a sequence of hundreds of integers representing the signals associated with a single process tree during a unit of time. Figure 2 shows the architecture of our model.

Diagram showing layers of the CNN BiLSTM model

Figure 2. CNN-BiLSTM model

Since the number of possible signals in the system is very high, input sequences are passed through an embedding layer that compresses high-dimensional inputs into low-dimensional vectors that can be processed by the network. In addition, similar signals get a similar vector in lower dimensional space, which helps with the final classification.

Initial layers of the network create increasingly high-level features, and the final layer performs sequence classification. The output of the final layer is a score between 0 and 1 that indicates the probability of the sequence of signals being malicious. This score is used in combination with other models to predict if the process tree is malicious.
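For readers who want a concrete picture of this general shape (embedding, two convolutional layers, one bidirectional LSTM, and a sigmoid output), here is a minimal Keras sketch. The vocabulary size, sequence length, and layer sizes are invented placeholders, not the production model's values.

# Illustrative embedding + CNN + BiLSTM sequence classifier; hyperparameters are placeholders.
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # number of distinct behavior signals (hypothetical)
SEQ_LEN = 512        # signals per process tree per unit of time (hypothetical)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    # Compress high-dimensional signal IDs into dense, low-dimensional vectors.
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64),
    # Convolutional layers build localized, higher-level features from co-occurring signals.
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # The bidirectional LSTM reads the compressed sequence in both directions.
    layers.Bidirectional(layers.LSTM(64)),
    # Final score: probability that the sequence of signals is malicious.
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

Trained on labeled process-tree sequences, a model of this shape would output the kind of 0-to-1 score described above.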

Catching real-world threats

Microsoft Defender ATP’s endpoint detection and response capabilities use this Deep CNN-BiLSTM model to catch and raise alerts on real-world threats. As mentioned, one notable attack that this model uncovered is a new variant of the Bondat worm, which was seen propagating in several organizations through USB devices.

Diagram showing the Bondat attack chain

Figure 3. Bondat malware attack chain

Even with an arguably inefficient propagation method, the malware could persist in an organization as users continue to use infected USB devices. For example, the malware was observed in hundreds of machines in one organization. Although we detected the attack during the infection period, it continued spreading until all malicious USB drives were collected. Figure 4 shows the infection timeline.

Column chart showing daily encounters of the Bondat malware in one organization

Figure 4. Timeline of encounters within a single organization over a period of five months, showing reinfection through USB devices

The attack drops a JavaScript payload, which it runs directly in memory using wscript.exe. The JavaScript payload uses a randomly generated filename as a way to evade detections. However, Antimalware Scan Interface (AMSI) exposes malicious script behaviors.

To spread via USB devices, the malware leverages WMI to query the machine's disks by calling "SELECT * FROM Win32_DiskDrive". When it finds a match for "/usb" (see Figure 5), it copies the JavaScript payload to the USB device and creates a batch file in the USB device's root folder. This batch file contains the execution command for the payload. As part of its social engineering technique to trick users into running the malware on the removable device, it also creates an LNK file on the USB device pointing to the batch file.

Screenshot of malware code showing infection technique

Figure 5. Infection technique
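For analysts who want to see what this WMI query returns on a benign host, here is a minimal sketch that runs the same Win32_DiskDrive query and flags USB-attached drives. It assumes a Windows machine with the third-party wmi package installed and does nothing beyond enumeration.

# Defensive illustration only: enumerate disk drives with the same WMI query and flag USB ones.
# Assumes Windows and the third-party 'wmi' package (pip install wmi).
import wmi

conn = wmi.WMI()
for drive in conn.query("SELECT * FROM Win32_DiskDrive"):
    # Win32_DiskDrive exposes InterfaceType ("USB", "IDE", "SCSI", ...); the malware instead
    # matches the string "/usb" in the drive metadata, as shown in Figure 5.
    if (drive.InterfaceType or "").upper() == "USB":
        print(f"Removable drive: {drive.Caption} ({drive.DeviceID})")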

The malware terminates processes related to antivirus software or debugging tools. For Microsoft Defender ATP customers, tamper protection prevents the malware from doing this. Notably, after terminating a process, the malware pops up a window that imitates a Windows error message to make it appear as if the process crashed (see Figure 6).

Screenshot of malware code showing evasion technique

Figure 6. Evasion technique

The malware communicates with a remote command-and-control (C2) server by implementing a web client (MSXML). Each request is encrypted with RC4 using a randomly generated key, which is sent within the “PHPSESSID” cookie value to allow attackers to decrypt the payload within the POST body.

Every request sends information about the machine and its state, following the output of the previously executed command. The response is saved to disk and then parsed to extract commands within an HTML comment tag. The first five characters of the payload are used as the key to decrypt the data, and the commands are executed using the eval() method. Figures 7 and 8 show the C2 communication and the HTML comment eval technique.

Once the command is parsed and evaluated by the JavaScript engine, any code can be executed on an affected machine, for example, downloading other payloads, stealing sensitive information, and exfiltrating stolen data. For this Bondat campaign, the malware runs coin mining or coordinated distributed denial of service (DDoS) attacks.

Figure 7. C2 communication

Figure 8. Eval technique (parsing commands from an HTML comment)
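To make the C2 scheme concrete, here is a minimal analyst-side sketch that decrypts a captured request body with RC4, assuming (per the description above) that the key travels in the PHPSESSID cookie; the sample key and body are invented.

# Analyst-side sketch: RC4-decrypt a captured Bondat-style POST body (sample values invented).

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA); encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

session_key = b"k3y-from-phpsessid-cookie"                      # hypothetical key from the cookie
captured_body = rc4(session_key, b"hostname=LAB-01&state=idle") # simulate an encrypted POST body
print(rc4(session_key, captured_body).decode())                 # analyst recovers the plaintext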

The malware's activities triggered several signals throughout the attack chain. The deep learning model inspected these signals and the sequence in which they occurred, and determined that the process tree was malicious, raising an alert:

  1. Persistence – The malware copies itself into the Startup folder and drops a .lnk file pointing to the malware copy that opens when the computer starts.
  2. Renaming a known operating system tool – The malware renames a known system executable to a random filename.
  3. Dropping a file with the same filename as legitimate tools – The malware impersonates legitimate system tools by dropping a file with a similar name to a known tool.
  4. Suspicious command line – The malware tries to delete itself from its previous location using a command line executed by a spawned process.
  5. Suspicious script content – Obfuscated JavaScript payload used to hide the attacker's intentions.
  6. Suspicious network communication – The malware connects to the domain legitville[.]com.

Conclusion

Modeling a process tree, given different signals that happen at different times, is a complex task. It requires powerful models that can remember long sequences and still be able to generalize well enough to churn out high-quality detections. The Deep CNN-BiLSTM model we discussed in this blog is a powerful technology that helps Microsoft Defender ATP achieve this task. Today, this deep learning-based solution contributes to Microsoft Defender ATP’s capability to detect evolving threats like Bondat.

Microsoft Defender ATP raises alerts for these deep learning-driven detections, enabling security operations teams to respond to attacks using Microsoft Defender ATP’s other capabilities, like threat and vulnerability management, attack surface reduction, next-generation protection, automated investigation and response, and Microsoft Threat Experts. Notably, these alerts inform behavioral blocking and containment capabilities, which add another layer of protection by blocking threats if they somehow manage to start running on machines.

The impact of deep learning-based protections on endpoints accrues to the broader Microsoft Threat Protection (MTP), which combines endpoint signals with threat data from email and docs, identities, and apps to provide cross-domain visibility. MTP harnesses the power of Microsoft 365 security products to deliver unparalleled coordinated defense that detects, blocks, remediates, and prevents attacks across an organization's Microsoft 365 environment. Through machine learning and AI technologies like the deep learning model we discussed in this blog, MTP automatically analyzes cross-domain data to build a complete picture of each attack, eliminating the need for security operations centers (SOC) to manually build and track the end-to-end attack chain and relevant details. MTP correlates and consolidates attack evidence into incidents, so SOCs can save time and focus on critical tasks like expanding investigations and proactive threat hunting.

 

Arie Agranonik, Shay Kels, Guy Arazi

Microsoft Defender ATP Research Team

 


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Threat Protection and Microsoft Defender ATP tech communities.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.

The post Seeing the big picture: Deep learning-based fusion of behavior signals for threat detection appeared first on Microsoft Security.

The science behind Microsoft Threat Protection: Attack modeling for finding and stopping evasive ransomware

June 10th, 2020

The linchpin of successful cyberattacks, exemplified by nation state-level attacks and human-operated ransomware, is their ability to find the path of least resistance and progressively move across a compromised network. Determining the full scope and impact of these attacks is one of the most critical, but often most challenging, parts of security operations.

To provide security teams with the visibility and solutions to fight cyberattacks, Microsoft Threat Protection (MTP) correlates threat signals across multiple domains and point solutions, including endpoints, identities, data, and applications. This comprehensive visibility allows MTP to coordinate prevention, detection, and response across your Microsoft 365 data.

One of the many ways that MTP delivers on this promise is by providing high-quality consolidation of attack evidence through the concept of incidents. Incidents combine related alerts and attack behaviors within an enterprise. An example of an incident is the consolidation of all behaviors indicating that ransomware is present on multiple machines, and the connection of lateral movement behavior with initial access via brute force. Another example can be found in the latest MITRE ATT&CK evaluation, where Microsoft Threat Protection automatically correlated 80 distinct alerts into two incidents that mirrored the two attack simulations.

The incident view helps empower defenders to quickly understand and respond to the end-to-end scope of real-world attacks. In this blog we will share details about a data-driven approach for identifying and augmenting incidents with behavioral evidence of lateral movement detected through statistical modeling. This novel approach, an intersection of data science and security expertise, is validated and leveraged by our own Microsoft Threat Experts in identifying and understanding the scope of attacks.

Identifying lateral movement

Attackers move laterally to escalate privileges or to steal information from specific machines in a compromised network. Lateral movement typically involves adversaries attempting to co-opt legitimate management and business operation capabilities, including technologies such as Server Message Block (SMB), Windows Management Instrumentation (WMI), Windows Remote Management (WinRM), and Remote Desktop Protocol (RDP). Attackers target these technologies, which have legitimate uses in maintaining the functionality of a network, because they provide ample opportunities to blend in with large volumes of expected telemetry and provide paths to their objectives. More recently, we have observed attackers performing lateral movement and then using the aforementioned WMI or SMB to deploy ransomware or data-wiping malware to multiple target machines in the network.

A recent attack from the PARINACOTA group, known for human-operated attacks that deploy the Wadhrama ransomware, is notable for its use of multiple methods for lateral movement. After gaining initial access to an internet-facing server via RDP brute force, the attackers searched for additional vulnerable machines in the network by scanning on ports 3389 (RDP), 445 (SMB), and 22 (SSH).

The adversaries downloaded and used Hydra to brute force targets via SMB and SSH. In addition, they used credentials that they stole through credential dumping using Mimikatz to sign into multiple other server machines via Remote Desktop. On all additional machines they were able to access, the attackers performed mainly the same activities, dumping credentials and searching for valuable information.

Notably, the attackers were particularly interested in a server that did not have Remote Desktop enabled. They used WMI in conjunction with PsExec to allow remote desktop connections on the server and then used netsh to disable blocking on port 3389 in the firewall. This allowed the attackers to connect to the server via RDP.

They eventually used this server to deploy ransomware to a huge portion of the organization’s server machine infrastructure. The attack, an example of a human-operated ransomware campaign, crippled much of the organization’s functionality, demonstrating that detecting and mitigating lateral movement is critical.

PARINACOTA ransomware attack chain

Figure 1. PARINACOTA attack with multiple lateral movement methods

A probabilistic approach for inferring lateral movement

Automatically correlating alerts and evidence of lateral movement into distinct incidents requires understanding the full scope of an attack and establishing the links of an attacker’s activities that show movement across a network. Distinguishing malicious attacker activities among the noise of legitimate logons in complex networks can be challenging and time-consuming. Failing to get an aggregated view of all related alerts, assets, investigations, and evidence may limit the action that defenders take to mitigate and fully resolve an attack.

Microsoft Threat Protection uses its unique cross-domain visibility and built-in automation to detect lateral movement. The data-driven approach to detecting lateral movement involves understanding and statistically quantifying behaviors that are observed to be part of one attack chain, for example, credential theft followed by remote connections to other devices and further unexpected or malicious activity.

Dynamic probability models, which are capable of self-learning over time using new information, quantify the likelihood of observing lateral movement given relevant signals. These signals can include the frequency of network connections between endpoints over certain ports, suspicious dropped files, and types of processes that are executed on endpoints. Multiple behavioral models encode different facets of an attack chain by correlating specific behaviors associated with attacks. These models, in combination with anomaly detection, drive the discovery of both known and unknown attacks.
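As a stylized illustration of how such probability models can combine signals (all numbers are invented and the production models are far richer), the sketch below multiplies per-signal likelihood ratios in a naive Bayes fashion to update the odds that a pair of devices is linked by lateral movement.

# Stylized illustration: combine per-signal likelihood ratios into a lateral movement probability.
# All values below are invented for illustration.
import math

# P(signal | lateral movement) / P(signal | benign) for a few hypothetical signals.
LIKELIHOOD_RATIOS = {
    "rare_smb_connection_between_devices": 4.0,
    "credential_theft_alert_on_source": 30.0,
    "suspicious_dropped_file_on_target": 12.0,
    "scheduled_task_created_remotely": 8.0,
}

def lateral_movement_probability(observed_signals, prior=1e-4):
    """Naive Bayes style update: multiply prior odds by each observed signal's likelihood ratio."""
    log_odds = math.log(prior / (1 - prior))
    for signal in observed_signals:
        log_odds += math.log(LIKELIHOOD_RATIOS.get(signal, 1.0))
    return 1 / (1 + math.exp(-log_odds))

observed = ["rare_smb_connection_between_devices",
            "credential_theft_alert_on_source",
            "suspicious_dropped_file_on_target"]
print(f"P(lateral movement) = {lateral_movement_probability(observed):.3f}")

A dynamic model would additionally revise these ratios over time as new labeled examples and expert feedback arrive.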

Evidence of lateral movement can be modeled using a graph-based approach, which involves constructing appropriate nodes and edges in the right timeline. Figure 2 depicts a graphical representation of how an attacker might laterally move through a network. The objective of graphing an attack is to discover related subgraphs with high enough confidence to surface for immediate further investigation. Building behavioral models that can accurately compute probabilities of attacks is key to ensuring that confidence is correctly measured and all related events are combined.

Visualization of network with an attacker moving laterally

Figure 2. Visualization of network with an attacker moving laterally (combining incidents 1, 2, 4, 5)

Figure 3 outlines the steps involved for modeling lateral movement and encoding behaviors that are later referenced for augmenting incidents. Through advanced hunting, examples of lateral movement are surfaced, and real attack behaviors are analyzed. Signals are then formed by aggregating telemetry, and behavioral models are defined and computed.

Diagram showing steps for specifying statistical models for detecting lateral movement

Figure 3. Specifying statistical models to detect lateral movement encoding behaviors

Behavioral models are carefully designed by statisticians and threat experts working together to combine best practices from probabilistic reasoning and security, and to precisely reflect the attacker landscape.

With behavioral models specified, the process for incident augmentation proceeds by applying fuzzy mapping of events to the respective behaviors, followed by estimating the likelihood of an attack. For example, if the estimated likelihood of an attack is sufficiently higher when the lateral movement behaviors are included, then the events are linked. Figure 4 shows the flow of this logic. We have demonstrated that combining this modeling with a feedback loop based on expert knowledge and real-world examples accurately discovers attack chains.

Diagram showing steps of algorithm for augmenting incidents using graph inference

Figure 4. Flow of incident augmentation algorithm based on graph inference

Chaining together the flow of this logic in a graph exposes attacks as they traverse a network. Figure 5 shows, for instance, how alerts can be leveraged as nodes and DCOM traffic (TCP port 135) as edges to identify lateral movement across machines. The alerts on these machines can then be fused together into a single incident. Visualizing these edges and nodes in a graph shows how a single compromised machine could allow an attacker to move laterally to three machines, one of which was then used for even further lateral movement.

Diagram showing relevant alerts as an attack move laterally from one machine to other machines

Figure 5. Correlating attacks as they pivot through machines
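A minimal sketch of this graph construction using networkx (the devices, alerts, and connections are invented): devices become nodes carrying their alerts, observed device-to-device connections such as DCOM traffic on TCP port 135 become edges, and connected subgraphs that contain alerts become candidate incidents.

# Minimal graph-based correlation sketch with networkx; all events are invented.
import networkx as nx

G = nx.Graph()

# Nodes are devices; attach any alerts raised on them.
alerts = {
    "host-a": ["credential theft"],
    "host-b": ["suspicious service installation"],
    "host-d": ["ransomware behavior"],
}
for host, host_alerts in alerts.items():
    G.add_node(host, alerts=host_alerts)

# Edges are observed device-to-device connections in the relevant time window,
# for example DCOM traffic on TCP port 135.
connections = [("host-a", "host-b", 135), ("host-b", "host-d", 135), ("host-x", "host-y", 443)]
for src, dst, port in connections:
    G.add_edge(src, dst, port=port)

# Candidate incidents: connected subgraphs that contain at least one alert.
for component in nx.connected_components(G):
    component_alerts = [a for host in component for a in G.nodes[host].get("alerts", [])]
    if component_alerts:
        print(sorted(component), "->", component_alerts)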

Augmenting incidents with lateral movement intel

The PARINACOTA attack we described earlier is a human-operated ransomware campaign that involved compromising six newly onboarded servers. Microsoft Threat Protection automatically correlated the following events into an incident that showed the end-to-end attack chain:

  • A behavioral model identified RDP inbound brute force attempts that started a few days before the ransomware was deployed, as depicted in Figure 6.
  • When the initial compromise was detected, the brute force attempts were automatically identified as the cause of the breach.
  • Following the breach, attackers dropped multiple suspicious files on the compromised server and proceeded to move laterally to multiple other servers and deploy the ransomware payload. This attack chain raised 16 distinct alerts that Microsoft Threat Protection, applying the probabilistic reasoning method, correlated into the same incident indicating the spread of ransomware, as illustrated in Figure 7.

Graph showing increased daily inbound RDP traffic

Figure 6. Indicator of brute force attack based on time series count of daily inbound public IP
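As a rough illustration of the kind of time-series indicator shown in Figure 6, the sketch below flags days where the count of distinct inbound public IPs attempting RDP sign-ins jumps far above a rolling baseline; the counts, window, and threshold are invented.

# Rough brute-force indicator: flag days whose inbound RDP source-IP count spikes
# far above the recent baseline. All numbers are invented.
import statistics

daily_inbound_ip_counts = [3, 4, 2, 5, 3, 4, 6, 3, 4, 95, 120, 110]  # spike near the end

def flag_spikes(counts, window=7, threshold=4.0):
    flagged_days = []
    for day in range(window, len(counts)):
        baseline = counts[day - window:day]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid dividing by zero on flat baselines
        if (counts[day] - mean) / stdev > threshold:
            flagged_days.append(day)
    return flagged_days

print("Days with likely RDP brute force activity:", flag_spikes(daily_inbound_ip_counts))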

Diagram showing ransomware being deployed after an attacker has moved laterally

Figure 7. Representation of post breach and ransomware spreading from initial compromised server

Another area where constructing graphs is particularly useful is when attacks originate from unknown devices. These unknown devices can be misconfigured machines, rogue devices, or even IoT devices within a network. Even when there’s no robust telemetry from devices, they can still be used as linking points for correlating activity across multiple monitored devices.

In one example, as demonstrated in Figure 8, we saw lateral movement from an unmonitored device via SMB to a monitored device. That device then established a connection back to a command-and-control (C2) server, set up persistence, and collected a variety of information from the device. Later, the same unmonitored device established an SMB connection to a second monitored device. This time, the only action the attacker took was to collect information from the device.

The two devices shared a common set of events that were correlated into the same incident:

  • Sign-in from an unknown device via SMB
  • Collecting device information

Diagram showing suspicious traffic from unknown devices

Figure 8: Correlating attacks from unknown devices

Conclusion

Lateral movement is one of the most challenging areas of attack detection because it can be a very subtle signal amidst the normal hum of a large environment. In this blog we described a data-driven approach for identifying lateral movement in enterprise networks, with the goal of driving incident-level discovery of attacks, delivering on the Microsoft Threat Protection (MTP) promise to provide coordinated defense against attacks. This approach works by:

  • Consolidating signals from Microsoft Threat Protection’s unparalleled visibility into endpoints, identities, data, and applications.
  • Forming automated, compound questions of the data to identify evidence of an attack across the data ecosystem.
  • Building subgraphs of lateral movement across devices by modeling attack behavior probabilistically.

This approach combines industry-leading optics, expertise, and data science, resulting in automated discovery of some of the most critical threats in customer environments today. Through Microsoft Threat Protection, organizations can uncover lateral movement in their networks and gain understanding of end-to-end attack chains. Microsoft Threat Protection empowers defenders to automatically stop and resolve attacks, so security operations teams can focus their precious time and resources on more critical tasks, including performing mitigation actions that can remove the ability of attackers to move laterally in the first place, as outlined in some of our recent investigations here and here.

 

 

Justin Carroll, Cole Sodja, Mike Flowers, Joshua Neil, Jonathan Bar Or, Dustin Duran

Microsoft Threat Protection Team

 

The post The science behind Microsoft Threat Protection: Attack modeling for finding and stopping evasive ransomware appeared first on Microsoft Security.

Calling for security research in Azure Sphere, now generally available

February 24th, 2020

Today, Microsoft released Azure Sphere into General Availability (GA). Azure Sphere’s mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. Azure Sphere is an end-to-end solution for securely connecting existing equipment and for creating new IoT devices with built-in security. The solution includes hardware, OS, and …


The post Calling for security research in Azure Sphere, now generally available appeared first on Microsoft Security Response Center.

Announcing the Xbox Bounty program

January 30th, 2020

Announcing the new Xbox Bounty. The Xbox bounty program invites gamers, security researchers, and technologists around the world to help identify security vulnerabilities in the Xbox network and services, and share them with the Microsoft Xbox team through Coordinated Vulnerability Disclosure (CVD).

The post Announcing the Xbox Bounty program appeared first on Microsoft Security Response Center.

Announcing the Microsoft Identity Research Project Grant

January 9th, 2020

We are excited to announce the Microsoft Identity Research Project Grant, a new opportunity in partnership with the security community to help protect Microsoft customers. This project grant awards up to $75,000 USD for approved research proposals that improve the security of the Microsoft Identity solutions in new ways for both Consumers (Microsoft Account) and Enterprise (Azure Active Directory).

The post Announcing the Microsoft Identity Research Project Grant appeared first on Microsoft Security Response Center.

Microsoft Identity Bounty Improvements

October 23rd, 2019

Introducing the ElectionGuard Bounty program

October 18th, 2019

Cybersecurity: a question of trust

This post is authored by Robert Hayes, Senior Director and Chief Security Advisor in Microsoft’s Enterprise Cybersecurity Group.

With the scale, scope, and complexity of cyber-attacks increasing by the week, cybersecurity is increasingly being seen as a primary issue for CEOs & Boards.

Advice is not hard to find, and there are a multitude of information sources and standards; the in-house CIO will have a view, and of course there are a myriad of vendors, each with a solution that promises to be the answer to all security problems.

Trust is at the heart of a successful security strategy, yet knowing who and what can be trusted, and whether that trust should be absolute or conditional, is extremely difficult.

In my conversations with CEOs I often ask them their degree of trust in five key security related areas:

  • The people who work in their organization
  • The organizations in their supply chain
  • The integrity, resilience & security of their existing infrastructure
  • The integrity, resilience & security of cloud based infrastructures
  • The advice they receive, both internal & external

Unsurprisingly, the answer to each question is always a varying degree of conditional, but not absolute, trust.

Where the conversation becomes interesting is when the CEO and I jointly explore whether the infrastructure, processes, and policies of their organization reflect their intent to avoid absolute trust in these five key areas. Invariably, the answer is no.

Recurring examples of this inconsistency, each carrying significant organizational risk, are:

  • IT administrators having unfettered and unaudited access to all corporate systems, without effective security mitigations such as multi-factor authentication and privileged access workstations in place.
  • HR departments not instructing the IT department to cancel user access privileges for days, often weeks, after an employee is terminated or leaves the company.
  • Supply chain contracts drawn up with no security provisions, standards, or audit clauses.
  • No due diligence or impartial advice at Board level on the assurances and assertions made by both in-house IT teams and vendors on integrity, resilience and security.

A common closing theme of these conversations is the need for CEOs and Boards to have impartial advice and support to help them robustly challenge and undertake effective due diligence in this critical area, and the difficulty achieving this.

In the US, proposed SEC regulation would mean that companies, in particular publicly listed firms, must have a cyber expert on their Board, yet there are currently very few executive or non-executive directors who have this skill set and are comfortable operating at Board level.

An alternative, but expensive, option is to buy in the skill set from a third party, and there are many consultancies that will be delighted to have this conversation. However, some consultancies also have a vested interest in system integration, and their advice may not be as impartial as it seems.

Finally, there exists the challenging option of changing the relationship with key suppliers away from the classic customer-vendor model to one closer to a trusted strategic partner, supported by a robust due-diligence process. Many organizations are seeking to move closer to this type of relationship, whilst still maintaining sufficient distance to satisfy probity and procurement rules.

Whilst each of these options has challenges, the reality remains that without a trusted cybersecurity advisor, CEOs and Boards will continue to make decisions without effective challenge or scrutiny, leaving their organization vulnerable to cyberattack.

To learn more about how Microsoft can help you ensure security while enabling your digital transformation, visit us at Microsoft Secure.

Robert Hayes is a Senior Director and Chief Security Advisor in Microsoft’s Enterprise Cybersecurity Group.

Categories: cybersecurity, SEC, security

Top Five Security Threats Facing Your Business and How to Respond

This post was authored by Ann Johnson, Vice-President, Enterprise Cybersecurity Group

Headlines highlighting how vulnerable we are to cyber threats are now all too commonplace. The statistics on security events and successful network breaches continue a trend that favors attackers. These bad actors are getting faster at network compromise and data theft, while their dwell times inside networks have increased to over 200 days according to most of the major annual cybersecurity reports. The result of these voluminous and persistent threats has been hundreds of millions of dollars in lost business alone, not counting the long-term costs of diminished customer and citizen confidence.

Still, organizations may face even greater risks as they try to fend off sophisticated attackers against the backdrop of an ever-expanding network footprint. The new network now includes myriad personal devices, virtualized workloads, and sensors that represent rapidly increasing points of connectivity as well as potential compromise.

When considering these trends, it is clear that the traditional means of protecting organizations are not as effective as they once were. Static access controls like firewalls and intrusion prevention systems placed at network ingress and egress points are being easily evaded by attackers because the communications paths in and out of networks are too complex and dynamic. Also broad use of personal devices inside corporate networks has dissolved what used to be a hardened network boundary. We no longer conduct business within a perimeter of highly controlled, corporate-issued end user devices that gain access only under the strictest of authentication and authorization controls. Instead, the modern enterprise enables dynamic communities of employees, contractors, business partners and customers as well as their data and applications, all connected by an agile digital fabric that is optimized for sharing and collaboration.

In today's networks, then, we have to consider that identity is the new perimeter to be protected. Identity in this case does not mean only the device and its physical location but also the data, applications, and user information it contains. Given that 60% of all breaches still originate at an endpoint compromised through a phishing scam or social engineering attack, it is no wonder that a risk mitigation strategy with identity at its center is top of mind for many business and technology leaders.

In fact, cyber security is a boardroom level agenda item today. Business leaders want to ensure that they have in place the investments necessary to protect intellectual property and customer data, keeping their businesses out of the headlines that damage reputation and affect profitability. CIOs and CISOs feel caught between seemingly opposing goals of enabling digital transformation while protecting data and intellectual property at all times. These are concerns they share with their teams in IT and operations who feel equally burdened to balance performance and accessibility with rightful and appropriate resource use. Cybersecurity as we have all come to understand, can be either a critical barrier or key enabler to an organization’s ability to be productive. Current top of mind concerns for protecting the modern enterprise coalesce around 5 key areas: infrastructure, SaaS, devices, identity and response.

  1. Infrastructure – The public cloud offers unlimited potential for scaling business. On-demand compute and storage are only a small portion of the benefits of a highly agile IT environment. Easy access to applications, services and development environments promises to redefine business agility. Naturally, more and more organizations are taking critical workloads to the public cloud. Still the migration to an environment that is provisioned and managed by a non-organizational stakeholder creates new security challenges. So the top of mind question is: “How do I secure my cloud resources?”

Going to the cloud does not mean relinquishing security control or accepting a security posture that is less secure for cloud-hosted workloads relative to on-premises ones. In fact, the selection of cloud provider can mean having access to the very latest in security technologies, even more granular control, and faster response than is possible with security in traditional networks. As a first step, security stakeholders need to understand how sensitive and compliance-intensive their cloud-hosted workloads and data are. They should then opt for access controls that limit use to only what is business appropriate and emulate the access policies already in place for on-premises workloads. Enrolling in cloud workload access monitoring will also ensure that any events that deviate from desired security policies can be flagged as indicators of possible compromise. Cloud users should also be familiar with the security technologies offered by their provider, whether native or through partnership. This gives cloud users options for implementing the kind of multi-tiered security architecture required to ensure least privilege access, inspect content, and respond to potential threats.

Key takeaways

  • Monitor workload access and security policies in place
  • Identify deviations from security policies and indicators of possible compromise
  • Deploy new security controls appropriate for your cloud environment

2. SaaS – Whether a business is hosting critical workloads in the public cloud or not, its employees are surely using applications there. The convenience and ubiquity of these applications means broad user adoption for the ease of information sharing and collaboration they enable. As a result, important, security- and compliance-sensitive data may be making its way to the public cloud without security stakeholders' knowledge. The question from businesses then is: "How do I protect my corporate data?"

Organizations want to make sure their employees are as productive as they can be. To that end, many are allowing them to bring their own devices and even their own applications into the network. This agility comes with some added security risk. Fortunately, there are ways to mitigate it. Ultimately the goal is to derive all of the benefits these SaaS applications offer without violating company use and compliance policies for data sharing and storage. Additionally, firms must ensure that employees' use of SaaS apps does not unwittingly enable data exfiltration by bad actors. Limiting risk comes down to enacting a few of the basics that ensure safe use. For starters, there's a need to identify which SaaS applications are in use in the network and whether they are in line with company policy or on a safe list. Granular access rights management will limit the use of even the safe apps to those persons who have a business need for them. Where possible, policies should be in place that require data to be encrypted at rest, especially if it is being stored in the cloud. Having the ability to periodically update the safe list of apps and monitor all use can alert security administrators when unsanctioned applications appear among an organization's communications. With these types of facilities in place, stakeholders can be promptly alerted to unsanctioned application use. When unwanted application use is detected, it is time to block those applications, modify or revoke the privileges allowing access to them, and, as a further precaution, remotely wipe or delete data stored through those applications.

Key takeaways

  • Apply rights management, identify unsanctioned apps, contain, classify and encrypt data
  • Be notified of unauthorized data access or attempts
  • Block suspicious apps, revoke unauthorized access and remotely wipe company data

3. Devices – Smartphones, tablets, and self-sourced laptops are the new network perimeter and, at times, its weakest links. Whether owned by the organization or not, they most certainly contain business-valuable data that is at high risk. Because mobile devices often connect from public networks and may not have the most up-to-date protections, these endpoints are popular targets for the installation of botnets or malware. Use of personally sourced devices is a new and seemingly permanent reality, prompting organizations to broadly ask: "How do I keep company information secure?"

Many years ago, risk from mobile devices was ameliorated by installed agents and thick clients that provided security controls right on the device itself in a centralized way. Today, with employee self-sourced devices, the installation of such clients is not always feasible. Still today’s security administrators have to accommodate a heterogeneous end-user device environment comprised of various form factors and OSes while applying consistent and organizationally sanctioned controls to all of them. A cloud-based approach can provide a lot of flexibility and control here. From the cloud, endpoint connectivity to network resources can be centrally managed through security policies that restrict where devices can go based on their security posture, installed protections or location-based access rights. Command of devices from a central location ensures not only consistent policy enforcement but automation so that when anomalous device behaviors or connection patterns are detected, centralized command can restrict access, quarantine the affected device and even wipe it clean so that the threat is fully contained.

Key takeaways

  • Manage company and personal devices to classify and encrypt data to ensure compliance
  • Automatically identify compromised or questionable end points
  • Quickly respond to quarantine, wipe and remediate compromised devices

4. Identity – Despite all of the investments organizations make in security and threat mitigation, identity will be compromised. The latest data tells us that way too many of us click on links and attachments that we should not. From that point on, the bad actor has gained a foothold in the network and may set about moving laterally, looking for sensitive information to steal while impersonating the legitimate user. This common scenario is what makes many businesses ask: “How can I ensure identity protection?”

All of the major cybersecurity reports and indices point to this as the most common component of a data breach – the stolen identity. A security strategy for any organization or business needs to have this as a central tenet. The protection and management of credentials that give resource access to customers, employees, partners, and administrators is foundational to sound security practice. Implementing multi-factor authentication broadly for all applications and services is a good starting point. It should nevertheless be complemented by facilities for monitoring authentication and authorization events, not only for users but also, and especially, for privileged users and administrators. This type of monitoring offers the best opportunity to identify attempts by attackers to move laterally through privilege escalation. Once suspicious or anomalous activity is flagged, optional automated response can ensure that access requirements are elevated on the fly and privilege escalation requests are verified as legitimate.

Key takeaways

  • Augment passwords with additional authentication layers
  • Identify breaches early through proactive notification of suspicious behavior
  • Automatically elevate access requirements based on your policy and provide risk-based conditional access

5. Response – Each year, organizations are subjected to tens of thousands of security events, making the business of protecting critical assets continuous. Given that threat dwell times are 200-plus days, bad actors have ample opportunity to move "low and slow" throughout networks after the initial compromise. Naturally, security administrators and stakeholders are left to ask: "How can I better respond to ongoing threats?"

The potency and frequency of today's cyber threats require a security strategy built on the assumption of compromise. A network or device may not be breached today but remains at risk, so the process of protecting, detecting, and responding to a breach is a continuous one. The data that is being exchanged by endpoints and shuttled among data centers and hybrid clouds contains a lot of information about the security state of those endpoints and resources. The key to unlocking that intelligence is analytics, and specifically the type of analytics made possible through machine learning. Having the ability to monitor large amounts of traffic and information in a continuous fashion and unearth anomalous behavior is, and will be, key to shortening the time to detection of a breach or compromise. Behavioral analytics not only tell us what behavior is out of the norm or unwarranted but also inform us of good and desired connectivity. By understanding both anomalous and appropriate traffic patterns, organizations can fine-tune access controls that are just right for enabling business yet limiting risk. Further, with continuous analytics, the process of determining the right access controls for the environment at a given time can be as dynamic and responsive as users' access needs.

Key takeaways

  • Use analysis tools to monitor traffic and search for anomalies
  • Use learnings from behavioral analysis to build a map of entity interactions
  • Practice just in time and just enough access control

In summary, security threats may be common to businesses and organizations of all types, but the way they are addressed can vary greatly. In the modern enterprise driven by mobility and cloud, architecting for security represents an opportunity for unprecedented agility. With a strategy built on identity as the new perimeter and access to continuous processes to protect, detect, and respond to threats, a business can be as secure as it is productive. Watch the on-demand webinar – Top 5 Security Threats – with Julia White and myself to hear more about our approach to cybersecurity, or visit us at Microsoft Secure to learn more about security.

Categories: cybersecurity, security, Tips & Talk

Keeping Adobe Flash Player up to date

Years ago, Java exploits were a primary attack vector for many attackers looking to infect systems, but more recently, Adobe Flash Player took that mantle.

After accounting for almost half of object detections during some quarters in 2014, Java applets on malicious pages decreased to negligible levels by the end of 2015, owing to a number of changes that have been made to both Java and Internet Explorer over the past two years.

In January 2014, Java Runtime Environment was updated to require all applets running in browsers to be digitally signed by default. Later that year, Microsoft published updates for Internet Explorer versions 8 through 11 that began blocking out-of-date ActiveX controls. Windows 10's default browser, Microsoft Edge, does not support Java or ActiveX at all, and other browsers like Google's Chrome and Mozilla's Firefox are doing the same.

With defenses against Java attacks gaining the upper hand, Flash Player objects have become the most commonly detected threat hosted on malicious web pages by an overwhelming margin. This type of exploit has led the way in each of the past four quarters, from a low of 93.3 percent in the first quarter of 2015, to an all-time high of 99.2 percent last fall.


While this information may be unsettling for security teams whose web sites and applications rely on Flash functionality, it’s clearly an important piece of intelligence. Knowing where attackers are targeting their cyber threats makes it easier to plan mitigations to defend against malicious web pages. It also illustrates the importance of keeping your full technology stack – including Adobe Flash Player – updated. And fortunately, as with Java, modern browser mitigations are beginning to turn the tide against Flash exploits as well.

Both Internet Explorer 11 and Microsoft Edge on Windows 10 help mitigate many web-based attacks. For example, Internet Explorer 11 benefits from IExtensionValidation, which can help defend against Adobe Flash malware.

Real-time security software can implement IExtensionValidation to block ActiveX controls from loading on malicious pages. When Internet Explorer loads a webpage that includes ActiveX controls, the browser calls the security software to scan the HTML and script content on the page before loading the controls themselves. If the security software determines that the page is malicious (for example, if it identifies the page as an exploit kit landing page), it can direct Internet Explorer to prevent individual controls or the entire page from loading.

For a thorough analysis on the state of malware in the latter half of 2015, take a look at our latest Security Intelligence Report. And for a high-level look at the top ten trends and stats that matter most to security professionals right now, be sure and download our 2016 Trends in Cybersecurity e-book.

Keep Microsoft software up to date — and everything else too

September 14th, 2016

Many of the CIOs and CISOs that I talk to have, over time, developed mature vulnerability assessment methodologies and security updating processes. But frequently, I find that the focus of these processes is squarely on keeping Microsoft operating systems and browsers up to date. Of course, vulnerabilities in popular operating systems or browsers have the potential to affect a broad audience. Another reason for this focus is that Microsoft has made updating relatively easy by offering updates via Windows Update, Microsoft Update, and various tools like Windows Server Update Services and others.

But data from our latest Security Intelligence Report suggests that customers need to keep all of their software up-to-date, not just Microsoft software.

In the last half of 2015 there were nearly 3,300 vulnerability disclosures across the industry, of which 305 were in Microsoft products. With more than 90 percent of reported vulnerabilities occurring outside the Microsoft portfolio, organizations need to monitor their entire technology stack to minimize their risk.


Microsoft products accounted for less than 10 percent of industrywide vulnerabilities in the second half of 2015.

This is consistent with previous years as well. The software industry worldwide includes thousands of vendors, and historically, vulnerabilities for Microsoft software have accounted for between three and ten percent of disclosures in any six-month period.

To find out what’s happening in the world of software vulnerabilities across your IT environment, take some time to review our latest Security Intelligence Report and the information available through the National Vulnerability Database (NVD), the U.S. government’s repository of standards-based vulnerability management data. And for a high-level look at the top ten trends and stats that matter most to security professionals right now, be sure to download our 2016 Trends in Cybersecurity e-book.
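
If you want to turn that monitoring into something repeatable, the NVD also exposes a public REST API that can be scripted against. The sketch below is a minimal example, assuming the NVD API 2.0 endpoint and its documented keywordSearch, cvssV3Severity, and publication-date parameters (the 2.0 API post-dates this post, so check the current NVD documentation), plus a hypothetical product watch list.

```python
import requests
from datetime import datetime, timedelta, timezone

# Hypothetical watch list; replace with the products actually deployed in your environment.
WATCHED_PRODUCTS = ["adobe flash player", "openssl", "oracle java"]

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_high_severity(keyword: str, days: int = 30):
    """Fetch CVE IDs published in the last `days` days that match `keyword` and are rated HIGH."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = {
        "keywordSearch": keyword,
        "cvssV3Severity": "HIGH",
        # Date format per the NVD API docs at the time of writing; the API also limits the
        # size of publication-date ranges and the request rate, so consult the current docs.
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

for product in WATCHED_PRODUCTS:
    cve_ids = recent_high_severity(product)
    print(f"{product}: {len(cve_ids)} recent high-severity CVEs {cve_ids}")
```

In practice you would feed results like these into your patch-management or ticketing workflow rather than printing them.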

Top security trends in IoT

The continuous connection of smart devices across networks, commonly called the Internet of Things (IoT), is driving a transformation in how enterprises all over the world manage network infrastructure and digital identities.

With such rapid change comes new cybersecurity challenges. Many organizations are hesitant to tap into the power of the IoT due to the complexities and risk associated with managing such a diverse – and sometimes unclear – environment. But it is possible to secure your networks, enhance productivity, and protect customers in this evolving digital landscape.

IoT security doesn’t have to be overwhelming. But it does require a proactive and strategic mindset, and the first step is to understand IoT security trends.

Top trends

IoT offers an expanding horizon of opportunity that shouldn’t be ignored because of security concerns. With foresight into these current trends, practical planning, and persistent implementation, you can move your organization’s vision for IoT forward with confidence in your security practices.

For insights to help you improve your security posture, visit us at Microsoft Secure.

Categories: cybersecurity, IoT, security, Trends Tags:

Rise in severe vulnerabilities highlights importance of software updates

August 17th, 2016 No comments

In the context of computer security, vulnerabilities are weaknesses in software that could allow an attacker to compromise the integrity, availability, or confidentiality of either the software itself or the system it’s running on. Some of the worst vulnerabilities allow attackers to exploit the compromised system by causing it to run malicious code without the user’s knowledge. The effects of this can range from the annoying (experiencing unwanted pop-up ads) to the catastrophic (leaking sensitive customer information).

For this reason, disclosing vulnerabilities to the public as they are found is an important part of the software industry. It’s an effort that goes well beyond the software companies who develop the code. Disclosures can come from a variety of sources, including publishers of the affected software, security software vendors, independent security researchers, and even malware creators.

Attackers and the malware they create routinely attempt to use unpatched vulnerabilities to compromise and victimize organizations, so it’s imperative that CIOs, CISOs and the rest of an organization’s security team pay close attention to disclosures as they are announced. Doing so can help the security team understand if their IT environment is at increased risk, and whether putting new mitigations in place is warranted.

Industry-wide vulnerability disclosures each half year into the second half of 2015

The importance of tracking disclosures was highlighted this year, as vulnerability disclosures across the industry increased 9.4 percent between the first and second half of 2015, to almost 3,300.

Even more troubling, disclosures of high-severity vulnerabilities increased 41.7 percent across the industry in the second half of 2015, to account for 41.8 percent of the total — the largest share for such vulnerabilities in at least three years.

These are the vulnerabilities that security teams dread as they enable attackers to gain easy access to software, PCs, devices, and servers. For organizations that work with sensitive customer data or that must comply with security regulations to maintain contracts, the results of such an infection are potentially dire.

Vendors with a known vulnerability in their products will generally issue a patch to close the door, so staying abreast of those updates is a critical concern for security professionals. With over 6,000 vulnerabilities publicly disclosed per year across the industry, it’s important that organizations assess all software in their IT environment and ensure that it is updated.
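
One simple, hypothetical way to operationalize that advice is to compare the software inventory you already collect against the minimum patched version published in each vendor's advisories, as in the sketch below. The product names, versions, and host names are illustrative placeholders; in practice this data would come from your asset-management system and vendor security bulletins.

```python
# Minimal sketch: flag inventory entries running below a vendor's minimum patched version.

def version_tuple(v: str):
    """Turn a dotted version string into a comparable tuple, e.g. '18.0.0.232' -> (18, 0, 0, 232)."""
    return tuple(int(part) for part in v.split("."))

# Illustrative baselines; real values come from vendor advisories.
minimum_patched = {
    "adobe flash player": "21.0.0.213",
    "oracle java 8": "1.8.0.91",
}

# Illustrative inventory; real data comes from your asset-management tooling.
inventory = [
    {"host": "wks-014", "product": "adobe flash player", "version": "18.0.0.232"},
    {"host": "srv-002", "product": "oracle java 8", "version": "1.8.0.101"},
]

for entry in inventory:
    baseline = minimum_patched.get(entry["product"])
    if baseline and version_tuple(entry["version"]) < version_tuple(baseline):
        print(f"{entry['host']}: {entry['product']} {entry['version']} is below patched baseline {baseline}")
```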

For an analysis of vulnerabilities disclosed in the latter half of 2015, take a look at our latest Security Intelligence Report and the information available through the NVD. And for a high-level look at the top ten trends and stats that matter most to security professionals right now, be sure to download our 2016 Trends in Cybersecurity e-book.

Learn more at Microsoft Secure.

Categories: cybersecurity, security, vulnerabilities Tags:

Managing cloud security: Four key questions to evaluate your security position

As cloud computing and the Internet of Things (IoT) continue to transform the global economy, businesses recognize that securing enterprise data must be viewed as an ongoing process. Securing the ever-expanding volume, variety, and sources of data is not easy; however, with an adaptive mindset, you can achieve persistent and effective cloud security.

The first step is knowing the key risk areas in cloud computing and IoT processes and assessing whether and where your organization may be exposed to data leaks. File sharing solutions improve the way people collaborate, but they can also be a serious point of vulnerability. Mobile workforces decentralize data storage and dissolve traditional business perimeters. SaaS solutions turn authentication and user identification into an always-on and always-changing concern.

Second, it’s worth developing the habit, if you haven’t already, of reviewing and adapting your cloud security strategy as an ongoing capability. To that end, here are eight key questions to revisit regularly, four of which we dive deeper into below.

Is your security budget scaling appropriately?

Security teams manage numerous security solutions and typically monitor thousands of security alerts every day. At the same time, they need to keep rapid response practices sharp and ready for deployment in case of a breach. Organizations must regularly verify that sufficient funds are allocated to cover day-to-day security operations as well as rapid, ad hoc responses if and when a breach is detected.

Do you have both visibility into and control of critical business data?

With potential revenue loss from a single breach reaching into the tens of millions of dollars, preventing data leaks is a central pillar of cloud security strategy. Regularly review how, when, where, and by whom your business data is being accessed, and continuously monitor whether permissions are appropriate for each user’s role and responsibilities as well as for the different types of data they handle.
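
One lightweight way to make that review routine, assuming you can export each user's effective permissions from your identity provider or storage audit APIs, is to diff those grants against a baseline for the user's role, as sketched below. The roles, permissions, and user names are made up for illustration.

```python
# Minimal sketch: flag permissions that exceed the baseline defined for a user's role.

# Illustrative role baselines; real ones come from your access-governance policy.
ROLE_BASELINES = {
    "analyst": {"read:reports"},
    "finance": {"read:reports", "read:financials", "write:financials"},
}

# Illustrative export of effective grants per user.
user_grants = {
    "avery": {"role": "analyst", "permissions": {"read:reports", "write:financials"}},
    "jordan": {"role": "finance", "permissions": {"read:reports", "read:financials"}},
}

for user, info in user_grants.items():
    excess = info["permissions"] - ROLE_BASELINES[info["role"]]
    if excess:
        print(f"Review {user}: permissions beyond the '{info['role']}' baseline: {sorted(excess)}")
```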

Are you monitoring shadow IT adequately?

Today, the average employee uses 17 cloud apps, and mobile users access company resources from a wide variety of locations and devices. Remote and mobile work coupled with the increasing variety of cloud-based solutions (often free) raises concerns that traditional on-premises security tools and policies may not provide the level of visibility and control you need. Check whether you can identify mobile device and cloud application users on your network, and monitor changes in usage behavior. To mitigate risks of an accidental data breach, teach current and onboarding employees your organization’s best practices for using ad hoc apps and access.
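
As a hypothetical starting point, the sketch below compares the cloud app domains observed in web proxy or firewall logs against a sanctioned-app list and flags the rest for review. The domains and log format are illustrative; a cloud access security broker or dedicated discovery tool would do this far more thoroughly.

```python
from collections import Counter

# Illustrative sanctioned list; in practice this comes from your approved-app catalog.
SANCTIONED_DOMAINS = {"outlook.office365.com", "sharepoint.com", "salesforce.com"}

# Illustrative, simplified proxy log entries: "<user> <destination domain>".
proxy_log = [
    "alice outlook.office365.com",
    "bob   unapproved-filesharing.example",
    "carol unapproved-filesharing.example",
    "dave  salesforce.com",
]

unsanctioned = Counter()
for line in proxy_log:
    user, domain = line.split()
    if domain not in SANCTIONED_DOMAINS:
        unsanctioned[domain] += 1

for domain, count in unsanctioned.most_common():
    print(f"Unsanctioned cloud app traffic: {domain} ({count} requests)")
```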

Is your remote access security policy keeping up?

Traditional remote access technologies build a direct channel between external users and your apps, and that makes it risky to publish internal apps to external users. Your organization needs a secure remote access strategy that will help you manage and protect corporate resources as cloud solutions, platforms, and infrastructures evolve. Consider using automated and adaptive policies to reduce time and resources needed to identify and validate risks.
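
The sketch below illustrates the idea of an adaptive policy in the simplest possible terms: the access decision is computed from signals such as device compliance, location, and sign-in risk rather than granted statically. The signals, weights, and thresholds are entirely hypothetical and stand in for the richer evaluation a real conditional access system performs.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool
    from_known_location: bool
    sign_in_risk: float  # 0.0 (low) to 1.0 (high), e.g. from an identity-protection service

def decide(req: AccessRequest) -> str:
    """Return an access decision based on a simple, illustrative risk score."""
    score = req.sign_in_risk
    if not req.device_compliant:
        score += 0.3
    if not req.from_known_location:
        score += 0.2
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "require multi-factor authentication"
    return "allow"

print(decide(AccessRequest("avery", device_compliant=True, from_known_location=True, sign_in_risk=0.1)))
print(decide(AccessRequest("jordan", device_compliant=False, from_known_location=False, sign_in_risk=0.3)))
```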


These are just a few questions to get you thinking about recursive, adaptive cloud security. Stay on top of your security game by visiting resources on Microsoft Secure.

Categories: Cloud Computing, IoT, SaaS, security Tags:

Transparency & Trust in the Cloud Series: Cincinnati, Cleveland, Detroit

March 17th, 2015 No comments

Customers at the Detroit “Transparency & Trust in the Cloud” event.

I had the opportunity to speak at three additional Transparency & Trust in the Cloud events last week in Cincinnati, Cleveland, and Detroit. These were the latest in the series that Microsoft is hosting, inviting customers to participate in select cities across the US.

For me personally, these events provide the opportunity to connect with customers in each city and learn which security and privacy challenges are top of mind for them. In addition, I get to hear first-hand how customers have been using the Cloud to drive their businesses forward, or, if they haven’t yet adopted Cloud services, what’s holding them back. I feel very fortunate, as the participating CIOs, their in-house lawyers, CISOs, and IT operations leaders haven’t been shy about sharing the expectations they have for prospective Cloud Providers, specifically around security, privacy, and compliance.

I was joined by other Microsoft Cloud subject matter experts: Microsoft’s Assistant General Counsel, Dennis Garcia, Principal IT Solution Manager, Maya Davis, Director of Audit and Compliance, Gabi Gustaf, and Cloud Architect, Delbert Murphy. This diverse cast helped provide an overview of the Microsoft Trustworthy Cloud Initiative from their unique perspectives and answer a range of technology, business process, and legal questions from attendees.

Here are just some of the types of questions these events garner, most recently in these three cities:

  • How does eDiscovery work in Microsoft’s Cloud? (see related posts)
  • What data loss prevention capabilities does Microsoft offer for Office 365, OneDrive and Microsoft Azure?
  • What data does Microsoft share with customers during incident response investigations?
  • Which audit reports does Microsoft provide to its Cloud customers?
  • What terms does Microsoft include in its Cloud contracts to help customers manage regulatory compliance obligations in EU nations?
  • What does the new ISO 27018 privacy certification, which Microsoft has achieved for its four major Cloud solutions (and is currently the only major Cloud provider to hold), provide to Microsoft’s Cloud customers?

These are great conversations! Thank you to all of the customers that have attended and participated in recent events.

There are still a few more scheduled in different cities across the country. If you are a customer and would like to learn more about the Microsoft approach to building the industry’s most trustworthy Cloud, please reach out to your account team to find out if one of these events is coming to your area.

I’m looking forward to seeing customers in Omaha and Des Moines in just a couple of weeks.

Transparency & Trust in the Cloud Series: Kansas City, St. Louis, Minneapolis

March 5th, 2015 No comments

Over the last few months, Microsoft has hosted a series of events to bring together Chief Information Officers (CIO) and their legal counsels, Chief Information Security Officers (CISO), as well as IT operations leaders from enterprises in cities across the US. These “Transparency & Trust in the Cloud” events aim to highlight and discuss the security, privacy, compliance, and transparency capabilities of Microsoft’s cloud services.

Recently, I was given the opportunity to attend and speak at the events in Kansas City, St. Louis, and Minneapolis. I was also able to speak directly with many enterprise customers in each city. I was joined by other Microsoft cloud subject matter experts, and together we answered a range of technology, business process, and legal questions from attendees; believe me, they had some well-thought-out, complex questions!

For example, in Kansas City, attendees asked about service level agreements and were provided with the Microsoft perspective by our Assistant General Counsel, Dennis Garcia. In St. Louis, we were asked about Microsoft’s own journey to move workloads and applications from on-premises to the cloud. Ryan Reed, from Microsoft IT, has been doing this work at Microsoft for some time, and shared architectural and development considerations with the audience. Enterprise customers in Minneapolis asked questions ranging from eDiscovery to security incident notifications, to the right to audit, to protecting sensitive healthcare information. These discussions are also extremely helpful to us at Microsoft in understanding which topics are top of mind for enterprise customers who are evaluating or adopting cloud services.

I would like to again thank the customers who attended these events. Thank you!

More meetings like these have been scheduled in different cities across the country. If you are a CIO, CISO, legal counsel, or operations leader for an enterprise organization and would like to learn more about the Microsoft approach to building the industry’s most trustworthy cloud, please reach out to your account team to inquire.

I’m looking forward to meeting more customers and having deeper discussions on trust and transparency in the cloud in the coming weeks.

Interview on “Taste of Premier” about Security Guidance for Windows 8.1, Windows Server 2012 R2 and IE 11

October 22nd, 2014 No comments

Aaron Margosis interviewed on Channel 9's Taste of Premier about Security Guidance for Windows 8.1, Windows Server 2012 R2 and IE 11:
http://channel9.msdn.com/Blogs/Taste-of-Premier/Taste-of-Premier-Security-Guidance-for-Windows-8-1-Windows-Server…
