Mitigating vulnerabilities in endpoint network stacks

May 4th, 2020

The skyrocketing demand for tools that enable real-time collaboration, remote desktops for accessing company information, and other services that enable remote work underlines the tremendous importance of building and shipping secure products and services. While this is magnified as organizations are forced to adapt to the new environment created by the global crisis, it’s not a new imperative. Microsoft has been investing heavily in security, and over the years our commitment to building proactive security into products and services has only intensified.

To help deliver on this commitment, we continuously find ways to improve and secure Microsoft products. One aspect of our proactive security work is finding vulnerabilities and fixing them before they can be exploited. Our strategy is to take a holistic approach and drive security throughout the engineering lifecycle. We do this by:

  • Building security early into the design of features.
  • Developing tools and processes that proactively find vulnerabilities in code.
  • Introducing mitigations into Windows that make bugs significantly harder to exploit.
  • Having our world-class penetration testing team test the security boundaries of the product so we can fix issues before they can impact customers.

This proactive work ensures we are continuously making Windows safer and finding as many issues as possible before attackers can take advantage of them. In this blog post we will discuss a recent vulnerability that we proactively found and fixed and provide details on tools and techniques we used, including a new set of tools that we built internally at Microsoft. Our penetration testing team is constantly testing the security boundaries of the product to make it more secure, and we are always developing tools that help them scale and be more effective based on the evolving threat landscape. Our investment in fuzzing is the cornerstone of our work, and we are constantly innovating this tech to keep on breaking new ground.

Proactive security to prevent the next WannaCry

In the past few years, much of our team’s efforts have been focused on uncovering remote network vulnerabilities and preventing events like the WannaCry and NotPetya outbreaks. Some bugs we have recently found and fixed include critical vulnerabilities that could be leveraged to exploit common secure remote communication tools like RDP or create ransomware issues like WannaCry: CVE-2019-1181 and CVE-2019-1182 dubbed “DejaBlue“, CVE-2019-1226 (RCE in RDP Server), CVE-2020-0611 (RCE in RDP Client), and CVE-2019-0787 (RCE in RDP client), among others.

One of the biggest challenges we regularly face in these efforts is the sheer volume of code we analyze. Windows is enormous and continuously evolving: 5.7 million source code files, with more than 3,500 developers doing 1,100 pull requests per day in 440 official branches. This rapid cadence and evolution allows us to add new features as well as proactively drive security into Windows.

Like many security teams, we frequently turn to fuzzing to help us quickly explore and assess large codebases. Innovations we’ve made in our fuzzing technology have made it possible to get deeper coverage than ever before, resulting in the discovery of new bugs, faster. One such bug is the remote code execution (RCE) vulnerability in Microsoft Server Message Block version 3 (SMBv3) tracked as CVE-2020-0796 and fixed on March 12, 2020.

In the following sections, we will share the tools and techniques we used to fuzz SMB, the root cause of the RCE vulnerability, and relevant mitigations to exploitation.

Fully deterministic person-in-the-middle fuzzing

We use a custom deterministic full system emulator tool we call “TKO” to fuzz and introspect Windows components. TKO provides the capability to perform full system emulation and memory snapshotting, as well as other innovations. As a result of its unique design, TKO provides several unique benefits to SMB network fuzzing:

  • The ability to snapshot and fuzz forward from any program state.
  • Efficiently restoring to the initial state for fast iteration.
  • Collecting complete code coverage across all processes.
  • Leveraging greater introspection into the system without too much perturbation.

While all of these actions are possible using other tools, our ability to seamlessly leverage them across both user and kernel mode drastically reduces the spin-up time for targets. To learn more, check out David Weston’s recent BlueHat IL presentation “Keeping Windows secure”, which touches on fuzzing, as well as the TKO tool and infrastructure.

Fuzzing SMB

Given the ubiquity of SMB and the impact demonstrated by SMB bugs in the past, assessing this network transfer protocol has been a priority for our team. While there have been past audits and fuzzers thrown against the SMB codebase, some of which predate the current SMB version, TKO’s new capabilities and functionalities made it worthwhile to revisit the codebase. Additionally, even though the SMB version number has remained static, the code has not! These factors played into our decision to assess the SMB client/server stack.

After performing an initial audit pass of the code to understand its structure and dataflow, as well as to get a grasp of the size of the protocol’s state space, we had the information we needed to start fuzzing.

We used TKO to set up a fully deterministic feedback-based fuzzer with a combination of generated and mutated SMB protocol traffic. Our goal for generating or mutating across multiple packets was to dig deeper into the protocol’s state machine. Normally this would introduce difficulties in reproducing any issues found; however, our use of emulators made this a non-issue. New generated or mutated inputs that triggered new coverage were saved to the input corpus. Our team had a number of basic mutator libraries for different scenarios, but we needed to implement a generator. Additionally, we enabled some of the traditional Windows heap instrumentation using verifier, turning on page heap for SMB-related drivers.

We began work on the SMBv2 protocol generator and took a network capture of an SMB negotiation with the aim of replaying these packets with mutations against a Windows 10, version 1903 client. We added a mutator with basic mutations (e.g., bit flips, insertions, deletions, etc.) to our fuzzer and kicked off an initial run while we continued to improve and develop further.
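To make the basic mutation strategies concrete, below is a minimal sketch in C (illustrative only; this is not the TKO mutator, and the function name and strategy selection are assumptions) showing how a single bit flip, byte insertion, or byte deletion could be applied to a copy of a captured packet before it is replayed against the target:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal illustrative mutator: applies one random basic mutation
 * (bit flip, byte insertion, or byte deletion) to a copy of a captured
 * packet. A real fuzzer layers many such mutations and keeps any input
 * that produces new code coverage. Returns the mutated length. */
size_t mutate(const uint8_t *in, size_t in_len, uint8_t *out, size_t out_cap)
{
    if (in_len == 0 || out_cap < in_len + 1)
        return 0;

    memcpy(out, in, in_len);
    size_t len = in_len;

    switch (rand() % 3) {
    case 0:                                   /* flip one bit */
        out[rand() % len] ^= (uint8_t)(1u << (rand() % 8));
        break;
    case 1: {                                 /* insert one random byte */
        size_t pos = rand() % (len + 1);
        memmove(out + pos + 1, out + pos, len - pos);
        out[pos] = (uint8_t)rand();
        len++;
        break;
    }
    default: {                                /* delete one byte */
        size_t pos = rand() % len;
        memmove(out + pos, out + pos + 1, len - pos - 1);
        len--;
        break;
    }
    }
    return len;
}

In a coverage-guided loop like the one described above, any mutated input that exercises new code paths is saved back to the corpus and becomes a seed for further mutation.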

Figure 1. TKO fuzzing workflow

A short time later, we came back to some compelling results. Replaying the first crashing input with TKO’s kdnet plugin revealed the following stack trace:

> tkofuzz.exe repro inputs\crash_6a492.txt -- kdnet:conn 127.0.0.1:50002

Figure 2. Windbg stack trace of crash

We found an access violation in srv2!Smb2CompressionDecompress.

Finding the root cause of the crash

While the stack trace suggested that a vulnerability exists in the decompression routine, it’s the parsing of length counters and offsets from the network that causes the crash. The last packet in the transaction needed to trigger the crash has ‘\xfcSMB’ set as the first bytes in its header, making it a COMPRESSION_TRANSFORM packet.

Figure 3. COMPRESSION_TRANSFORM packet details

The SMBv2 COMPRESSION_TRANSFORM packet starts with a COMPRESSION_TRANSFORM_HEADER, which defines where in the packet the compressed bytes begin and the length of the compressed buffer.

typedef struct _COMPRESSION_TRANSFORM_HEADER
{
    UCHAR  Protocol[4];   // Contains 0xFC, 'S', 'M', 'B'
    ULONG  OriginalCompressedSegmentSize;
    USHORT AlgorithmId;
    USHORT Flags;
    ULONG  Length;
}

In the srv2!Srv2DecompressData function shown in the graph below, we can see this COMPRESSION_TRANSFORM_HEADER struct being parsed out of the network packet and used to determine the pointers passed to srv2!SMBCompressionDecompress.

Figure 4. Srv2DecompressData graph

We can see that at 0x7e94, rax points to our network buffer, and the buffer is copied to the stack before the OriginalCompressedSegmentSize and Length are parsed out and added together at 0x7ED7 to determine the size of the resulting decompressed bytes buffer. Overflowing this value causes the decompression routine to write its results beyond the bounds of the destination SrvNet buffer, resulting in an out-of-bounds write (OOBW).

Figure 5. Overflow condition
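To illustrate the arithmetic behind this overflow, here is a simplified C sketch (identifiers and structure are assumptions for illustration, not the actual srv2.sys code):

#include <stdint.h>
#include <stdlib.h>

/* Both fields are attacker-controlled and come straight from the packet. */
typedef struct {
    uint32_t OriginalCompressedSegmentSize;
    uint32_t Length;
} transform_fields_t;

void *alloc_decompress_buffer_vulnerable(const transform_fields_t *f)
{
    /* Vulnerable pattern: the 32-bit addition can wrap. For example,
     * 0xFFFFFFFF + 0x11 == 0x10, so only 16 bytes are allocated while the
     * decompressor later writes far more, corrupting adjacent memory (OOBW). */
    uint32_t alloc_size = f->OriginalCompressedSegmentSize + f->Length;
    return malloc(alloc_size);
}

void *alloc_decompress_buffer_checked(const transform_fields_t *f)
{
    /* Hardened pattern: do the math in a wider type and reject wrap-around. */
    uint64_t total = (uint64_t)f->OriginalCompressedSegmentSize + f->Length;
    if (total == 0 || total > UINT32_MAX)
        return NULL;
    return malloc((size_t)total);
}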

Looking further, we can see that the Length field is parsed into esi at 0x7F04, added to the network buffer pointer, and passed to CompressionDecompress as the source pointer. As Length is never checked against the actual number of received bytes, it can cause decompression to read off the end of the received network buffer. Setting this Length to be greater than the packet length also causes the computed source buffer length passed to SmbCompressionDecompress to underflow at 0x7F18, creating an out-of-bounds read (OOBR) vulnerability. Combining this OOBR vulnerability with the previous OOBW vulnerability creates the necessary conditions to leak addresses and create a complete remote code execution exploit.

Figure 6. Underflow condition
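The read side can be sketched the same way (again with assumed identifiers, not the actual srv2.sys code): because the attacker-controlled Length field is never bounded by the number of bytes actually received, the computed source-buffer size wraps around to a huge value:

#include <stdint.h>

/* 'received' is how many bytes actually arrived on the wire, 'hdr_len' is
 * the size of the transform header, and 'length' is the attacker-controlled
 * Length field parsed from that header. */

uint32_t source_size_vulnerable(uint32_t received, uint32_t hdr_len, uint32_t length)
{
    /* No check against 'received': when length > received - hdr_len, the
     * unsigned subtraction wraps, so decompression reads far past the end
     * of the received network buffer (OOBR). */
    return received - hdr_len - length;
}

int source_size_checked(uint32_t received, uint32_t hdr_len, uint32_t length,
                        uint32_t *out)
{
    /* Hardened pattern: bound attacker-controlled fields by the number of
     * bytes that actually arrived before doing arithmetic with them. */
    if (hdr_len > received || length > received - hdr_len)
        return -1;                     /* malformed packet: reject it */
    *out = received - hdr_len - length;
    return 0;
}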

Windows 10 mitigations against remote network vulnerabilities

Our discovery of the SMBv3 vulnerability highlights the importance of revisiting protocol stacks regularly as our tools and techniques continue to improve over time. In addition to the proactive hunting for these types of issues, the investments we made in the last several years to harden Windows 10 through mitigations like address space layout randomization (ASLR), Control Flow Guard (CFG), InitAll, and hypervisor-enforced code integrity (HVCI) hinder trivial exploitation and buy defenders time to patch and protect their networks.

For example, turning vulnerabilities like the ones discovered in SMBv3 into working exploits requires finding writeable kernel pages at reliable addresses, a task that requires heap grooming and corruption, or a separate vulnerability in Windows kernel address space layout randomization (ASLR). Typical heap-based exploits taking advantage of a vulnerability like the one described here would also need to make use of other allocations, but Windows 10 pool hardening helps mitigate this technique. These mitigations work together and have a cumulative effect when combined, increasing the development time and cost of reliable exploitation.

Assuming attackers gain knowledge of our address space, indirect jumps are mitigated by kernel-mode CFG. This forces attackers to either use data-only corruption or bypass Control Flow Guard via stack corruption or yet another bug. If virtualization-based security (VBS) and HVCI are enabled, attackers are further constrained in their ability to map and modify memory permissions.

On Secured-core PCs these mitigations are enabled by default.  Secured-core PCs combine virtualization, operating system, and hardware and firmware protection. Along with Microsoft Defender Advanced Threat Protection, Secured-core PCs provide end-to-end protection against advanced threats.

While these mitigations collectively lower the chances of successful exploitation, we continue to deepen our investment in identifying and fixing vulnerabilities before they can get into the hands of adversaries.

 


Zero Trust Deployment Guide for Microsoft Azure Active Directory

April 30th, 2020

Microsoft is providing a series of deployment guides for customers who have engaged in a Zero Trust security strategy. In this guide, we cover how to deploy and configure Azure Active Directory (Azure AD) capabilities to support your Zero Trust security strategy.

For simplicity, this document will focus on ideal deployments and configuration. We will call out the integrations that need Microsoft products other than Azure AD and we will note the licensing needed within Azure AD (Premium P1 vs P2), but we will not describe multiple solutions (one with a lower license and one with a higher license).

Azure AD at the heart of your Zero Trust strategy

Azure AD provides critical functionality for your Zero Trust strategy. It enables strong authentication, a point of integration for device security, and the core of your user-centric policies to guarantee least-privileged access. Azure AD’s Conditional Access capabilities are the policy decision point for access to resources based on user identity, environment, device health, and risk—verified explicitly at the point of access. In the following sections, we will showcase how you can implement your Zero Trust strategy with Azure AD.

Establish your identity foundation with Azure AD

A Zero Trust strategy requires that we verify explicitly, use least privileged access principles, and assume breach. Azure Active Directory can act as the policy decision point to enforce your access policies based on insights on the user, device, target resource, and environment. To do this, we need to put Azure Active Directory in the path of every access request—connecting every user and every app or resource through this identity control plane. In addition to productivity gains and improved user experiences from single sign-on (SSO) and consistent policy guardrails, connecting all users and apps provides Azure AD with the signal to make the best possible decisions about the authentication/authorization risk.

  • Connect your users, groups, and devices:
    Maintaining a healthy pipeline of your employees’ identities as well as the necessary security artifacts (groups for authorization and devices for extra access policy controls) puts you in the best place to use consistent identities and controls, which your users already benefit from on-premises and in the cloud:

    1. Start by choosing the right authentication option for your organization. While we strongly prefer to use an authentication method that primarily uses Azure AD (to provide you the best brute force, DDoS, and password spray protection), follow our guidance on making the decision that’s right for your organization and your compliance needs.
    2. Only bring the identities you absolutely need. For example, use going to the cloud as an opportunity to leave behind service accounts that only make sense on-premises; leave on-premises privileged roles behind (more on that under privileged access), etc.
    3. If your enterprise has more than 100,000 users, groups, and devices combined, we recommend you follow our guidance building a high performance sync box that will keep your life cycle up-to-date.
  • Integrate all your applications with Azure AD:
    As mentioned earlier, SSO is not only a convenient feature for your users; it also strengthens your security posture, as it prevents users from leaving copies of their credentials in various apps and helps them avoid getting used to surrendering their credentials due to excessive prompting. Make sure you do not have multiple IAM engines in your environment. Not only does this diminish the amount of signal that Azure AD sees and allow bad actors to live in the seams between the two IAM engines, it can also lead to poor user experience and your business partners becoming the first doubters of your Zero Trust strategy. Azure AD supports a variety of ways you can bring apps to authenticate with it:

    1. Integrate modern enterprise applications that speak OAuth2.0 or SAML.
    2. For Kerberos and Form-based auth applications, you can integrate them using the Azure AD Application Proxy.
    3. If you publish your legacy applications using application delivery networks/controllers, Azure AD is able to integrate with most of the major ones (such as Citrix, Akamai, F5, etc.).
    4. To help migrate your apps off of existing/older IAM engines, we provide a number of resources—including tools to help you discover and migrate apps off of ADFS.
  • Automate provisioning to applications:
    Once you have your users’ identities in Azure AD, you can now use Azure AD to power pushing those user identities into your various cloud applications. This gives you a tighter identity lifecycle integration within those apps. Use this detailed guide to deploy provisioning into your SaaS applications.
  • Get your logging and reporting in order:
    As you build your estate in Azure AD with authentication, authorization, and provisioning, it’s important to have strong operational insights into what is happening in the directory. Follow this guide to learn how to persist and analyze the logs from Azure AD either in Azure or using a SIEM system of choice.

Enacting the 1st principle: least privilege

Giving the right access at the right time to only those who need it is at the heart of a Zero Trust philosophy:

  • Plan your Conditional Access deployment:
    Planning your Conditional Access policies in advance and having a set of active and fallback policies is a foundational pillar of your Access Policy enforcement in a Zero Trust deployment. Take the time to configure your trusted IP locations in your environment. Even if you do not use them in a Conditional Access policy, configuring these IPs informs the risk assessment of Identity Protection. Check out our deployment guidance and best practices for resilient Conditional Access policies.
  • Secure privileged access with privileged identity management:
    With privileged access, you generally take a different track to meeting the end users where they are most likely to need and use the data. You typically want to control the devices, conditions, and credentials that users use to access privileged operations/roles. Check out our detailed guidance on how to take control of your privileged identities and secure them. Keep in mind that in a digitally transformed organization, privileged access is not only administrative access, but also application owner or developer access that can change the way your mission critical apps run and handle data. Check out our detailed guide on how to use Privileged Identity Management (P2) to secure privileged identities.
  • Restrict user consent to applications:
    User consent to applications is a very common way for modern applications to get access to organizational resources. However, we recommend you restrict user consent and manage consent requests to ensure that no unnecessary exposure of your organization’s data to apps occurs. This also means that you need to review prior/existing consent in your organization for any excessive or malicious consent.
  • Manage entitlements (Azure AD Premium P2):
    With applications centrally authenticating and driven from Azure AD, you should streamline your access request, approval, and recertification process to make sure that the right people have the right access and that you have a trail of why users in your organization have the access they have. Using entitlement management, you can create access packages that users can request as they join different teams/projects and that assign them access to the associated resources (applications, SharePoint sites, group memberships). Check out how you can create your first access package. If deploying entitlement management is not possible for your organization at this time, we recommend you at least enable self-service paradigms in your organization by deploying self-service group management and self-service application access.

Enacting the 2nd principle: verify explicitly

Provide Azure AD with a rich set of credentials and controls that it can use to verify the user at all times.

  • Roll out Azure multi-factor authentication (MFA) (P1):
    This is a foundational piece of reducing user session risk. As users appear on new devices and from new locations, being able to respond to an MFA challenge is one of the most direct ways that your users can teach Azure AD that these are familiar devices/locations as they move around the world (without having administrators parse individual signals). Check out this deployment guide.
  • Enable Azure AD Hybrid Join or Azure AD Join:
    If you are managing the user’s laptop/computer, bring that information into Azure AD and use it to help make better decisions. For example, you may choose to allow rich client access to data (clients that have offline copies on the computer) if you know the user is coming from a machine that your organization controls and manages. If you do not bring this in, you will likely choose to block access from rich clients, which may result in your users working around your security or using Shadow IT. Check out our resources for Azure AD Hybrid Join or Azure AD Join.
  • Enable Microsoft Intune for managing your users’ mobile devices (EMS):
    What applies to laptops applies to users’ mobile devices as well. The more you know about them (patch level, jailbroken, rooted, etc.), the more you are able to trust or mistrust them and provide a rationale for why you block/allow access. Check out our Intune device enrollment guide to get started.
  • Start rolling out passwordless credentials:
    With Azure AD now supporting FIDO 2.0 and passwordless phone sign-in, you can move the needle on the credentials that your users (especially sensitive/privileged users) are using on a day-to-day basis. These credentials are strong authentication factors that can mitigate risk as well. Our passwordless authentication deployment guide walks you through how to roll out passwordless credentials in your organization.

Enacting the 3rd principle: assume breach

Provide Azure AD with rich signals about users, sessions, and devices so that it can detect compromise and limit the impact of a breach.

  • Deploy Azure AD Password Protection:
    While enabling other methods to verify users explicitly, you should not forget about weak passwords, password spray and breach replay attacks. Read this blog to find out why classic complex password policies are not tackling the most prevalent password attacks. Then follow this guidance to enable Azure AD Password Protection for your users in the cloud first and then on-premises as well.
  • Block legacy authentication:
    One of the most common attack vectors for malicious actors is to use stolen/replayed credentials against legacy protocols, such as SMTP, that cannot do modern security challenges. We recommend you block legacy authentication in your organization.
  • Enable identity protection (Azure AD Premium 2):
    Enabling identity protection for your users will provide you with more granular session/user risk signal. You’ll be able to investigate risk and confirm compromise or dismiss the signal which will help the engine understand better what risk looks like in your environment.
  • Enable restricted session to use in access decisions:
    To illustrate, let’s take a look at controls in Exchange Online and SharePoint Online (P1): When a user’s risk is low but they are signing in from an unknown device, you may want to allow them access to critical resources, but not allow them to do things that leave your organization in a non-compliant state. Now you can configure Exchange Online and SharePoint Online to offer the user a restricted session that allows them to read emails or view files, but not download them and save them on an untrusted device. Check out our guides for enabling limited access with SharePoint Online and Exchange Online.
  • Enable Conditional Access integration with Microsoft Cloud App Security (MCAS) (E5):
    Using signals emitted after authentication and with MCAS proxying requests to applications, you will be able to monitor sessions going to SaaS applications and enforce restrictions. Check out our MCAS and Conditional Access integration guidance and see how this can even be extended to on-premises apps.
  • Enable Microsoft Cloud App Security (MCAS) integration with identity protection (E5):
    Microsoft Cloud App Security is a UEBA product monitoring user behavior inside SaaS and modern applications. This gives Azure AD signal and awareness about what happened to the user after they authenticated and received a token. If the user’s pattern starts to look suspicious (e.g., the user starts to download gigabytes of data from OneDrive or starts to send spam emails in Exchange Online), a signal can be fed to Azure AD notifying it that the user seems to be compromised or high risk. On the next access request from this user, Azure AD can take the correct action to verify the user or block them. Just enabling MCAS monitoring will enrich the identity protection signal. Check out our integration guidance to get started.
  • Integrate Azure Advanced Threat Protection (ATP) with Microsoft Cloud App Security:
    Once you’ve successfully deployed and configured Azure ATP, enable the integration with Microsoft Cloud App Security to bring on-premises signal into the risk signal we know about the user. This enables Azure AD to know that a user is engaging in risky behavior while accessing on-premises, non-modern resources (like file shares), which can then be factored into overall user risk to block further access in the cloud. You will be able to see a combined Priority Score for each user at risk to give a holistic view of which ones your SOC should focus on.
  • Enable Microsoft Defender ATP (E5):
    Microsoft Defender ATP allows you to attest to the health of Windows machines, determine whether they are undergoing a compromise, and feed that signal into mitigating risk at runtime. Whereas Domain Join gives you a sense of control, Defender ATP allows you to react to a malware attack in near real time by detecting patterns where multiple user devices are hitting untrustworthy sites, and to respond by raising their device/user risk at runtime. See our guidance on configuring Conditional Access in Defender ATP.

Conclusion

We hope the above guides help you deploy the identity pieces central to a successful Zero Trust strategy. Make sure to check out the other deployment guides in the series by following the Microsoft Security blog.


Defending the power grid against supply chain attacks: Part 3 – Risk management strategies for the utilities industry

April 22nd, 2020

Over the last fifteen years, attacks against critical infrastructure (Figure 1) have steadily increased in both volume and sophistication. Because of the strategic importance of this industry to national security and economic stability, these organizations are targeted by sophisticated, patient, and well-funded adversaries. Adversaries often target the utility supply chain to insert malware into devices destined for the power grid. As modern infrastructure becomes more reliant on connected devices, the power industry must continue to come together to improve security at every step of the process.


Figure 1: Increased attacks on critical infrastructure

This is the third and final post in the “Defending the power grid against supply chain attacks” series. In the first blog I described the nature of the risk. Last month I outlined how utility suppliers can better secure the devices they manufacture. Today’s advice is directed at the utilities. There are actions you can take as individual companies and as an industry to reduce risk.

Implement operational technology security best practices

According to Verizon’s 2019 Data Breach Investigations Report, 80 percent of hacking-related breaches are the result of weak or compromised passwords. If you haven’t implemented multi-factor authentication (MFA) for all your user accounts, make it a priority. MFA can significantly reduce the likelihood that a user with a stolen password can access your company assets. I also recommend you take these additional steps to protect administrator accounts:

  • Separate administrative accounts from the accounts that IT professionals use to conduct routine business. While administrators are answering emails or conducting other productivity tasks, they may be targeted by a phishing campaign. You don’t want them signed into a privileged account when this happens.
  • Apply just-in-time privileges to your administrator accounts. Just-in-time privileges require that administrators only sign into a privileged account when they need to perform a specific administrative task. These sign-ins go through an approval process and have a time limit. This will reduce the possibility that someone is unnecessarily signed into an administrative account.

 


Figure 2: A “blue” path depicts how a standard user account is used for non-privileged access to resources like email and web browsing and day-to-day work. A “red” path shows how privileged access occurs on a hardened device to reduce the risk of phishing and other web and email attacks. 

  • You also don’t want the occasional security mistake like clicking on a link when administrators are tired or distracted to compromise the workstation that has direct access to these critical systems.  Set up privileged access workstations for administrative work. A privileged access workstation provides a dedicated operating system with the strongest security controls for sensitive tasks. This protects these activities and accounts from the internet. To encourage administrators to follow security practices, make sure they have easy access to a standard workstation for other more routine tasks.

The following security best practices will also reduce your risk:

  • Whitelist approved applications. Define the list of software applications and executables that are approved to be on your networks. Block everything else. Your organization should especially target systems that are internet-facing as well as Human-Machine Interface (HMI) systems that play the critical role of managing generation, transmission, or distribution of electricity.
  • Regularly patch software and operating systems. Implement a monthly practice to apply security patches to software on all your systems. This includes applications and operating systems on servers, desktop computers, mobile devices, network devices (routers, switches, firewalls, etc.), as well as Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices. Attackers frequently target known security vulnerabilities.
  • Protect legacy systems. Segment legacy systems that can no longer be patched by using firewalls to filter out unnecessary traffic. Limit access to only those who need it by using Just In Time and Just Enough Access principles and requiring MFA. Once you set up these subnets, firewalls, and firewall rules to protect the isolated systems, you must continually audit and test these controls for inadvertent changes, and validate with penetration testing and red teaming to identify rogue bridging endpoints and design/implementation weaknesses.
  • Segment your networks. If you are attacked, it’s important to limit the damage. By segmenting your network, you make it harder for an attacker to compromise more than one critical site. Maintain your corporate network as its own segment with limited or no connection to critical sites like generation and transmission networks. Run each generating site on its own network with no connection to other generating sites. This will ensure that should a generating site become compromised, attackers can’t easily traverse to other sites and have a greater impact.
  • Turn off all unnecessary services. Confirm that none of your software has automatically enabled a service you don’t need. You may also discover that there are services running that you no longer use. If the business doesn’t need a service, turn it off.
  • Deploy threat protection solutions. Services like Microsoft Threat Protection help you automatically detect, respond to, and correlate incidents across domains.
  • Implement an incident response plan: When an attack happens, you need to respond quickly to reduce the damage and get your organization back up and running. Refer to Microsoft’s Incident Response Reference Guide for more details.

Speak with one voice

Power grids are interconnected systems of generating plants, wires, transformers, and substations. Regional electrical companies work together to efficiently balance the supply and demand for electricity across the nation. These same organizations have also come together to protect the grid from attack. As an industry, working through organizations like the Edison Electric Institute (EEI), utilities can define security standards and hold manufacturers accountable to those requirements.

It may also be useful to work with The Federal Energy Regulatory Commission (FERC), The North American Electric Reliability Corporation (NERC), or The United States Nuclear Regulatory Commission (U.S. NRC) to better regulate the security requirements of products manufactured for the electrical grid.

Apply extra scrutiny to IoT devices

As you purchase and deploy IoT devices, prioritize security. Be careful about purchasing products from countries that are motivated to infiltrate critical infrastructure. Conduct penetration tests against all new IoT and IIoT devices before you connect them to the network. When you place sensors on the grid, you’ll need to protect them from both cyberattacks and physical attacks. Make them hard to reach and tamper-proof.

Collaborate on solutions

Reducing the risk of a destabilizing power grid attack will require everyone in the utility industry to play a role. By working with manufacturers, trade organizations, and governments, electricity organizations can lead the effort to improve security across the industry. For utilities in the United States, several public-private programs are in place to enhance the utility industry’s capabilities to defend its infrastructure and respond to threats.

Read Part 1 in the series: “Defending the power grid against cyberattacks”

Read “Defending the power grid against supply chain attacks: Part 2 – Securing hardware and software”

Read how Microsoft Threat Protection can help you better secure your endpoints.

Learn how MSRC developed an incident response plan

Bookmark the Security blog to keep up with our expert coverage on security matters. For more information about our security solutions visit our website. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Security guidance for remote desktop adoption

April 15th, 2020

As the number of remote workers quickly increased over the past two to three months, the IT teams in many companies scrambled to figure out how their infrastructures and technologies would be able to handle the increase in remote connections. Many companies were forced to enhance their capabilities to allow remote workers access to systems and applications from their homes and other locations outside the network perimeter. Companies that couldn’t make changes rapidly enough to increase capacity for remote workers might rely on remote access using the Remote Desktop Protocol, which allows employees to access workstations and systems directly.

Recently, John Matherly (founder of Shodan, the world’s first search engine for internet-connected devices) conducted some research on ports that are accessible on the internet, surfacing some important findings. Notably, there has been an increase in the number of systems accessible via the traditional Remote Desktop Protocol (RDP) port and a well-known “alternative” port used for RDP. A surprising finding from John’s research is the ongoing prevalent usage of RDP and its exposure to the internet.

Although Remote Desktop Services (RDS) can be a fast way to enable remote access for employees, there are a number of security challenges that need to be considered before using this as a remote access strategy. One of these challenges is that attackers continue to target the RDP port and service, putting corporate networks, systems, and data at risk (e.g., cybercriminals could exploit the protocol to establish a foothold on the network, install ransomware on systems, or take other malicious actions). In addition, it can be challenging to configure RDP security sufficiently to restrict a cybercriminal from moving laterally and compromising data.

Security considerations for remote desktop include:

  • Direct accessibility of systems on the public internet.
  • Vulnerability and patch management of exposed systems.
  • Internal lateral movement after initial compromise.
  • Multi-factor authentication (MFA).
  • Session security.
  • Controlling, auditing, and logging remote access.

Some of these considerations can be addressed using Microsoft Remote Desktop Services to act as a gateway to grant access to remote desktop systems. The Microsoft Remote Desktop Services gateway uses Secure Sockets Layer (SSL) to encrypt communications and prevents the system hosting the remote desktop protocol services from being directly exposed to the public internet.

Identify RDP use

To identify whether your company is using the Remote Desktop Protocol, you may perform an audit and review of firewall policies and scan internet-exposed address ranges and cloud services you use, to uncover any exposed systems. Firewall rules may be labeled as “Remote Desktop” or “Terminal Services.” The default port for Remote Desktop Services is TCP 3389, but sometimes an alternate port of TCP 3388 might be used if the default configuration has been changed.
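As a minimal illustration of the scanning step, the sketch below performs a plain TCP connect check against the default and commonly used alternate RDP ports using POSIX sockets (a hedged example, not a substitute for a proper vulnerability scanner; an open port only shows something is listening, not that it is RDP):

#include <stdio.h>
#include <netdb.h>
#include <unistd.h>
#include <sys/socket.h>

/* Report whether a TCP port on a host accepts connections. */
static int port_open(const char *host, const char *port)
{
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return 0;

    int reachable = 0;
    for (struct addrinfo *ai = res; ai != NULL && !reachable; ai = ai->ai_next) {
        int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            reachable = 1;
        close(fd);
    }
    freeaddrinfo(res);
    return reachable;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <host>\n", argv[0]);
        return 1;
    }
    const char *ports[] = { "3389", "3388" };   /* default and common alternate */
    for (int i = 0; i < 2; i++)
        printf("%s tcp/%s: %s\n", argv[1], ports[i],
               port_open(argv[1], ports[i]) ? "open" : "closed/filtered");
    return 0;
}

A full audit should still rely on firewall policy review and an approved vulnerability scanner; a sketch like this only helps spot obviously exposed endpoints.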

Use this guidance to help secure Remote Desktop Services

Remote Desktop Services can be used for session-based virtualization, virtual desktop infrastructure (VDI), or a combination of these two services. Microsoft RDS can be used to help secure on-premises deployments, cloud deployments, and remote services from various Microsoft partners (e.g., Citrix). Leveraging RDS to connect to on-premises systems enhances security by reducing the exposure of systems directly to the internet. Further guidance on establishing Microsoft RDS can be found in our Remote Desktop Services documentation.

On-premises deployments may still have to consider performance and service accessibility depending on internet connectivity provided through the corporate internet connection, as well as the management and maintenance of systems that remain within the physical network.

Leverage Windows Virtual Desktop

Virtual desktop experiences can be enhanced using Windows Virtual Desktop, delivered on Azure. Establishing an environment in Azure simplifies management and offers the ability to scale the virtual desktop and application virtualization services through cloud computing. Leveraging Windows Virtual Desktop avoids the performance issues associated with on-premises network connections and takes advantage of built-in security and compliance capabilities provided by Azure.

To get more information about setting up, go to our Windows Virtual Desktop product page.

Microsoft documentation on Windows Virtual Desktop offers a tutorial and how-to guide on enabling your Azure tenant for Windows Virtual Desktop and connecting to the virtual desktop environment securely, once it is established.

Secure remote administrator access

Remote Desktop Services are being used not only by employees for remote access, but also by many system developers and administrators to manage cloud and on-premises systems and applications. Allowing administrative access of server and cloud systems directly through RDP elevates the risk because the accounts used for these purposes usually have higher levels of access across systems and environments, including system administrator access. Microsoft Azure helps system administrators to securely access systems using Network Security Groups and Azure Policies. Azure Security Center further enhances secure remote administration of cloud services by allowing “just in time” (JIT) access for administrators.

Attackers target management ports such as SSH and RDP. JIT access helps reduce attack exposure by locking down inbound traffic to Microsoft Azure VMs (Source: Microsoft).

Azure Security Center JIT access enhances security through the following measures:

  • Approval workflow.
  • Automatic removal of access.
  • Restriction on permitted internet IP address.

For more information, visit Azure Security Center JIT.

Evaluate the risk to your organization

Selection and implementation of a remote access solution should always take into account the security posture and risk appetite of your organization. Leveraging remote desktop services offers great flexibility by enabling remote workers to have an experience like that of working in the office, while offering some separation from threats on the endpoints (i.e., user devices, both managed and unmanaged by the organization). At the same time, those benefits should be weighed against the potential threats to the corporate infrastructure (network, systems, and thereby data). Regardless of the remote access implementation your organization uses, it is imperative that you implement best practices around protecting identities and minimizing attack surface to ensure new risks are not introduced.


Protecting your data and maintaining compliance in a remote work environment

April 6th, 2020

In this difficult time, remote work is becoming the new normal for many companies around the world. Employees are using tools like Microsoft Teams to collaborate, chat, and connect in new ways to try to keep their businesses moving forward amidst the challenging global health crisis. I sincerely hope you and your families are staying safe and healthy.

I have been talking with many of you about the impact today’s environment is having on your organizations. Business continuity is an imperative, and you must rely on your employees to stay connected and productive outside of the traditional digital borders of business. In doing so, identifying and managing potential risks within the organization is critical to safeguarding your data and intellectual property (IP), while supporting a positive company culture.

Because many of you have been asking, here is some guidance for things you can do to take advantage of these capabilities. I’ll focus a lot of the examples on Teams, but many of these features are relevant across Microsoft 365.

Staying secure and compliant

First, knowing where your data resides while employees are working remotely is vital, especially for your risk management-focused departments. Data in Teams is encrypted at rest and in transit, and Teams uses the Secure Real-time Transport Protocol (SRTP) for video, audio, and desktop sharing.

There are also several tools that help you remain in control and protect sensitive documents and data in Microsoft 365. For example, you can restrict Teams experiences for guests and people outside of your organization. You can also govern the apps to which each user has access.

In addition, we’ve made sure that the Teams service is compliant: to help you answer questions from your auditors, we publish auditor reports on the Service Trust Portal. And we help our customers keep up with evolving regulations and standards with a robust compliance controls framework, which meets some of the most rigorous industry and country-specific regulatory requirements.

Applying data loss prevention in Teams

Data loss prevention (DLP) addresses concerns around sensitive information in messages or documents. Setting up DLP policies in Teams can protect your data and take specific actions when sensitive information is shared. For example, suppose that someone attempts to share a document with guests in a Teams channel or chat, and the document contains sensitive information. If you have a DLP policy defined to prevent this, the document won’t open for those users. Note that in this case, your DLP policy must include SharePoint and OneDrive for the protection to be in place.

Applying sensitivity labeling to protect sensitive data

You can also apply a sensitivity label to important documents and associate it with protection policies and actions like encryption, visual marking, and access controls. You can be assured that the protection will persist with the document throughout its lifecycle, as it is shared among users who are internal or external to your organization.

You can start by allowing users to manually classify emails and documents by applying sensitivity labels based on their assessment of the content and their interpretation of the organizational guidelines. However, users may forget to apply labels or apply them inaccurately, especially in these stressful times, so you need a method that will scale to the vast amount of data you have.

To help you to achieve that scale, we are announcing the public preview of automatic classification with sensitivity labels for documents stored on SharePoint Online and OneDrive for Business, and for emails in transit in Exchange Online. The public preview will begin rolling out over the next week. Like with manual classification, you can now set up sensitivity labels to automatically apply to Office files (e.g., PowerPoint, Excel, Word, etc.) and emails based upon organizational policies. In addition to having users manually label files, you can configure auto classification policies in Microsoft 365 services like SharePoint Online, OneDrive, and Exchange Online. These policies can automatically label files at rest and in motion based on the rules you’ve set. Those classifications also apply when those documents are shared via Teams.

Minimize insider risk

We also know that stressful events contribute to the likelihood of insider risks, such as data leaks, IP theft, or harassment. Insider Risk Management looks at signals from across Microsoft 365, including Teams, to identify potentially suspicious activity early.

Communication Compliance, part of the new Insider Risk Management solution set in Microsoft 365, leverages machine learning to quickly identify and take action on code of conduct policy violations in company communications channels, including Teams. Communication Compliance reasons over language used in Teams that may indicate issues related to threats (harm to oneself or others). Detecting this type of language in a timely manner not only minimizes the impact of internal risk, but also can go a long way in supporting employee mental health in uncertain times like this.

Enabling simple retention policies

To comply with your organization’s internal policies, industry regulations, or legal needs, all your company information should be properly governed. That means ensuring that all required information is kept, while the data that’s considered a liability and that you’re no longer required to keep is deleted.

You can set up Teams retention policies for chat and channel messages, and you can apply a Teams retention policy to your entire organization or to specific users and teams. When data is subject to a retention policy, users can continue to work with it because the data is retained in place, in its original location. If a user edits or deletes data that’s subject to the retention policy, a copy is saved to a secure location where it’s retained while the policy is in effect.

All data is retained for compliance reasons and is available for eDiscovery until the retention period expires, after which your policy indicates whether to do nothing or delete the data. With a Teams retention policy, when you delete data, it’s permanently deleted from all storage locations on the Teams service.

Staying productive while minimizing risk

Working remotely helps your employees stay healthy, productive, and connected, and you can keep them productive without increasing risk or compromising compliance. For more guidance around supporting a remote work environment in today’s challenging climate, check out our Remote Work or Remote Work Tech Community sites.


Work remotely, stay secure—guidance for CISOs

March 12th, 2020

With many employees suddenly working from home, there are things an organization and employees can do to help remain productive without increasing cybersecurity risk.

While employees in this new remote work situation will be thinking about how to stay in touch with colleagues and coworkers using chat applications, shared documents, and conference calls in place of planned meetings, they may not be thinking about cyberattacks. CISOs and admins need to look urgently at new scenarios and new threat vectors as their organizations become distributed overnight, with less time to make detailed plans or run pilots.

Based on our experiences working with customers who have had to pivot to new working environments quickly, I want to share some of those best practices that help ensure the best protection.

What to do in the short—and longer—term

Enabling official chat tools helps employees know where to congregate for work. If you’re taking advantage of the six months of free premium Microsoft Teams or the removed limits on how many users can join a team or schedule video calls using the “freemium” version, follow these steps for supporting remote work with Teams. The Open for Business Hub lists tools from various vendors that are free to small businesses during the outbreak. Whichever software you pick, provision it to users with Azure Active Directory (Azure AD) and set up single sign-on, and you won’t have to worry about download links getting emailed around, which could lead to users falling for phishing emails.

You can secure access to cloud applications with Azure AD Conditional Access, protecting those sign-ins with security defaults. Remember to look at any policies you have set already, to make sure they don’t block access for users working from home. For secure collaboration with partners and suppliers, look at Azure AD B2B.

Azure AD Application Proxy publishes on-premises apps for remote availability, and if you use a managed gateway, today we support several partner solutions with secure hybrid access for Azure AD.

While many employees have work laptops they use at home, it’s likely organizations will see an increase in the use of personal devices accessing company data. Using Azure AD Conditional Access and Microsoft Intune app protection policies together helps manage and secure corporate data in approved apps on these personal devices, so employees can remain productive.

Intune automatically discovers new devices as users connect with them, prompting them to register the device and sign in with their company credentials. You could manage more device options, like turning on BitLocker or enforcing password length, without interfering with users’ personal data, like family photos; but be sensitive about these changes and make sure there’s a real risk you’re addressing rather than setting policies just because they’re available.

Read more in Tech Community on ways Azure AD can enable remote work.

You’ve heard me say it time and again when it comes to multi-factor authentication (MFA): 100 percent of your employees, 100 percent of the time. The single best thing you can do to improve security for employees working from home is to turn on MFA. If you don’t already have processes in place, treat this as an emergency pilot and make sure you have support folks ready to help employees who get stuck. As you probably can’t distribute hardware security devices, use Windows Hello biometrics and smartphone authentication apps like Microsoft Authenticator.

Longer term, I recommend security admins consider a program to find and label the most critical data with a tool like Azure Information Protection, so you can track and audit usage when employees work from home. We must not assume that all networks are secure, or that all employees are in fact working from home when working remotely.

Track your Microsoft Secure Score to see how remote working affects your compliance and risk surface. Use Microsoft Defender Advanced Threat Protection (ATP) to look for attackers masquerading as employees working from home, but be aware that access policies looking for changes in user routines may flag legitimate logons from home and coffee shops.

How to help employees

As more organizations adapt to remote work options, supporting employees will require more than just providing tools and enforcing policies. It will be a combination of tools, transparency, and timeliness.

Remote workers have access to data, information, and your network. This increases the temptation for bad actors. Warn your employees to expect more phishing attempts, including targeted spear phishing aimed at high profile credentials. Now is a good time to be diligent, so watch out for urgent requests that break company policy, use emotive language and have details that are slightly wrong—and provide guidance on where to report those suspicious messages.

Establishing a clear communications policy helps employees recognize official messages. For example, video is harder to spoof than email: an official channel like Microsoft Stream could reduce the chance of phishing while making people feel connected. Streaming videos they can view at a convenient time will also help employees juggling personal responsibilities, like school closures or travel schedule changes.

Transparency is key. Some of our most successful customers are also some of our most transparent ones. Employee trust is built on transparency. Providing clear and basic information, including how to protect their devices, will help you and your employees stay ahead of threats.

For example, help employees understand why downloading and using consumer or free VPNs is a bad idea. These connections can extract sensitive information from your network without employees realizing it. Instead, offer guidance on how to use your corporate VPN and explain how traffic is routed through a secure VPN connection.

Employees need a basic understanding of conditional access policies and what their devices need to connect to the corporate network, like up-to-date anti-malware protection. This way employees understand if their access is blocked and how to get the support they need.

Working from home doesn’t mean being isolated. Reassure employees they can be social, stay in touch with colleagues, and still help keep the business secure. Read more about staying productive while working remotely on the Microsoft 365 blog.


Data privacy is about more than compliance—it’s about being a good world citizen

January 28th, 2020

Happy Data Privacy Day! Begun in 2007 in the European Union (E.U.) and adopted by the U.S. in 2008, Data Privacy Day is an international effort to encourage better protection of data and respect for privacy. It’s a timely topic given the recent enactment of the California Consumer Privacy Act (CCPA). Citizens and governments have grown concerned about the amount of information that organizations collect, what they are doing with the data, and ever-increasing security breaches. And frankly, they’re right. It’s time to improve how organizations manage data and protect privacy.

Let’s look at some concrete steps you can take to begin that process in your organization. But first, a little context.

The data privacy landscape

Since Data Privacy Day commenced in 2007, the amount of data we collect has increased exponentially. In fact we generate “2.5 quintillion bytes of data per day!” Unfortunately, we’ve also seen a comparable increase in security incidents. There were 5,183 breaches reported in the first nine months of 2019, exposing a total of 7.9 billion records. According to the RiskBased Data Breach QuickView Report 2019 Q3, “Compared to the 2018 Q3 report, the total number of breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent.”

In response to these numbers, governments across the globe have passed or are debating privacy regulations. A few of the key milestones:

  • Between 1998 and 2000, the E.U. and the U.S. negotiated Safe Harbor, a set of privacy principles that governed how to protect data transferred across the Atlantic.
  • In 2015, the European Court of Justice overturned Safe Harbor.
  • In 2016, Privacy Shield replaced Safe Harbor and was approved by the courts.
  • In 2018, the General Data Protection Regulation (GDPR) took effect in the E.U.
  • On January 1, 2020, CCPA took effect for businesses that operate in California.

Last year, regulators levied 27 fines under GDPR totaling €428,545,407 (over $472 million USD). California will also levy fines for violations of CCPA. Compliance is clearly important if your business resides in a region, or employs persons in regions, protected by privacy regulation. But protecting privacy is also the right thing to do. Companies that stand on the side of protecting consumers’ data can differentiate themselves and earn customer loyalty.

Don’t build a data privacy program, build a data privacy culture

Before you get started, recognize that improving how your organization manages personal data means building a culture that respects privacy. Break down silos and engage people across the company: Legal, Marketing, SecOps, IT, senior managers, Human Resources, and others all play a part in protecting data.

Embrace the concept that privacy is a fundamental human right—Privacy is recognized as a human right in the U.N. Declaration of Human Rights and the International Covenant on Civil and Political Rights, among other treaties. It’s also built into the constitutions and governing documents of many countries. As you prepare your organization to comply with new privacy regulations, let this truth guide your program.

Understand the data you collect, where it is stored, how it is used, and how it is protected—This is vital if you’re affected by CCPA or GDPR, which require that you disclose to users what data you are collecting and how you are using it. You’re also required to provide data or remove it upon customer request. And I’m not just talking about the data that customers submit through a form. If you’re using a tool to track and collect online user behavior, that data counts too.

This process may uncover data you collect but never use. If so, revise your data collection policies so you gather only what you need and improve the quality of the data you keep.

Determine which regulations apply to your business—Companies that do business with customers within the E.U., or that employ E.U. citizens, are subject to GDPR. CCPA applies to companies that do business in California and meet at least one of the following requirements (a minimal applicability check is sketched after this list):

  • Gross annual revenue of more than $25 million;
  • More than 50 percent of annual income derived from the sale of California consumers’ personal information; or
  • Buying, selling, or sharing the personal information of more than 50,000 California consumers annually.
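As a rough illustration of how these thresholds combine, here is a minimal sketch in Python. The function, field names, and threshold constants are hypothetical and mirror only the criteria listed above, not statutory text, so treat it as a planning aid rather than legal guidance.

```python
from dataclasses import dataclass

# Illustrative thresholds taken from the criteria above (hypothetical
# constants; confirm current statutory values with counsel).
REVENUE_THRESHOLD_USD = 25_000_000
PI_SALE_INCOME_SHARE = 0.50
CONSUMER_RECORD_THRESHOLD = 50_000


@dataclass
class BusinessProfile:
    does_business_in_california: bool
    gross_annual_revenue_usd: float
    share_of_income_from_pi_sales: float  # 0.0 to 1.0
    california_consumers_processed_per_year: int


def ccpa_likely_applies(profile: BusinessProfile) -> bool:
    """True if the business meets the jurisdictional trigger and at least
    one of the three thresholds described in the list above."""
    if not profile.does_business_in_california:
        return False
    return (
        profile.gross_annual_revenue_usd > REVENUE_THRESHOLD_USD
        or profile.share_of_income_from_pi_sales > PI_SALE_INCOME_SHARE
        or profile.california_consumers_processed_per_year > CONSUMER_RECORD_THRESHOLD
    )


if __name__ == "__main__":
    retailer = BusinessProfile(
        does_business_in_california=True,
        gross_annual_revenue_usd=40_000_000,
        share_of_income_from_pi_sales=0.05,
        california_consumers_processed_per_year=12_000,
    )
    print(ccpa_likely_applies(retailer))  # True: the revenue threshold is met
```

Even a toy check like this makes the point that applicability is a business-level question, not just a legal one: finance, marketing, and data teams all hold pieces of the answer.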

Beyond California and the E.U., India is debating a privacy law, and Brazil’s regulations, Lei Geral de Proteção de Dados (LGPD), will go into effect in August 2020. There are also several privacy laws in Asia that may be relevant.

Hire, train, and connect people across your organization—To comply with privacy regulations, you’ll need processes and people in place to address these two requirements:

  1. Californians and E.U. citizens are guaranteed the right to know what personal information is being collected about them; to know whether their personal information is sold or disclosed and to whom; and to access their personal information.
  2. Under both regulations, organizations are accountable for responding to consumers’ personal information access requests within a defined timeframe (see the deadline-tracking sketch after this list).
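To make the timeframe requirement concrete, below is a minimal sketch of how a privacy team might track response deadlines for incoming access requests. The 30-day (GDPR) and 45-day (CCPA) windows are the commonly cited baselines, but both laws allow extensions in certain cases, so the class, field names, and constants here are illustrative assumptions only.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Commonly cited baseline response windows; both regulations permit
# extensions in some circumstances, so treat these as illustrative defaults.
RESPONSE_WINDOWS = {
    "GDPR": timedelta(days=30),
    "CCPA": timedelta(days=45),
}


@dataclass
class AccessRequest:
    request_id: str
    regulation: str  # "GDPR" or "CCPA"
    received_on: date

    @property
    def respond_by(self) -> date:
        # Deadline is the receipt date plus the applicable response window.
        return self.received_on + RESPONSE_WINDOWS[self.regulation]

    def is_overdue(self, today: date) -> bool:
        return today > self.respond_by


if __name__ == "__main__":
    req = AccessRequest("DSAR-0001", "GDPR", date(2020, 1, 2))
    print(req.respond_by)                     # 2020-02-01
    print(req.is_overdue(date(2020, 1, 28)))  # False
```

The people and process work matters more than the tooling, but a simple deadline model like this helps the cross-discipline team agree on who owns each request and when it must close.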

The GDPR requires many organizations, including public authorities and those that process personal data on a large scale, to appoint a Data Protection Officer to ensure compliance with the law. But to create an organization that respects privacy, go beyond compliance. New projects and initiatives should be designed with privacy in mind from the ground up. Marketing will need to include privacy in campaigns, and SecOps and IT will need to ensure proper security is in place to protect the data that is collected. Build a cross-discipline team with privacy responsibilities, and institute regular training so that your employees understand how important privacy is.

Be transparent about your data collection policies—Data regulations require that you make your data collection policies clear and provide users a way to opt out (CCPA) or opt in (GDPR). Your privacy page should let users know how data collection benefits them, how you will use their data, and to whom you sell it. California businesses that sell personal information will need to include a “Do not sell my personal information” call to action on the homepage.

A transparent privacy policy creates an opportunity for you to build trust with your customers. Prove that you support privacy as a human right and communicate your objectives in a clear and understandable way. Done well, this approach can differentiate you from your competitors.

Extend security risk management practices to your supply chain—Both the CCPA and the GDPR require that organizations put practices in place to protect customer data from malicious actors. You also must report breaches in a timely manner. If you’re found to be noncompliant, large fines can be levied.

As you implement tools and processes to protect your data, recognize that your supply chain also poses a risk. Hackers attack software updates, software frameworks, libraries, and firmware as a means of infiltrating otherwise vigilant organizations. As you strengthen your security posture to better protect customer data, be sure to understand your entire hardware and software supply chain. Refer to the National Institute of Standards and Technology for best practices. Microsoft guidelines for reducing your risk from open source may also be helpful.

Microsoft can help

Microsoft offers several tools and services to help you comply with regional and country level data privacy regulations, including CCPA and GDPR. Bookmark the Security blog and the Compliance and security series to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity and connect with me on LinkedIn.

The post Data privacy is about more than compliance—it’s about being a good world citizen appeared first on Microsoft Security.

How to balance compliance and security with limited resources

November 5th, 2019 No comments

Today, many organizations still struggle to adhere to General Data Protection Regulation (GDPR) mandates even though this landmark regulation took effect nearly two years ago. A key learning for some: being compliant does not always mean you are secure. Shifting privacy regulations, combined with limited budgets and talent shortages, add to today’s business complexities. I hear this concern time and again as I travel around the world, meeting with customers to share how Microsoft can help organizations work through these challenges successfully.

Most recently, I sat down with Emma Smith, Global Security Director at Vodafone Group, to talk about their own best practices when navigating the regulatory environment. Vodafone Group is a global company with mobile operations in 24 countries and partnerships that extend to 42 more. The company also operates fixed broadband services in 19 markets, with about 700 million customers. This global reach means they must protect a significant amount of data while adhering to multiple requirements.

Emma and her team have put a lot of time and effort into the strategies and tactics that keep Vodafone and its customers compliant no matter where they are in the world. They’ve learned a lot in this process, and she shared these learnings with me as we discussed the need for organizations to be both secure and compliant, in order to best serve our customers and maintain their trust. You can watch our conversation and hear more in our CISO Spotlight episode.

Cybersecurity enables privacy compliance

As you work to balance compliance with security, keep in mind that, as Emma said, “There is no privacy without security.” If you have separate teams for privacy and security, it’s important that they’re strategically aligned. People only use technology and services they trust, which is why privacy and security go hand in hand.

Vodafone did a security and privacy assessment across all their big data stores to understand where the high-risk data lives and how to protect it. They were then able to implement the same controls for privacy and security. It’s also important to recognize that you will never be immune from an attack, but you can reduce the damage.

Emma offered three recommendations for balancing security with privacy compliance:

  • Develop a risk framework so you can prioritize your efforts.
  • Communicate regularly with the board and executive team to align on risk appetite.
  • Establish the right security capabilities internally and/or through a mix of partners and third parties.

I couldn’t agree more, as these are also important building blocks for any organization as they work to become operationally resilient.

I also asked Emma for her top five steps for becoming compliant with privacy regulations:

  • Comply with international standards first, then address local rules.
  • Develop a clear, board-approved strategy.
  • Measure progress against your strategy.
  • Develop a prioritized program of work with clear outcomes.
  • Stay abreast of new technologies and new threats.

The simplest way to manage your risk is to minimize the amount of data that you store. Privacy assessments will help you know where the data is and how to protect it. Regional and local laws can provide tools to guide your standards. Protecting online privacy and personal data is a big responsibility, but with a risk management approach, you can go beyond the “letter of the law” to better safeguard data and support online privacy as a human right.

Learn more

Watch my conversation with Emma about balancing security with privacy compliance. To learn more about compliance and GDPR, read Microsoft Cloud safeguards individual privacy.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

CISO Spotlight Series

Address cybersecurity challenges head-on with 10-minute video episodes that discuss cybersecurity problems and solutions from AI to Zero Trust.


Watch today

The post How to balance compliance and security with limited resources appeared first on Microsoft Security.

Microsoft announces new innovations in security, compliance, and identity at Ignite

November 4th, 2019 No comments

Today, at the Microsoft Ignite Conference, we’re announcing new innovations designed to help customers across their security, compliance, and identity needs. With so much going on at Ignite this week, I want to highlight the top 10 announcements:

  1. Azure Sentinel—We’re introducing new connectors in Azure Sentinel to help security analysts collect data from a variety of sources, including Zscaler, Barracuda, and Citrix. In addition, we’re releasing new hunting queries and machine learning-based detections to assist analysts in prioritizing the most important events.
  2. Insider Risk Management in Microsoft 365—We’re announcing a new insider risk management solution in Microsoft 365 to help identify and remediate threats stemming from within an organization. Now in private preview, this new solution leverages the Microsoft Graph along with third-party signals, like HR systems, to identify hidden patterns that traditional methods would likely miss.
  3. Microsoft Authenticator—We’re making Microsoft Authenticator available to customers as part of the Azure Active Directory (Azure AD) free plan. Deploying Multi-Factor Authentication (MFA) reduces the risk of phishing and other identity-based attacks by 99.9 percent.
  4. New value in Azure AD—Previewing at the end of November, Azure AD Connect cloud provisioning is a new lightweight agent to move identities from disconnected Active Directory (AD) forests to the cloud. Additionally, we’re announcing secure hybrid access partnerships with F5 Networks, Zscaler, Citrix, and Akamai to simplify access to legacy-auth based applications. Lastly, we’re introducing a re-imagined MyApps portal to help make apps more discoverable for end-users.
  5. Microsoft Defender Advanced Threat Protection (ATP)—We’re extending our endpoint detection and response capability in Microsoft Defender ATP to include macOS, now in preview. We’re also planning to add support for Linux servers.
  6. Azure Security Center—We’re announcing new capabilities to find misconfigurations and threats for containers and SQL in IaaS while providing rich vulnerability assessment for virtual machines. Azure Security Center also provides integration with security alerts from partners and quick fixes for fast remediation.
  7. Microsoft information protection and governance—The compliance center in Microsoft 365 now provides the ability to view data classifications categorized by sensitive information types or associated with industry regulations. Machine learning also allows you to use your existing data to train classifiers that are unique to your organization, such as customer records, HR data, and contracts.
  8. Microsoft Compliance Score—Now in public preview, Microsoft Compliance Score helps simplify regulatory complexity and reduce risk. It maps your Microsoft 365 configuration settings to common regulations and standards, providing continuous monitoring and recommended actions to improve your compliance posture.  We’re also introducing a new assessment for the California Consumer Privacy Act (CCPA).
  9. Application Guard for Office—Now available in preview, Application Guard for Office provides hardware-level and container-based protection against potentially malicious Word, Excel, and PowerPoint files. It utilizes Microsoft Defender ATP to establish whether a document is either malicious or trusted.
  10. Azure Firewall Manager—Now in public preview, customers can manage multiple firewall instances from a single pane of glass with Azure Firewall Manager. We’re also adding support for new firewall deployment topologies.

It’s a big week of announcements! More information will follow this blog in the next few days, and we’ll update this post with new content as the week progresses.

Microsoft Ignite

Join us online November 4–8, 2019 to livestream keynotes, watch selected sessions on-demand, and more.


Learn more

You can see all of our Microsoft Ignite sessions (live streaming or on-demand) and connect with experts on the Microsoft Tech Community.

The post Microsoft announces new innovations in security, compliance, and identity at Ignite appeared first on Microsoft Security.

IoT security will set innovation free: Azure Sphere general availability scheduled for February 2020

October 28th, 2019 No comments

Today, at the IoT Solutions World Congress, we announced that Azure Sphere will be generally available in February of 2020. General availability will mark our readiness to fulfill our security promise at scale, and to put the power of Microsoft’s expertise to work for our customers every day—by providing more than a decade of ongoing security improvements and OS updates delivered directly to each device.

Since we first introduced Azure Sphere in 2018, the IoT landscape has quickly expanded. Today, there are more connected things than people in the world: 14.2 billion in 2019, according to Gartner, and this number is expected to hit 20 billion by 2020. Although this number appears large, we expect IoT adoption to accelerate to provide connectivity to hundreds of billions of devices. This massive growth will only increase the stakes for devices that are not secured.

Recent research by Bain & Co. lists security as the leading barrier to IoT adoption. In fact, enterprise customers would buy at least 70 percent more IoT devices if products addressed their concerns about cybersecurity. According to Bain & Co., enterprise executives, who innately understand the risk that connectivity exposes their brands and customers to, are willing to pay a 22 percent premium for secured devices.

Azure Sphere’s mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. We believe that for innovation to deliver durable value, it must be built on a foundation of security. Our customers need and expect reliable, consistent security that will set innovation free. To deliver on this, we’ve made several strategic investments and partnerships that make it possible to meet our customers wherever they are on their IoT journey.

Delivering silicon choice to enable heterogeneity at the edge

By partnering with silicon leaders, we can combine our expertise in security with their unique capabilities to best serve a diverse set of customer needs.

MediaTek’s MT3620, the first Azure Sphere certified chip produced, is designed to meet the needs of the more traditional MCU space, including Wi-Fi-enabled scenarios. Today, our customers across industries are adopting the MT3620 to design and produce everything from consumer appliances to retail and manufacturing equipment—these chips are also being used to power a series of guardian modules to securely connect and protect mission-critical equipment.

In June, we announced our collaboration with NXP to deliver a new Azure Sphere certified chip. This new chip will be an extension of their popular i.MX 8 high-performance applications processor series and be optimized for performance and power. This will bring greater compute capabilities to our line-up to support advanced workloads, including artificial intelligence (AI), graphics, and richer UI experiences.

Earlier this month, we announced our collaboration with Qualcomm to deliver the first cellular-enabled Azure Sphere chip. With ultra-low-power capabilities, this new chip will light up a broad new set of scenarios and give our customers the freedom to securely connect anytime, anywhere.

Streamlining prototyping and production with a diverse hardware ecosystem

Manufacturers are looking for ways to reduce cost, complexity, and time to market when designing new devices and equipment. Azure Sphere development kits from our partners at Seeed Studios and Avnet are designed to streamline the prototyping and planning when building Azure Sphere devices. When you’re ready to shift gears into production mode, there are a variety of modules by partners including AI-Link, USI, and Avnet to help you reduce costs and accelerate production so you can get to market faster.

Adding secured connectivity to existing mission-critical equipment

Many enterprises are looking to unlock new value from existing equipment through connectivity. Guardian modules are designed to help our customers quickly bring their existing investments online without taking on risk and jeopardizing mission-critical equipment. Guardian modules plug into existing physical interfaces on equipment, can be easily deployed with common technical skillsets, and require no device redesign. The deployment is fast, does not require equipment to be replaced before its end of life, and quickly pays for itself. The first guardian modules are available today from Avnet and AI-Link, with more expected soon.

Empowering developers with the right tools

Developers need tools that are as modern as the experiences they aspire to deliver. In September of 2018, we released our SDK preview for Visual Studio. Since then, we’ve continued to iterate rapidly, making it quicker and simpler to develop, deploy, and debug Azure Sphere apps. We also built out a set of samples and solutions on GitHub, providing easy building blocks for developers to get started. And, as we shared recently, we’ll soon have an SDK for Linux and support for Visual Studio Code. By empowering their developers, we help manufacturers bring innovation to market faster.

Creating a secure environment for running an RTOS or bare-metal code

As manufacturers transform MCU-powered devices by adding connectivity, they want to leverage existing code running on an RTOS or bare metal. Earlier this year, we provided a secured environment for this code by enabling the M4 core processors embedded in the MediaTek MT3620 chip. Code running on these real-time cores is programmed and debugged using Visual Studio. Using these tools, such code can easily be enhanced to send and receive data under the protection of a partner app running on the Azure Sphere OS, and it can be updated seamlessly in the field to add features or to address issues. Now, manufacturers can confidently secure and service their connected devices, while leveraging existing code for real-time processing operations.

Delivering customer success

Deep partnerships with early customers have helped us understand how IoT can be implemented to propel business, and the critical role security plays in protecting their bottom line, brand, and end users. Today, we’re working with hundreds of customers who are planning Azure Sphere deployments; here are a few highlights from retail, healthcare, and energy:

  • Starbucks—In-store equipment is the backbone of not just commerce, but their entire customer experience. To reduce disruptions and maintain a quality experience, Starbucks is partnering with Microsoft to deploy Azure Sphere across its existing mission-critical equipment in stores globally using guardian modules.
  • Gojo—Gojo Industries, the inventor of PURELL Hand Sanitizer, has been driving innovation to improve hygiene compliance in health organizations. Deploying motion detectors and connected PURELL dispensers in healthcare facilities made it possible to quantify hand-cleaning behavior and implement better practices. Now, PURELL SMARTLINK Technology is undergoing an upgrade with Azure Sphere to deploy secure and connected dispensers in hospitals.
  • Leoni—Leoni develops cable systems that are central components within critical application fields that manage energy and data for the automotive sector and other industries. To make cable systems safer, more reliable, and smarter, Leoni uses Azure Sphere with integrated sensors to actively monitor cable conditions, creating intelligent and connected cable systems.

Looking forward

We want to empower every organization on the planet to connect and create secure and trustworthy IoT devices. While Azure Sphere leverages deep and extensive Microsoft heritage that spans hardware, software, cloud, and security, IoT is our opportunity to prove we can deliver in a new space. Our work, our collaborations, and our partnerships are evidence of the commitment we’ve made to our customers—to give them the tools and confidence to transform the world with new experiences. As we close in on the milestone achievement of Azure Sphere general availability, we are already focused on how to give our customers greater opportunities to securely shape the future.

The post IoT security will set innovation free: Azure Sphere general availability scheduled for February 2020 appeared first on Microsoft Security.