Author Archive

Perspectives of a former CISO: Disrupted security in digitalization

July 2nd, 2018

My passion is connecting security to business objectives; it has been part of my work with many CISOs across industries as well as my own experience as a CISO. This blog series is a compilation of my learnings as a CISO, along with learnings from peers and customers who are actively working out how best to align security organizations with their business. This first blog covers why it is so critical for a security organization to shake off a pure compliance mindset and instead focus on aligning closely with the business through a clear risk-based approach.

It is not news that the world has changed over the last two decades through digital transformation, and the requirements for security have changed with it. Initially, security mainly focused on protecting the network and building virtual walls around the digital assets of a company. The fast evolution of mobile technology, globalization, and digitalization has disrupted standard assumptions for business. Businesses are transforming to adapt, and security needs to be in lock step or, better yet, lead this journey. The world is not what it used to be; it looks more like the graphic below:

Security must be closely aligned to the business it serves and must protect it against the criminal groups operating on the Internet. Crime went digital, evolving from vandalism to classical organized crime to nation states. The business, on the other hand, is being disrupted and must change at a speed never seen before. This is where security needs to be.

Security must enable the business transformation while keeping business risks acceptable. This is a non-negotiable truth, as security’s sole purpose is to protect the organization that employs it. This is more difficult than it sounds, because security started as a purely technical discipline with a common belief that success meant compliance with standards. Many organizations are on the journey of shifting this mindset to a risk-based approach and a deep alignment with their business counterparts. This is a major shift for the security organization, as it requires major cultural changes, different priorities, and changes to processes, habits, and technology. I have seen a lot of security people hiding behind their policies instead of helping the business to be successful. This does not solve any problems in today’s world.

Regardless of your industry, compliance does not bring security; good security brings compliance. Success in security is all about running a reasonable risk management and risk mitigation program, which is leveraged and often even driven by the business leaders, and which clears the way for the business to succeed in a frequently hostile environment.

Chief Security Officers must re-think what they do, re-think the way they look at the world, and constantly try to disrupt themselves. I recognize that this is something people in security are typically not good at, as most of us have been taught risk avoidance throughout our careers (sound familiar?).

Disruptive changes require going against this nature and taking risks where the outcome is uncertain. While this is uncomfortable, it is critically important for our future success.

Looking at it from a more outward view, the CSO has different constituencies to satisfy:

  • Top management: The top management wants to understand their key cyber risks, what to do about them, and whether they are investing the right amount in the right places. Key risks here means risks comparable to the other business risks they must deal with. CSOs need to keep this in mind: the CEO has a lot of business risks on his or her table, and the cyber risks have to be weighed against them. As a rule of thumb, we might speak of 5-8 risks where the CSO needs action and support from the CEO and the board.
  • Employees: Security needs to enable employees to run their business successfully and with acceptable risks. It is not about security or productivity; we talk of security AND productivity.
  • Customers/partners: Obviously, customers and partners have certain expectations about what a supplier does with their data and how it is protected. This is not only about compliance with data protection regulations, but about earning trust.
  • Regulators: Regulators are heavily challenged by today’s situation. Rules that were valid a few years ago no longer apply. New definitions of sovereignty need to be developed. Modern technologies suddenly change the rules of the game as it was known. Most regulators need help, and they are willing to listen to the industry if the discussion happens with mutual respect.
  • Security community: The security community is often ignored by companies, which can lead to rather dramatic security challenges. Think about what happens if somebody finds a vulnerability in an infrastructure and wants to responsibly disclose it to the security organization. How do they find the right people and process? How are they dealt with?

Security needs to be re-thought, and certain base assumptions need to be disrupted. Progressing digitalization, as well as emerging technologies, will challenge these assumptions again and again, and security functions will constantly be forced to look for new and creative ways to support the business. Our stakeholders are moving fast, and so must we. We need to adopt more of a DevOps approach and keep pace with the fast-moving criminal landscape, the fast-moving technology, and the fast-moving business.


Categories: Uncategorized Tags:

Driving data security is a shared responsibility, here’s how you can protect yourself

June 19th, 2018

You’re driving a long, dark road on a rainy night. If you’re driving 20 miles over the speed limit and you don’t step on the brakes when the car in front of you comes to a sudden stop, is it your fault or your car manufacturer’s fault if you rear-end that car?

When we drive, we seamlessly understand that there are some things we depend on the manufacturer to provide (brakes that work, airbags that deploy) and some things we’re responsible for (using the brakes when needed, not turning off the airbag protection).

This is the concept of shared responsibility, and it was a core topic at this year’s Cybersecurity Law Institute panel, “Vendors and Cloud-Based Solutions: How Can All Stakeholders Protect Themselves?”

When it comes to cloud computing and data protection, it is a shared responsibility between the cloud service provider (CSP) and the customer that is analogous to the relationship between the car owner and car manufacturer.

While the fundamentals of shared responsibility between drivers and car manufacturers seem relatively straightforward, it’s not always as clear-cut when analyzing the responsibilities between customers and CSPs for protecting cloud data.

The cloud, as a relatively new architectural model for many organizations, is unique because there are multiple service models that shift responsibilities between customers and CSPs. For example, customers can only configure the application-layer software in Software as a Service (SaaS) applications. But moving down the stack to Infrastructure as a Service (IaaS), customers have the responsibility for configuring and managing the servers they’ve stood up in the cloud.
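To make the shifting split concrete, the stack can be modeled as a simple lookup table. This is an illustrative sketch only: the layer names and ownership assignments below are simplified assumptions for demonstration, not an official responsibility matrix from any CSP.

```python
# Illustrative sketch of how responsibility shifts across cloud service models.
# Layer names and ownership values are simplified assumptions, not an official matrix.
RESPONSIBILITY = {
    #  layer                 on-prem      IaaS         PaaS         SaaS
    "data_classification": ("customer", "customer", "customer", "customer"),
    "client_endpoints":    ("customer", "customer", "customer", "customer"),
    "identity_access":     ("customer", "customer", "shared",   "shared"),
    "application":         ("customer", "customer", "shared",   "provider"),
    "operating_system":    ("customer", "customer", "provider", "provider"),
    "network_controls":    ("customer", "shared",   "provider", "provider"),
    "physical_hosts":      ("customer", "provider", "provider", "provider"),
}

MODELS = ("on-prem", "iaas", "paas", "saas")

def owner(layer: str, model: str) -> str:
    """Return who is responsible for a given layer under a given service model."""
    return RESPONSIBILITY[layer][MODELS.index(model)]
```

A customer reviewing a contract could diff a table like this against what its CSP actually commits to, layer by layer.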

While on the Georgetown Law Institute panel in D.C., I explained how Microsoft views the shared responsibility model as a working partnership with customers to ensure they are clear on what we provide and what their responsibilities are across the stack. To be sure, there are some perceptible shifts in responsibility, which is illustrated in the graphic below.

The left-most column shows seven responsibilities that customers should consider when using different cloud service models. The model shows how customers are responsible for ensuring that data and its classification are done correctly and that the solution complies with regulatory obligations. Physical security falls to the CSP, and the rest of the responsibilities are shared. Note that this is a general rule of thumb, and every customer should talk to its CSP to ensure the responsibilities are clearly outlined and meet the organization’s needs.

Once a customer has a solid handle on what the CSP is providing, consider the three tips below for managing the shared responsibilities, which could include things like network controls, host infrastructure, endpoint protection, application-level controls, and access management.

Consult the STARs

The CSA STAR registry consists of three levels of assurance, which cover four unique offerings based on a comprehensive list of cloud control objectives. Here customers can see what controls a provider has attested to. STAR also helps customers assess different providers using a harmonized model. It’s also important to ask the CSP if it has completed a SOC 2 Type 2 report. This assessment is based on a mature attest standard and ensures that evaluation takes place over time rather than at a single point in time, among other helpful properties.

(Really!) Read the contracts

Yes, it’s tempting to skip over the long legalese, but the nuances of a contract between a customer and CSP can go a long way in helping each side understand its shared responsibilities. For example, the contract may allow for certain levels of transparency between the two, such as allowing the customer to see an audit or compliance report. However, remember that seeing an overview isn’t the same as being able to read every page of the report; a customer should know what level of transparency they’re getting. Customers should also be certain there are clear roles and escalation paths that make sense, so if something goes wrong or a decision needs to be made about shutting off a service or reporting a breach, it can be done without hesitation. And don’t forget to engage your own counsel during contract review; no one understands legalese as well as a lawyer.

Follow the guides

To help organizations understand ways to protect their data in the cloud, Microsoft has blueprint guides for use cases like FFIEC and HIPAA regulations. We also have tools to help companies manage and improve their cloud controls, including Compliance Manager and Secure Score. Compliance Manager enables organizations to manage their compliance activities from one place. Secure Score is an assessment tool designed to make it easier for organizations to understand their security position relative to other organizations while also providing advice on what controls they should consider enabling.

Microsoft takes its side of the shared responsibility model seriously and is continually looking for ways to help customers identify weaknesses and put action plans in place to shore them up. Not unlike how car manufacturers continually iterate to make cars safer, safety enhancements are meant to lessen the burden of driver responsibilities, not remove them entirely. When it comes to protecting data, if you keep your eyes on your data road, we’ll make sure the brakes are working.

For more information on shared responsibilities for cloud computing read this comprehensive white paper.


Cybersecurity Reference Architecture: Security for a Hybrid Enterprise

June 6th, 2018

The Microsoft Cybersecurity Reference Architecture describes Microsoft’s cybersecurity capabilities and how they integrate with existing security architectures and capabilities. We recently updated this diagram and wanted to share a little about the changes and the document itself to help you better utilize it.

How to use it

We have seen this document used for several purposes by our customers and internal teams (beyond a geeky wall decoration to shock and impress your cubicle neighbors).

  • Starting template for a security architecture – The most common use case we see is that organizations use the document to help define a target state for their cybersecurity capabilities. Organizations find this architecture useful because it covers capabilities across the modern enterprise estate that now spans on-premises, mobile devices, many clouds, and IoT / Operational Technology.
  • Comparison reference for security capabilities – We know of several organizations that have marked up a printed copy with what capabilities they already own from various Microsoft license suites (many customers don’t know they own quite a bit of this technology), which ones they already have in place (from Microsoft or partner/3rd party), and which ones are new and could fill a need.
  • Learn about Microsoft capabilities – In presentation mode, each capability has a “ScreenTip” with a short description of each capability + a link to documentation on that capability to learn more.

  • Learn about Microsoft’s integration investments – The architecture includes visuals of key integration points with partner capabilities (e.g. SIEM/Log integration, Security Appliances in Azure, DLP integration, and more) and among our own product capabilities (e.g. Advanced Threat Protection, Conditional Access, and more).
  • Learn about cybersecurity – We have also heard reports of folks new to cybersecurity using this as a learning tool as they prepare for their first career or a career change.

As you can see, Microsoft has been investing heavily in security for many years to secure our products and services as well as to provide the capabilities our customers need to secure their assets. In many ways, this diagram reflects Microsoft’s massive ongoing investment in cybersecurity research and development, currently over $1 billion annually (not including acquisitions).

What has changed in the reference architecture and why

We made quite a few changes in v2 and wanted to share a few highlights on what’s changed as well as the underlying philosophy of how this document was built.

  • New visual style – The most obvious change for those familiar with the first version is the simplified visual style. While some may miss the “visual assault on the senses” effect from the bold colors in v1, we think this format works better for most people.
  • Interactivity instructions – Many people did not notice that each capability on the architecture has a quick description and link to more information, so we added instructions to call that out (and updated the descriptions themselves).
  • Complementary content – Microsoft has invested in creating cybersecurity reference strategies (success criteria, recommended approaches, how our technology maps to them) as well as prescriptive guidance for addressing top customer challenges like Petya/WannaCrypt, Securing Privileged Access, and Securing Office 365. This content is now easier to find with links at the top of the document.
  • Added section headers for each grouping of technology areas to make it easier to navigate, understand, and discuss as a focus area.
  • Added foundational elements – We added descriptions of some core foundational capabilities, now shown at the bottom, that are deeply integrated into how we secure our cloud services and build our cybersecurity capabilities. These include:

    • Trust Center – This is where we describe how we secure our cloud; it includes links to various compliance documents such as 3rd party auditor reports.
    • Compliance Manager is a powerful (new) capability to help you report on your compliance status for Azure, Office 365, and Dynamics 365 for General Data Protection Regulation (GDPR), NIST 800-53 and 800-171, ISO 27001 and 27018, and others.
    • Intelligent Security Graph is Microsoft’s threat intelligence system that we use to protect our cloud, our IT environment, and our customers. The graph is composed of trillions of signals, advanced analytics, and teams of experts hunting for malicious activities, and it is integrated into our threat detection and response capabilities.
    • Security Development Lifecycle (SDL) is foundational to how we develop software at Microsoft and has been published to help you secure your applications. Because of our early and deep commitment to secure development, we were able to quickly conform to ISO 27034 after it was released.

  • Moved Devices/Clients together – As device form factors and operating systems continue to expand and evolve, we are seeing security organizations view devices through the lens of trustworthiness/integrity vs. any other attribute.

    • We reorganized the Windows 10 and Windows Defender ATP capabilities around outcomes vs. feature names for clarity.
    • We also reorganized Windows security icons and text to reflect that Windows Defender ATP describes all the platform capabilities working together to prevent, detect, and (automatically) respond to and recover from attacks. We added icons to show the cross-platform support for Endpoint Detection and Response (EDR) capabilities that now extend across Windows 10, Windows 7/8.1, Windows Server, Mac OS, Linux, iOS, and Android platforms.
    • We faded the intranet border around these devices because of the ongoing success of phishing, watering hole, and other techniques that have weakened the network boundary.

  • Updated SOC section – We moved several capabilities from their previous locations around the architecture into the Security Operations Center (SOC) as this is where they are primarily used. This move enabled us to show a clearer vision of a modern SOC that can monitor and protect the hybrid of everything estate. We also added the Graph Security API (in public preview) as this API is designed to help you integrate existing SOC components and Microsoft capabilities.
  • Simplified server/datacenter view – We simplified the datacenter section to recover the space taken up by duplicate server icons. We retained the visual of extranets and intranets spanning on-premises datacenters and multiple cloud providers. Organizations see Infrastructure as a Service (IaaS) cloud providers as another datacenter for the intranet generation of applications, though they find Azure much easier to manage and secure than physical datacenters. We also added the Azure Stack capability, which allows customers to securely operate Azure services in their datacenters.
  • New IoT/OT section – IoT is on the rise in many enterprises due to digital transformation initiatives. While the attacks and defenses for this area are still evolving quickly, Microsoft continues to invest deeply to provide security for existing and new deployments of Internet of Things (IoT) and Operational Technology (OT). Microsoft has announced $5 billion of investment in IoT over the next four years and has also recently announced an end-to-end certification for a secure IoT platform, from MCU to the cloud, called Azure Sphere.
  • Updated Azure Security Center – Azure Security Center grew to protect Windows and Linux operating systems across Azure, on-premises datacenters, and other IaaS providers. Security Center has also added powerful new features like Just in Time access to VMs and applied machine learning to create application whitelisting rules and North-South Network Security Group (NSG) network rules.
  • Added Azure capabilities including Azure Policy, Confidential Computing, and the new DDoS protection options.
  • Added Azure AD B2B and B2C – Many Security departments have found these capabilities useful in reducing risk by moving partner and customer accounts out of enterprise identity systems to leverage existing enterprise and consumer identity providers.
  • Added information protection capabilities for Office 365 as well as SQL Information Protection (preview).
  • Updated integration points – Microsoft invests heavily to integrate our capabilities together as well as to ensure our technology works with your existing security capabilities. This is a quick summary of some key integration points depicted in the reference architecture:

    • Conditional Access connecting info protection and threat protection with identity to ensure that authentications are coming from a secure/compliant device before accessing sensitive data.
    • Advanced Threat Protection integration across our SOC capabilities to streamline detection and response processes across Devices, Office 365, Azure, SaaS applications, and on-premises Active Directory.
    • Azure Information Protection discovering and protecting data on SaaS applications via Cloud App Security.
    • Data Loss Protection (DLP) integration with Cloud App Security to leverage existing DLP engines and with Azure Information Protection to consume labels on sensitive data.
    • Alert and Log Integration across Microsoft capabilities to help integrate with existing Security Information and Event Management (SIEM) solution investments.
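As a concrete illustration of the Conditional Access integration point listed above, the core gate can be sketched in a few lines. The signal names and policy below are illustrative assumptions for demonstration, not the actual product logic:

```python
def allow_access(device_compliant: bool, user_risk: str, resource_sensitive: bool) -> bool:
    """Illustrative conditional-access gate: sensitive resources require a
    compliant device and low assessed user risk; non-sensitive resources
    only require that the user is not assessed as high-risk."""
    if resource_sensitive:
        return device_compliant and user_risk == "low"
    return user_risk != "high"
```

A real policy engine evaluates many more signals (location, sign-in risk, client app), but the shape is the same: authentication context in, allow-or-deny decision out before sensitive data is reached.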


We are always trying to improve everything we do at Microsoft and we need your feedback to do it! You can contact the primary author (Mark Simos) directly on LinkedIn with any feedback on how to improve it or how you use it, how it helps you, or any other thoughts you have.



From the ground up to the cloud: Microsoft’s Intelligent Security supporting CISOs’ cloud transformation

May 30th, 2018

It’s no secret that Microsoft has embraced the cloud in a big way, from enterprise solutions like Microsoft Azure to Office 365 and Windows. But a recent research report by Forrester focuses on an equally important shift in our approach to security: integrating workforce and cloud security in ways that make them much easier for enterprise IT leaders to purchase and manage.

As Dark Reading’s Kelly Sheridan points out, Microsoft is focusing on bringing protections to where people are moving their work: into the cloud. “Microsoft, like other cloud providers, is stomping into the security market, ready to shake things up and address the weaknesses they see in today’s tools,” she adds.

Sheridan cites the Forrester research that includes a focus on Microsoft’s plan to build security into each part of Azure, Office 365, and Windows, a strategy which, the researchers say, will be as disruptive to the security space as the cloud has been for the enterprise.

A shifting security challenge

Our emphasis on integration through and into the cloud mirrors the shifts CISOs have made as their own enterprises have embraced the cloud, requiring them to coordinate cloud and on-premises security solutions.

As summarized succinctly in Sheridan’s Dark Reading post, Forrester’s research highlights Microsoft’s strengths in telemetry and artificial intelligence, which yield unparalleled insights into how attackers interact with not only our products, but also other applications that run on Windows and other Microsoft platforms.

The report also cites Microsoft’s efforts to target the enterprise market by making security easy to buy and use, Sheridan writes. Microsoft, she adds, bundles technologies and simplifies deployment for security teams, which can use preconfigured security policies for new servers and containers. From C-level execs to SecOps, our customers tell us they are overwhelmed by the rapid pace at which new cyber threats are released into the wild. Microsoft believes the industry needs to shift to this next generation of security defense.

Scale and integration

The Forrester report also notes that embedded solutions address one of the biggest challenges cloud-focused CISOs face: scalability.

“Scalability isn’t an issue,” Sheridan writes. As infrastructure and applications grow, so do cloud platforms. Teams don’t need to worry about whether hardware can handle bandwidth upgrades or whether management servers can handle new endpoints.

And we continue to expand the scope of our security offerings. At April’s RSA Conference 2018, Microsoft made a series of announcements that deepen our commitment to end-to-end security. Azure Sphere brings our security efforts to the connected microcontroller units (MCUs) that make up the Internet of Things (IoT), while a broad suite of technologies that together we call Microsoft 365 Intelligent Security emphasizes our commitment to integration. Intelligent Security offerings announced at RSA include a new API for Microsoft Graph that provides integrated data and alert reporting across security products, our Secure Score and Attack Simulator to help companies assess their security profiles, and additional support for strong authentication and threat prevention capabilities.

Disruption and drive

These shifts, according to Forrester, represent a significant disruption in the security marketplace. For CISOs, they also provide a strong argument for new thinking about security products in the enterprise, plus significant cost savings from moving to a cloud with security built in from the ground up. “Gone are the days when security leaders opted for separate antivirus tools in lieu of Windows Defender,” Sheridan writes. Now many question the business case for buying an endpoint suite when Microsoft’s services have security built in.

At the same time, Forrester’s researchers caution against going all in with any single vendor, and we agree. Cybersecurity is a broad and complex space, and no one vendor can do it all. That’s why we’re working with microcontroller unit manufacturers on our IoT solutions, participating in a cybersecurity technology agreement with nearly 35 companies across the industry, and acquiring best-of-breed technology from innovative companies like Adallom and Aorato to bolster our capabilities in areas such as cloud security and malware detection. And along with the millions of threat indicators identified by the Intelligent Security Graph API, we work with a wide range of organizations to gather and share threat intelligence in real time. Together, these moves represent our commitment to work with all partners to secure the enterprise, a simple but powerful idea that, in the security space, may ultimately become the most disruptive force of all.


Here’s to Homeland Security, black swans, and thwarted cyberattacks

May 9th, 2018

Last week, I had the honor of addressing The Homeland Security Training Institute (HSTI) at the College of DuPage as part of the HSTI Live educational series. The event featured other prominent speakers at the forefront of cybersecurity defense, including:

Dave Tyson, CEO of CISO Insights and a global cybersecurity consultant, broke down complex cybersecurity issues, making them relatable to the audience, a skill he’s also honed through his other business ventures. Nicole Darden Ford, Vice President and Chief Information Security Officer of Baxter Healthcare, shared her firsthand experiences with the challenges and successes of a modern CISO in the healthcare industry. Nicole has global responsibility for information security as well as information technology quality compliance and information governance.

I presented findings from the most recent Microsoft Security Intelligence Report (v23), diving into themes and specifics behind old and new malware propagated through massive botnets, phishing, and ransomware attacks. And, importantly, I provided advice and guidance on steps organizations can take to help protect themselves and their critical assets.

It was a great set of talks that spawned a lot of interesting dialogue. After the event, I was stopped by someone who asked me why our cyber defenses aren’t sophisticated enough to stop all cyberattacks before they penetrate our systems. It’s a fair question, especially when you consider the substantial annual investment organizations make in hardware, software, and human capital. For example, it’s not uncommon for regulated and larger businesses to have teams dedicated to 24/7/365 surveillance and monitoring of their systems. Yet the bad guys still get in, plant malware, compromise proprietary information, and reveal sensitive customer data.

As I thought more deeply about the question of why we can’t stop all attacks, I was reminded of Nassim Nicholas Taleb’s seminal book The Black Swan: The Impact of the Highly Improbable. Taleb dives into how some negative events, no matter how improbable, can cause massive consequences. This holds within cybersecurity as well, as demonstrated by WannaCry, for example. The attack cost organizations across the globe billions of dollars and made headlines for weeks! Yet far fewer people have heard of Bad Rabbit, largely because it was identified and stopped by Windows Defender Antivirus in 14 minutes, before it caused widespread damage. Catching new malware isn’t easy, but using layered machine learning from device to cloud, and sharing that learning across systems rapidly, is helping to find and catch new malware more quickly. With Bad Rabbit, after the first device encounter, the cloud protection service used detonation-based malware classification to block the file and protect subsequent users who downloaded the dangerous file.

Another example of rapid, intelligent response spoiling a massive attack comes from March of this year. The malware, named Dofoil, was a cryptocurrency miner that exhibited advanced cross-process injection techniques, persistence mechanisms, and evasion methods. Windows Defender AV picked up on behavior-based signals to identify the infection attempts and blocked more than 80,000 instances of the attack within milliseconds.

What is often overlooked or unseen in all the headlines is that most of our cyber defenses are deeply effective, especially when you consider the sheer number of attacks enterprises face every day. It’s easy to lose sight of this when a devastating attack occurs and controls the news narrative. Microsoft threat data shared by partners, researchers, and law enforcement worldwide gives a clearer picture of the massive scale of data we’re regularly protecting. In a month, using the Intelligent Security Graph, we’re analyzing 400 billion emails, scanning 1.2 billion devices and 18 billion Bing web pages, and detecting 5 billion threats. Again, I’m not suggesting we catch everything malicious as billions of pieces of data and hardware are scanned. We don’t. Some malware inevitably gets through our protective layers. But when you consider the scale of attacks, and the prominence of digital products and tools in enterprises, it’s important to remember that we as an industry of cybersecurity professionals very often get it right. Users all over the world are accustomed to switching on their devices and safely opening hundreds of emails a day, seeing the correct balance in their mobile banking app, and trusting their GPS to accurately guide them from point A to point B. Our digital lives are deeply intertwined with our personal and work lives, and more often than not, they coexist in harmony.

In sum, it’s true: the cybersecurity industry cannot claim the ability to stop all cyberattacks. But let’s not overlook all of the attacks that are detected and prevented every day. The hardworking cybersecurity professionals, the same ones I shared the stage with at The Homeland Security Training Institute at the College of DuPage, are advancing our capabilities to thwart cybercrime every day. Yes, we’ve got work to do, and this is an ongoing battle, but the wins and ongoing work deserve to be recognized too.


Overwhelmed by overchoice at RSA Conference 2018

April 25th, 2018

As over 500 companies vied for mindshare at this year’s RSA Conference – a cacophony of vendors pitching thousands of products from brightly colored booths – it reminded me of how challenging it was to separate signal from noise when I was managing global networks. And the rapid growth of vendors and solutions in the past few years makes me wonder how overwhelming the choice must seem for CISOs today.

This challenge extends well beyond the show floor of RSA. Security Operations Center (SOC) analysts parse through thousands or even millions of alerts per day, working as quickly as possible to investigate them and determine which ones represent real threats. Enterprises need tools that can help them identify and contain threats quickly, but the SOC analyst's dilemma of too many alerts is echoed on the show floor. There are simply too many vendor and solution choices to pick from. This phenomenon, known as overchoice, leads to paralysis, obstructing our ability to make confident choices and seek timely guidance. Psychologists have long studied this construct and found that, along with paralysis, the presence of too many options can even push people into decisions that work against their best interests.

As more than 50,000 RSA attendees worked their way across the conference center floor, I watched as they encountered an endless array of ever-changing acronyms, software, and hardware to address problems they probably didn't even know they had. In the quest to create and name the next generation of innovative solutions, new categories and acronyms abound, from SIM to SEM to SOAR, and AV to EPP to EDR. Unfortunately, these new solutions can come so fast that the features blur into buzzword bingo for attendees. With IoT and the intelligent edge, there are new security scenarios for enterprises to solve for. With that come new categories of security, and new offerings flood the market. Enterprise professionals are left fighting an uphill battle across a foggy landscape.

There is a way to address all this complexity. It starts with you and your enterprise. As the person who knows your enterprise best, you are positioned to drive the decision-making process based on real-world scenarios and everyday learnings.

Vendors often try to identify problems, solve them, and hope someone needs the solutions. But every enterprise is unique, and not all threats are prioritized evenly across the board. If CISOs can assess enterprise-wide learnings and lean on the vendors to interpret and understand real-world issues, a more coherent strategy and product should emerge.

Of course, it's not always easy for enterprise CISOs to understand and prioritize their needs. If this is the case in your enterprise, third-party consultants can help assess your current security posture and forge an action plan for optimization. Once a plan is created, the buyer should drive the process and avoid unnecessary distractions that lead to evaluating dozens of options and trying to understand where the puzzle pieces fit together. Once needs are defined and prioritized, CISOs can also lean on vendors to help interpret and address them.

To better facilitate this approach, first ask, "What is the business problem I'm trying to solve?" For example: retail organizations may want to enhance their online store with customer intelligence to provide a better customer experience. What privacy and security measures will be required to do this? Will there be compliance requirements? If general themes emerge rather than more nuanced security gaps, CISOs can use a known framework, like the NIST Cybersecurity Framework. It's a useful tool for managing cybersecurity outcomes, and it covers all the verticals of cybersecurity, making it easier to adopt and join with other frameworks you might also need to incorporate in your security program.
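To make the framework mapping concrete, here is a minimal sketch of how a team might tag business problems during requirements gathering and roll them up to the five NIST CSF functions. The problem statements, tags, and tag-to-function mapping are hypothetical examples, not part of the framework itself:

```python
# Illustrative sketch: rolling tagged business problems up to NIST CSF functions.
# The tag vocabulary and mappings below are invented for illustration.

NIST_CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

TAG_TO_FUNCTION = {
    "asset-inventory": "Identify",
    "customer-data": "Protect",
    "compliance": "Protect",
    "alerting": "Detect",
    "incident-handling": "Respond",
    "backup": "Recover",
}

def map_problem_to_csf(problem: dict) -> list:
    """Return the CSF functions implicated by a tagged business problem."""
    functions = {TAG_TO_FUNCTION[t] for t in problem["tags"] if t in TAG_TO_FUNCTION}
    # Preserve the canonical CSF ordering for readability.
    return [f for f in NIST_CSF_FUNCTIONS if f in functions]

# Example: the retail scenario above -- customer intelligence in an online store.
retail_problem = {
    "name": "Add customer intelligence to online store",
    "tags": ["customer-data", "compliance", "alerting"],
}
print(map_problem_to_csf(retail_problem))  # -> ['Protect', 'Detect']
```

A roll-up like this makes it obvious which CSF functions your backlog actually touches, and which ones no business problem is exercising at all.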

Once you have a solid grasp of the enterprise security requirements, start to look for solutions that specifically meet those needs. As you research solutions to the business problems you have identified, you'll bump into those pervasive acronyms again. Don't get sucked in – resist the urge to address every potential problem vendors claim to solve. Focus on the vendors whose solutions specifically address your enterprise's problems and meet your requirements. Ask your peers for their own firsthand experience. Ask them which solutions have or haven't worked for them. You can even ask vendors for references to speak with.

Once promising vendor solutions emerge, confirm that the solution will solve your enterprise's problem. Get proof that it will – which doesn't necessarily mean knowing every mathematical detail of the algorithms in a solution's ML engine or reviewing each line of code. But it does mean seeing the solution in action. Demo and test-drive it, preferably in your own environment. This approach is about the buyer driving the process and staying engaged. Like most things related to our safety and security, the more engagement, the better the outcome.

These are active times in cybersecurity. The great news is that a lot of innovative, smart, and motivated companies are working hard to build intelligent solutions to thwart cyberattacks. But we're all at risk of paralysis from overchoice. Stay on target by focusing on your business problems and needs, and demand that vendors cut through the buzz to focus on proving they can deliver results. See what Microsoft presented and our latest security innovations at the RSA Conference.

Categories: RSA Conference Tags:

Tapping the intelligent cloud to make security better and easier

April 16th, 2018 No comments

There has been a distinct shift in my conversations with customers over the last year. Most have gone from asking, "Can we still keep our assets secure as we adopt cloud services?" to declaring, "We are adopting cloud services in order to improve our security posture." The driving factor is generally a realization that a cloud services provider can invest more in security, do the job better, and just make life simpler for overburdened enterprise IT and SecOps teams. This idea of making sound security practices easier to implement is a big part of our strategy. Today we're announcing several new technologies and programs that build on our unique cloud and intelligence capabilities to make it easier for enterprises to secure their assets from the cloud to the edge.

The first step in protecting people and data from today's dynamic threat landscape is accepting reality. It's time for us as an industry to recognize that the cloud holds so much promise for helping us solve security problems that we should consider the use of cloud-based intelligence a security imperative, not just for workloads deployed in the cloud, but also for improving the security of endpoints.

We recently released the 23rd edition of our Security Intelligence Report. The trends it uncovers help us see why the cloud is becoming a security imperative. Threats are increasingly automated and destructive. No single organization can amass the resources and intelligence to defend against these fast-moving threats. We have to tap into the power of the cloud, and of artificial intelligence, to muster the defenses required.

One of the most powerful examples of cloud-based artificial intelligence accelerating Microsoft's own security innovation is the Microsoft Intelligent Security Graph. The Intelligent Security Graph uses advanced analytics to link threat intelligence and security signals from Microsoft and partners, and it continues to grow in the variety and volume of signal. For example, we see the threat landscape through the lens of the 18 billion web pages that Bing scans, the 400 billion emails that are analyzed for spam and malware, and the 5 billion distinct malicious threats that Windows Defender ATP protects our customers against each month.

Artificial intelligence gets better as it is trained with more signal from more diverse sources. Today, we are announcing the preview of a new unified security API in the Microsoft Graph, which allows our technology partners to easily integrate with Microsoft solutions and tap into the power of the Intelligent Security Graph.
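For a sense of what "integrating with the security API" looks like in practice, here is a hedged sketch that builds a query against the Microsoft Graph security alerts endpoint. The endpoint path and OData filter syntax follow the public Graph conventions, but the API version prefix and filter values here are assumptions for illustration; an actual call would also require an OAuth bearer token:

```python
# Hedged sketch: constructing a query for the Microsoft Graph security
# alerts endpoint. The version segment ("v1.0") and filter values are
# illustrative assumptions; consult the Graph documentation for specifics.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_alerts_request(severity: str, top: int = 10) -> str:
    """Build a URL asking the unified security API for recent alerts
    of a given severity across integrated providers."""
    params = {
        "$filter": f"severity eq '{severity}'",
        "$top": str(top),
    }
    return f"{GRAPH_BASE}/security/alerts?{urlencode(params)}"

url = build_alerts_request("high")
print(url)
# An actual request would attach a token, e.g. (using the requests library):
#   requests.get(url, headers={"Authorization": f"Bearer {token}"})
```

The point of a unified endpoint like this is that one query shape returns alerts from many providers, rather than one bespoke integration per product.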

The Intelligent Security Graph comes to life through our platform investments, where it connects our security solutions to improve protection, detection, and response. Microsoft invests more than $1 billion in cybersecurity R&D annually, to build new security innovations into Windows, Azure, and Microsoft 365. Today we are announcing new capabilities to help our customers improve their protection against threats and, when attacked, detect and respond more quickly. We are working with partners across the industry to better integrate solutions for our customers.

Improving protection

A fundamental concern for many IT teams is the struggle to know the true security posture of the organization: are all the necessary controls in place? Have all updates been applied? Is everything configured correctly? More importantly, it's hard to know what the next steps should be to improve security. Today we are announcing the availability of Microsoft Secure Score, which gives the IT administrator a combined view of security readiness across a broad swath of the digital estate, from Office 365 services to endpoint devices.
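The actual Secure Score model is not described in this post, but the idea of a "combined view of security readiness" can be sketched in a few lines: each control carries a point value, and the score is the share of available points actually earned. The control names and point values below are hypothetical:

```python
# Illustrative sketch only: not the real Secure Score algorithm. It shows
# one simple way a combined readiness score could be computed from
# per-control results. Controls and point values are invented examples.

def combined_score(controls: list) -> float:
    """Percentage of available control points that are currently earned."""
    earned = sum(c["points"] for c in controls if c["implemented"])
    available = sum(c["points"] for c in controls)
    return round(100 * earned / available, 1)

controls = [
    {"name": "MFA enabled for admins",   "points": 50, "implemented": True},
    {"name": "Audit logging on",         "points": 15, "implemented": True},
    {"name": "Mailbox forwarding rules", "points": 20, "implemented": False},
    {"name": "Device encryption",        "points": 15, "implemented": True},
]
print(combined_score(controls))  # 80 of 100 points -> 80.0
```

A structure like this also answers the "what next?" question directly: sorting the unimplemented controls by point value yields a prioritized to-do list.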

To get around properly configured protection, attackers often focus on deceiving end users with phishing and social engineering techniques. We have made a number of advances in our Office 365 ATP anti-phishing protection recently, and now we are adding Attack Simulator for Office 365 Threat Intelligence in Microsoft 365, so IT teams can train users to guard against phishing.

Information is the beating heart of any company, and the target of most attacks. It’s also a regulatory focus, especially with the new EU GDPR enforcement date rapidly approaching. In February, we announced a set of Microsoft 365 updates to help our customers manage compliance and protect information. As we near the GDPR enforcement date, today we are announcing several new tools and capabilities that help you respond to GDPR obligations with the Microsoft Cloud. Read more about them later today on the Office 365 blog.

Speeding up detection and response

Of course, no protection strategy can be 100% effective. Savvy customers are improving their detection and response capabilities to prepare for the inevitable breach. The Conditional Access capability built into Microsoft 365 has helped many of our customers dramatically improve their protection for tens of millions of employees, by assessing the risk of each request for access to a system, an application, or data, in real time. That risk level informs how much access is granted, according to policy set by IT.

We are extending Conditional Access to factor in post-breach response. New conditions based on continual assessment of endpoint health, not just a one-time check of configuration, enable our customers to restrict or deny access to resources if the device from which the request originates has been compromised by an attack. This new capability is in preview and will be generally available in the next Windows 10 update.

Rapid detection and recovery remain out of reach for many of our customers because the specialized skills required to hunt down and eliminate adversaries are in high demand but short supply. To help IT focus its strained resources on the most important issues, we are announcing the general availability of automated remediation as part of Windows Defender ATP in the next Windows 10 update. With this new capability, Windows Defender ATP can automatically apply common remediations, freeing up the experts to work on more difficult recovery tasks.
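The decision logic described above can be sketched as a small policy function: sign-in risk and device health go in, an access decision comes out. The thresholds and decision labels here are invented for illustration and are set by IT policy, not by the product:

```python
# Hypothetical sketch of risk-based conditional access: evaluate each
# request using sign-in risk plus continually assessed device health.
# Risk tiers, decisions, and their ordering are illustrative assumptions.

def evaluate_access(sign_in_risk: str, device_compromised: bool) -> str:
    """Return an access decision per a simple IT-defined policy."""
    if device_compromised:
        return "deny"            # post-breach condition: block outright
    if sign_in_risk == "high":
        return "require_mfa"     # step-up authentication before granting access
    if sign_in_risk == "medium":
        return "limited_access"  # e.g. browser-only session, no downloads
    return "allow"

# A compromised device is denied even when the sign-in itself looks low-risk.
print(evaluate_access("low", device_compromised=True))   # -> deny
print(evaluate_access("high", device_compromised=False)) # -> require_mfa
```

The key property is that the device-health condition is re-evaluated continually, so access can be revoked mid-session when a compromise is detected, rather than only at sign-in.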

Our work on detection and response extends to Microsoft Azure as well. As our customers embrace the cloud, Azure Security Center is a key tool that helps them simplify hybrid cloud security and keep pace with ever-evolving threats. Several new capabilities will be available with Security Center this week that help to identify and mitigate vulnerabilities proactively and detect new threats quickly. With the integration of Windows Defender ATP in preview, customers can get all the benefits of advanced threat protection for Windows servers in Azure Security Center.

Working across the industry

Customers who use Microsoft 365 have been taking advantage of increasingly robust tools to protect Office documents and e-mails wherever they go inside and outside the organization. Today we are extending these capabilities to our technology partners with the release of the Azure Information Protection SDK.

The benefits we can all gain from applying cloud intelligence to security problems are tremendous, but can only be fully realized if we work together across the industry. Nearly every customer I speak to has a dozen or more different security solutions in place. Each of those solutions plays a critical role in protecting the organization, and each has valuable contextual information that would help make the others more effective at protecting customers. Today we are announcing the Microsoft Intelligent Security Association, a group of technology providers who have integrated their solutions with Microsoft products to provide customers better protection, detection, and response. Anomali, Check Point, Forcepoint, Palo Alto Networks, and Ziften are among the solution providers working with us. Together, we can bring more signals from more sources to bear, which helps our customers detect and respond to threats faster.

We also continue to work with a broad coalition of technology partners in the FIDO Alliance to address one of the most fundamental issues in security today: identity and access management. Our analysis indicates that cloud-based user account attacks are up more than 300% over the past year. Passwords are the weakest link, and they are a source of frustration for users. Today we are announcing an important step in our work to lead the industry toward a future without passwords: support for the FIDO 2.0 standard in the next Windows 10 update. Millions of Windows 10 users already have the ability to sign in to Windows without a password using Windows Hello, making authentication stronger and easier. With FIDO 2.0 support, users can take that same password-free authentication experience to any Windows 10 device managed by their organization.

The evolution of the intelligent edge

At Microsoft, we believe the intelligent cloud and intelligent edge will shape the next phase of innovation. The rise of Internet of Things deployments amplifies security challenges, because many devices lack the tools to manage updates or detect and respond to attacks. Building on research done by Microsoft AI and Research, and on decades of Microsoft experience and expertise in silicon, software, and cloud security, today we are announcing the preview of Azure Sphere. Azure Sphere extends our reach to the outer regions of the intelligent edge, enabling us to serve and secure an entirely new category of devices: the billions of MCU-powered devices that are built and deployed each year.

It's an exciting time to be working in security. We are joining forces with other security solution providers and using the cloud to our customers' advantage. Together, we can turn the tide against attackers. We are at the RSA Conference this week and look forward to discussing these new capabilities with you. Visit to learn where you can find us.


Categories: RSA Conference Tags:

Investing in the right innovation

April 10th, 2018 No comments

RSA is around the corner, which means tens of thousands of people will descend on Moscone Center in San Francisco, CA. Hundreds of innovative young companies will look for customers, props, and capital (especially at the Early Stage Expo!). Venture capitalists will look for opportunities to invest and find the next $1B IPO. Larger companies may well search for IP to complement larger platforms. CISOs will show up looking for solutions to today's problems, with an eye toward tomorrow's, and ask two key questions: What in this expo hall will help me better protect my company? And what can I take OUT of my portfolio in exchange?

Considering this, I contacted several VC and tech sector colleagues to test an assertion in my most recent blog, which stated that perhaps the kind of innovation we're likely to see at RSA can offer too much of a good thing when it comes to CISOs' priorities. Is the market ready for all this innovation? Are there enough dollars available? Is the innovation meeting CISOs' real needs?

Looking at the exhibitor list, and searching by core topic, it's going to be exciting, yet challenging, to determine which companies are truly innovating and competitive in these crowded marketplaces. A quick look also tells us where most of the attention is, and where it isn't. The Analytics, Intelligence and Response, and Machine Learning categories turn up hundreds of companies, as expected given all the financial support into, and buzz around, these fields. We should expect to see many claims of best-in-class cyber defense products. However, I suspect there is growing skepticism about vendors' claims to have the best ML/AI-driven 0-day finder. I encourage vendors to be prepared to articulate the true capabilities of the ML and AI engines that drive their solutions: By what standards can we evaluate the strength of algorithms and engines? Can they scale, integrate into, and play nice with a customer's existing toolset? No doubt, ML and AI will continue to improve and become more central to security, but early innovations here have probably created what one contact called a swarm effect that has promoted the rise of duplicative technologies. Vendors should also be aware there are probably too many companies chasing too few CISO dollars, and there is bound to be consolidation. On the investor side, I suspect ML/AI fatigue is setting in. A few VCs have said they're pretty much done putting money into this area until it shakes out.

Perhaps CISOs can nudge the security and investor communities into using ML and AI to develop more foundational preventative solutions. These might include more secure-by-design hardware and software architectures, self-aware and self-healing systems, and smart-configuration and smart-patching solutions. One CTO colleague relayed that he's seen excellent presentations and proposals on self-healing computational models and systems, but unfortunately few VC-funded companies are moving beyond research into development and commercialization, partly because so much attention is on APT-hunting shiny objects. Until the community is incentivized to move into these areas, the current assume-breach, detect-and-respond model will dominate how cybersecurity is practiced and commercialized.

As another example, look at blockchain and cryptocurrency, two leading-edge investment areas. Is commensurate work being done to update the underlying cryptographic algorithms and protocols, some of which date back to 1982? Quantum-resilient crypto and homomorphic encryption are areas that probably haven't received the level of financial support they deserve, outside of DARPA or other government programs.

Getting back to CISOs' priorities, the consistent theme was how to make the best use of people and existing tools:

  • Training: Gartner projects this CBT/CET market will reach $7.2B by 2019, and we know we're facing a shortage of up to 2M qualified cyber professionals. Unfortunately, this year's conference doesn't seem to reflect the market opportunity or interest in addressing such core challenges. I queried the Human Element and Professional Development topics in the RSA exhibitor list and turned up only 57 and 19 companies, respectively, with booth presence this year. I hope at least their booths are crowded and that they succeed. We need more innovation in people. Machines will have to do more and more of the work, but in the end, people deploy, monitor, and interact with the technology that is protecting their systems. We must be more innovative in how we train people and encourage others to join the field. The better we train personnel to monitor and improve the performance of their cyber systems, the closer we get to a virtuous loop: trained people continuously optimizing the machines that will be required to handle more of the configuration, deployment, monitoring, detection, and remediation workloads.
  • ROI: We need to invest more in tools that help CISOs use their existing tools better. One VC colleague pointed to a recent investment his firm made in a company whose solution measures the effectiveness of third-party security tool implementations. Who's watching the watchers? IMHO, a very clever example of the type of virtuous cyber loop we could create. Another VC contact uses the analogy of the industry delivering too many cyber drugs to treat the same symptoms; what his firm wants to see is investment in more doctors and nurses to more effectively administer the treatment, get to root cause, and save the patient.

I support many public sector CISO teams in the US and Europe. What do I think they'll be looking for at RSA? With an eye on ML/AI innovation, I think they'll be just as interested in tools that offer improvements to the messy hygiene work of security: automated and self-learning configuration, inventory analysis, and update management tools, along with anything that helps their people improve how they manage their responsibilities. Given uncertain budgeting and the continual need to maintain and adhere to compliance mandates, they'll also look for solutions that help improve and speed up the path to staying as green as possible on a scorecard. Perhaps the excitement around advanced sciences and big data will dominate the RSA agenda, but I expect and encourage CISOs to push innovators for solutions that get to the core of their day-to-day challenges.

If you're an investor, or if you're an innovator looking for what could be next year's breakout opportunity, think about investing in the people who will deliver on your goals.

Categories: Uncategorized Tags:

Announcing: new British Standard for cyber risk and resilience

April 4th, 2018 No comments

Technology is an integral part of the fabric of everyday life. There is almost no organization that does not rely on digital services in some way in order to survive. The opportunity that technology provides also brings with it more vulnerabilities and threats as organizations and data become more connected and available. This trend results in a common gap found in the decision-making process at large organizations. Often information security and cybersecurity have been viewed as a function of IT and therefore, the information security departments have been managed outside of normal business decision-making processes. This is an approach we no longer have the luxury of indulging.

Organizations need a holistic approach to implementing digital transformation projects in order to safeguard their security. This involves focusing on both the opportunity and the threat of any change. To do this effectively, accountability for cyber risk and resilience needs to sit firmly with executive management and the governing body. However, a skills gap exists at this level, with many governing body members having started their careers before the internet era. Even when willing to take responsibility for building a cyber resilient organization, senior executives are often confused by the technical language that risk management and cybersecurity professionals speak. One response is to encourage cybersecurity professionals to speak directly to the board. But we also need to equip board members with the tools to ask the right questions and set acceptable levels of risk in order to build cyber resilient organizations.

That is why, nearly two years ago, the BSI Risk Management Committee started working to develop new guidance aimed at helping executive leadership better understand and manage the technology risks to their organizations. I was asked to lead a group of government executives, regulators, professional bodies and technical experts with a goal of directly addressing the realities and challenges of managing cyber risk in a digital world. This goal led us to draft the new British Standard BS 31111. The standard aims to provide guidance to enterprise organizations regarding cyber risk and resilience, and to address the gap in IT decision making.

The standard includes:

  1. Parameters governing bodies can use to build concrete guidelines
  2. Identification of areas of focus an organization should have in order to build a cyber resilient enterprise
  3. Assessment questions management can ask to challenge the organization regarding how it is building cyber resilience into the business

Cyber risk and resilience need to be driven from the top of the organization to ensure that the right culture is set across all business decision making. Executive management must ensure that there is a clear risk and resilience strategy set across the organization, as well as a strong management structure in place that details the responsibilities and expectations of everyone in maintaining security. As Microsoft's own CEO Satya Nadella has said, "Cybersecurity is like going to the gym. You can't get better by watching others, you've got to get there every day." Satya's comments underline the reasoning behind this standard, emphasizing the need to build cyber resilience into day-to-day operations and not treat it as a standalone project or program.

Engaging with risk management and cyber resilience principles can be complicated and it is easy to get bogged down by technical jargon. To help, we created a visual (figure 1) intended to illustrate the areas required to develop cyber resilience and the key responsibilities of the board.

Source: BS 31111:2018, Figure 1

Key tenets:

  • The responsibility of any Board of Directors is to clearly set the direction of business activity. They ultimately sign off on major decisions and investments and need to ensure that activity is sustainable for the business.
  • Executive management and the governing body are mostly responsible for the roof and foundation, with oversight on the activity of the pillars. Any building is only as good as its foundation and the same is true for building cyber resilience.

The importance of culture for security

Without a strong culture of security, it is easy for decisions to be made that expose an organization. Many of the major breaches witnessed in recent years can be traced back to a lack of ownership and leadership regarding the need for strong cybersecurity measures across the organization, along with ill-informed investment decisions. Executive management and members of the board need to focus clearly on the benefits of any digital investment AND the level of security outcomes required to support that investment. Hopefully, the new British Standard BS 31111 will provide best-practice aims and expectations for the responsibility and accountability of boards and executive leadership to drive change.

The publication of the standard is only the first step. It will be important to promote the need for every organization to safeguard their enterprise and their customers, more than we do today. Many boards and governing bodies are becoming more cyber aware and understanding their need to build cyber risk into their decision making. This publication aims to enable leadership teams and boards to build awareness and decision-making protocols across the organization.

In my short tenure with Microsoft, I have already witnessed a strong internal security culture, focused on building resilient and secure cloud platforms. I look forward to working with my customers to help them develop their own cyber resilient foundations and cultures, ensuring that Microsoft's capabilities support them in that endeavor.

Siân serves as Executive Security Advisor for the UK at Microsoft and has worked in the information security industry for over 20 years. Siân is a highly requested public speaker and has regularly appeared on national radio and television, including the BBC and Sky News, talking about security issues. Siân was appointed an MBE by the Queen in the New Year's Honours List for 2018 for services to cyber security.

Categories: Uncategorized Tags:

Working towards a more diverse future in security

March 28th, 2018 No comments

Last year I embarked on an exercise to examine diversity in cybersecurity. Now that a full year has passed, I decided to revisit this topic and the ongoing challenges of recruiting AND retaining diverse talent in the cybersecurity field. This past year saw the #MeToo movement in the spotlight, and while women's issues were brought to the forefront, there are still opportunities to improve. I want to share new learnings based on my experiences this year as an update to my earlier post, How to solve the diversity problem in security.

Two personal interactions that are top of mind reinforced my belief that there is much work to be done. If you follow me on Twitter (@ajohnsocyber), I commented on both at the time they occurred. In one instance, I was interviewing a candidate for a role in my organization. We were discussing MFA, and he felt the need to stop me, educate me, and inform me of the error of my thinking. I don't claim to be a subject matter expert on all topics related to cybersecurity, no one could be, but I know a fair bit about MFA. His dismissive tone and attitude certainly did not set the right tone for an interview. The second incident occurred whilst I was presenting to a large group of customers. A male colleague interrupted me to say, "What she meant to say was..." Actually, what I meant to say was exactly what I said, but thank you for that moment of classic mansplaining. You see, no matter your rank, role, position, or expertise, there are still those who choose to minimize your knowledge, expertise, or experience. While I cannot definitively say these two incidents occurred because I am a woman, I can tell you the candidate feedback from male interviewees was not the same, and the man in question did not interrupt male speakers at the same event where he interrupted me.

So, as I revisit this blog post for 2018, I also want to highlight some really positive events of the past 12 months. Microsoft believes in diversity 365 days a year, and we demonstrate it with solid actions. I am inspired not only by the women leaders in our organization, but also by our strong male allies who advocate for recruiting and promoting diverse talent. We simply cannot accomplish this work without the support of male allies. I am fortunate, at Microsoft, to actively and frequently work with a large group of well-known security professionals including many talented women. I look forward to meeting and working with many more who are surely part of this company now or who will be compelled to join. We continue to invest in talent that challenges the way we think, talent that changes the organization, talent that truly embraces the "learn it all, not know it all" culture our CEO Satya Nadella has built.

So, whilst as an industry we have a long road ahead of us to fully embrace diversity, we have planted the seeds. In my thirty years in tech, I have never felt this energy or seen this level of commitment and passion toward inclusion. I am proud to be part of the solution and fully committed to helping steer the ship.

Categories: Uncategorized Tags:

Filling the gaps in international law is essential to making cyberspace a safer place

March 27th, 2018 No comments

A month ago, on the sidelines of the Munich Security Conference, Microsoft organized an expert workshop to discuss gaps in international law as it applies to cyberspace. We were fortunate enough to bring together twenty leading stakeholders, including international legal experts, United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UNGGE) delegates, diplomats, and non-governmental organizations (NGOs). Together, we looked at the current situation in cybersecurity norms and international law, and we discussed possible paths forward. What emerged was a significant consensus on both the need to restructure cybersecurity discussions globally and the necessity of implementing the 2015 UNGGE report.

Gaps in international law were the focus for discussion and, although several areas of concern were identified on the basis of recent cyberattacks, the most significant challenge was seen as structural: the lack of an international organization or other venue for addressing the cyber threat landscape of today and tomorrow.

The challenge of the cyber threat landscape is not simply that it is always evolving, nor that it is continually extending its reach into the day-to-day existence of citizens, businesses, and governments. The greatest challenge is that when it comes to dealing with cyber threats the world currently lacks:

  • A place where victims of nation-state or state-sponsored cyberattacks are able to go to get help after an incident has occurred;
  • A standing body or registry that enables ongoing learning about the known threats to people and infrastructure, as well as their corresponding responses;
  • A common basis for judging not just if international law has been violated but how;
  • A consistent basis for the use of international law in prevention of cyberattacks and for enforcement of law following such attacks.

In other words, the world lacks a common space for finding out the facts about cyberattacks, for learning from others, for interpreting laws and for agreeing who did what to whom. That last point, the attribution of responsibility for cyberattacks, fundamentally underpins the concept of applying international law to cyberspace: if we cannot know who is responsible for a cyberattack we cannot hold them to account.

It may be unrealistic to expect a single silver bullet organization for all aspects of the problem. Nonetheless, there were many at the workshop and, indeed, across the Munich Security Conference who agreed in broad terms that not having some kind of international, non-governmental platform focused on cyberspace (enabling best practice, exchanging information, examining the forensics around the attacks) will undermine future efforts to protect civilians in cyberspace.

Certainly, there are other things that also need to be done to protect civilians and civilian infrastructure from cyberattack by states. Rolling out the 2015 UNGGE's proposed norms of state behavior is one such thing, because it will help governments manage the realpolitik of holding each other to account. The recent case of Sergei Skripal shows that even when there is a will to act, the options for constraining a sovereign state are comparatively limited. Even an incremental improvement in state behavior in cyberspace through applying the 2015 UNGGE suggestions would therefore be a positive step. After all, today states are choosing not to invoke international law following cyberattacks, perhaps because there is uncertainty about those laws or perhaps because there is a belief that doing so will neither prevent future attacks nor result in any kind of remediation.

The workshop was a very valuable opportunity for Microsoft, and for me personally. By bringing governments, civil society, technical experts and business people together, it fostered exactly the kind of multi-stakeholder discussion that the future of cyberspace depends upon. The outputs of that discussion, especially the general view that a non-governmental international organization is needed, are something that my colleagues and I will certainly look to build on in the coming months. Furthermore, I am hopeful that such an organization will emerge, with time, and that there will be a genuine interest and impetus amongst the public and private sectors to use it. If they do so, they will help to make international law stronger in cyberspace, even in the face of state-sponsored cyberattacks. If that happens then the world will have taken an important step towards making cyberspace a safer and more stable place.


The role that regions can and should play in critical infrastructure protection

March 5th, 2018 No comments

Today's report, Critical Infrastructure Protection in Latin America and the Caribbean 2018, developed in partnership between Microsoft and the Organization of American States (OAS), demonstrates the value of regional cooperation in global efforts to increase the security of the online environment where it matters most. It acknowledges that, rather than focusing on "all politics is local" or living in a "global village," regions have a key role to play in formulating policies and delivering outcomes for cybersecurity in general, and critical infrastructure protection (CIP) in particular.

Glocalization, a buzz phrase from the turn of the millennium, seems well suited to cybersecurity, given the Internet's simultaneously global reach and local impact. This duality is important to keep in mind when considering the fact that protecting increasingly connected critical infrastructure is a challenge for nations all over the world, and it poses the question of whether the same solutions can be applied across the varied landscapes in which we operate. Regional elements are important in that context, as they provide us with an opportunity to investigate whether the solutions to global cybersecurity challenges need to be tailored to a particular context to be effective, whilst at the same time allowing us to retain a level of scale.

The latter comes about, as even allowing for the global nature of the online environment, we need to recognize that culture, geography, as well as economic relations and trade, are likely to result in a greater level of interconnectivity between neighboring states than far-flung places on opposite sides of the world. In the world of CIP, this means we are more likely to see the same provider operate across two countries in the same region, the same threat actor target linguistically-linked entities, and the consequences of the same cyber-attacks spill across borders.

Close communication and information sharing amongst and between the different regional stakeholders involved in CIP is therefore even more important. This report makes it clear that policymaking in the age of the Internet needs governments working alongside private industry to deliver effective results, leveraging the respective expertise and capabilities of the two groups. But it also reminds us that regional dialogue as well as bilateral discussions between neighboring states, and even between private sector operators in adjacent jurisdictions, helps protect us all.

Increased communication and new regional partnerships are only a few of the recommendations that the report puts forward. It also issues a call for risk management to be placed at the center of any CIP initiative, as well as for a move from cybersecurity towards cyber resilience. Moreover, and particularly relevant to the region of Latin America and the Caribbean, the report encourages a holistic approach to CIP at the national level, with governments urged to put forward cybersecurity frameworks, guidelines, and baselines for operators that are outcomes-focused and can withstand the quick pace of technological evolution. Similarly, the report recognizes the need to ensure a clear division of responsibilities in cybersecurity, and a dedicated effort to foster trust between the different entities and stakeholders that must be involved in protecting critical infrastructure.

The examples of global best practices that the report lays out will be recognizable to anyone with experience in the sector. Yet, the report goes a step further by placing these familiar practices in a regional context through the results of an innovative survey of CIP stakeholders across the region. At the global level, we might take for granted the logic behind why we engage in multi-stakeholder dialogue, or why a clear division of responsibilities is so important in modern technology. The survey shows that even in a region where very few CIP frameworks exist, public-private partnerships, within and across countries, have begun to emerge organically and are valued.

At the same time, the survey helps reinforce how much is still to be done on cybersecurity globally. To highlight just one example, almost half of the over 500 respondents, who are trying to protect the most vital national assets in Latin America and the Caribbean, have not yet endorsed risk management. How can the private sector and governments with advanced risk management capabilities best support capacity building in regions of the world trying to protect the infrastructure underpinning their societies, governments, and economies? I believe that this report is the beginning of a dialogue and roadmap for risk reduction.


Best practices for securely moving workloads to Microsoft Azure

February 26th, 2018 No comments

Azure is Microsoft's cloud computing environment. It offers customers three primary service delivery models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Adopting cloud technologies requires a shared responsibility model for security, with Microsoft responsible for certain controls and the customer for others, depending on the service delivery model chosen. To ensure that a customer's cloud workloads are protected, it is important that they carefully consider and implement the appropriate architecture and enable the right set of configuration settings.

Microsoft has developed a set of Azure security guidelines and best practices for our customers to follow. These guides can be found in the Azure security best practices and patterns documentation. In addition, we're excited to announce the availability of the Center for Internet Security's (CIS) Microsoft Azure Foundations Security Benchmark, developed in partnership with Microsoft. CIS is a non-profit entity focused on developing global standards and recognized best practices for securing IT systems and data against the most pervasive attacks.

The CIS Microsoft Azure Foundations Security Benchmark provides prescriptive guidance for establishing a secure baseline configuration for Microsoft Azure. Its scope is designed to assist organizations in establishing the foundation level of security for anyone adopting the Microsoft Azure cloud. The benchmark should not be considered as an exhaustive list of all possible security configurations and architecture but as a starting point. Each organization must still evaluate their specific situation, workloads, and compliance requirements and tailor their environment accordingly.

The CIS benchmark contains two levels, each with slightly different technical specifications:

  • Level 1: Recommended minimum security settings that should be configured on any system; these should cause little or no interruption of service or reduced functionality.
  • Level 2: Recommended security settings for highly secure environments; these could result in some reduced functionality.

The CIS Microsoft Azure Foundations Security Benchmark is divided into the following sections, each containing a number of recommended controls:

  • Identity and Access Management: Recommendations related to setting the appropriate identity and access management policies.
  • Azure Security Center: Recommendations related to the configuration and use of Azure Security Center.
  • Storage Accounts: Recommendations for setting storage account policies.
  • Azure SQL Services: Recommendations for securing Azure SQL Servers.
  • Azure SQL Databases: Recommendations for securing Azure SQL Databases.
  • Logging and Monitoring: Recommendations for setting logging and monitoring policies on your Azure subscriptions.
  • Networking: Recommendations for securely configuring Azure networking settings and policies.
  • Virtual Machines: Recommendations for setting security policies for Azure compute services, specifically virtual machines.
  • Other Security Considerations: Recommendations regarding general security and operational controls, including those related to Azure Key Vault and Resource Locks.

Each recommendation contains several sections, including a recommendation identification number, title, and description; level or profile applicability; rationale; instructions for auditing the control; remediation steps; impact of implementing the control; default value; and references. As an example, the first control in the benchmark falls under the Identity and Access Management section and is titled "1.1 Ensure that multi-factor authentication is enabled for all privileged users (Scored)." A control is marked as Scored or Not Scored based on whether it can be programmatically tested. In this case, recommendation 1.1 can be audited using the Microsoft Graph API and a PowerShell cmdlet; the specific steps are contained in the "Audit" section of the recommendation. This recommendation is listed as a Level 1 control because it applies only to Azure administrative users and would not have a company-wide impact or reduce functionality for ordinary users. The rationale for recommendation 1.1 is that Azure administrative accounts must be protected because of their powerful privileges; with multiple authentication factors, an attacker would need to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk to the Azure tenant.
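The level and Scored/Not Scored attributes make it straightforward to slice the benchmark programmatically, for example to extract the Level 1 controls that can be audited automatically. The following Python sketch is purely illustrative: the data model is our own, and only recommendation 1.1 is quoted from the benchmark; the second entry is a placeholder.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    ident: str    # benchmark identifier, e.g. "1.1"
    title: str
    section: str
    level: int    # 1 = baseline settings, 2 = highly secure environments
    scored: bool  # True if the control can be programmatically tested

# Illustrative entries only; 1.1 is real, the second is a hypothetical placeholder.
BENCHMARK = [
    Recommendation("1.1",
                   "Ensure that multi-factor authentication is enabled for all privileged users",
                   "Identity and Access Management", level=1, scored=True),
    Recommendation("9.9", "Example Level 2 recommendation",
                   "Other Security Considerations", level=2, scored=False),
]

def auditable_baseline(recs):
    """Return the Level 1 recommendations that can be audited programmatically (Scored)."""
    return [r for r in recs if r.level == 1 and r.scored]

print([r.ident for r in auditable_baseline(BENCHMARK)])  # ['1.1']
```

A filter like this could feed a compliance dashboard that tracks which Scored baseline controls have been verified in a given subscription.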

The benchmark is freely available in PDF format on the CIS website.

You can also find more information on Azure Security Center and on Azure Active Directory. Both are critical solutions to securely deploying and monitoring Azure workloads and are covered in the new CIS benchmark.


How to mitigate rapid cyberattacks such as Petya and WannaCrypt

February 21st, 2018 No comments

In the first blog post of this 3-part series, we introduced what rapid cyberattacks are and illustrated how rapid cyberattacks are different in terms of execution and outcome. In the second blog post, we provided some details on Petya and how it worked. In this final blog post, we will share:

  • Microsoft's roadmap of recommendations to mitigate rapid cyberattacks.
  • Outside-in perspectives on rapid cyberattacks and mitigation methods based on a survey of global organizations.

Because of how critical security hygiene issues have become and how challenging it is for organizations to follow the guidance and the multiple recommended practices, Microsoft is taking a fresh approach to solving them. Microsoft is working actively with NIST, the Center for Internet Security (CIS), DHS NCCIC (formerly US-CERT), industry partners, and the cybersecurity community to jointly develop and publish practical guides on critical hygiene and to implement reference solutions starting with these recommendations on rapid cyberattacks as related to patch management.

Roadmap of prescriptive recommendations for mitigating rapid cyberattacks

We group our mitigation recommendations into four categories based on the effect they have on mitigating risk:

Mitigate software vulnerabilities that allow worms and attackers to enter and/or traverse an environment

Rapidly resume business operations after a destructive attack

Mitigate ability to traverse (spread) using impersonation and credential theft attacks

Reduce critical risk factors across all attack stages (prepare, enter, traverse, execute)

Figure 1: Key components of mitigation strategy for rapid cyberattacks

We recognize every organization has unique challenges and investments in cybersecurity (people and technology) and cannot possibly make every single recommendation a top or immediate priority. Accordingly, we have broken down the primary (default) recommendations for mitigating rapid cyberattacks into three buckets:

  1. Quick wins: what we recommend organizations accomplish in the first 30 days
  2. Less than 90 days: what we recommend organizations accomplish in the medium term
  3. Next quarter and beyond: what we recommend organizations accomplish in the longer term

The following list contains our primary recommendations on how to mitigate these attacks.

Figure 2: Microsoft's primary recommendations for mitigating rapid cyberattacks

This list has been carefully prioritized based on Microsoft's direct experience investigating (and helping organizations recover from) these attacks, as well as collaboration with numerous industry experts. This is a default set of recommendations and should be tailored to each enterprise based on defenses already in place. You can read more about the details of each recommendation in the slide text and notes of the published slide deck.

In prioritizing the quick wins for the first 30 days, the primary considerations we used are:

  1. Whether the measure directly mitigates a key attack component.
  2. Whether most enterprises could rapidly implement the mitigation (configure, enable, deploy) without significant impact on existing user experiences and business processes.

Figure 3: Mapping each recommendation into the mitigation strategy components

In addition to the primary recommendations, Microsoft has an additional set of recommendations that could provide significant benefits depending on circumstances of the organization:

  1. Ensure outsourcing contracts and SLAs are compatible with rapid security response
  2. Move critical workloads to SaaS and PaaS as you are able
  3. Validate existing network controls (internet ingress, internal Lab/ICS/SCADA isolation)
  4. Enable UEFI Secure Boot
  5. Complete Securing Privileged Access (SPA) roadmap Phase 2
  6. Protect backup and deployment systems from rapid destruction
  7. Restrict inbound peer traffic on all workstations
  8. Use application whitelisting
  9. Remove local administrator privileges from end-users
  10. Implement modern threat detection and automated response solutions
  11. Disable unneeded protocols
  12. Replace insecure protocols with secure equivalents (Telnet to SSH, HTTP to HTTPS, etc.)
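A simple way to find where insecure protocols (recommendation 12 above) are still in use is to probe for their listening ports. The Python sketch below is a minimal illustration under assumed host and port choices, not a substitute for a proper network scanner or asset inventory:

```python
import socket

# Classic insecure services and their secure replacements.
INSECURE = {
    23: "Telnet (replace with SSH, port 22)",
    80: "HTTP (replace with HTTPS, port 443)",
    21: "FTP (replace with SFTP/FTPS)",
}

def open_tcp_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising on failure.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    for port in open_tcp_ports("127.0.0.1", INSECURE):
        print(f"port {port} open: {INSECURE[port]}")
```

Run against a list of internal hosts, a check like this can flag machines that still answer on Telnet or plain HTTP so they can be migrated to the secure equivalents.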

There are specific reasons why these 12 recommendations, although helpful for certain organizations/circumstances, were excluded from the list of primary recommendations. You can read about those reasons in the slide notes of the published slide deck if interested.

Outside-in perspectives on rapid cyberattacks and mitigation methods

In late November 2017, Microsoft hosted a webinar on this topic and solicited feedback from the attendees, who comprised 845 IT professionals from small organizations to large global enterprises. Here are a few interesting insights from the poll questions.

Rapid cyberattack experience

When asked if they had experienced a rapid cyberattack (e.g. WannaCrypt, Petya, or other), approximately 38% stated they had.

Awareness of SPA roadmap

When asked if they're aware of Microsoft's Securing Privileged Access (SPA) roadmap, most (66%) stated that they were not.

Patching systems

When we asked how quickly (within 7, 30, or 90 days) they can patch various systems, most respondents believed their teams are good at patching quickly:

  • 83% can patch workstations within 30 days; 44% within 7 days
  • 81% can patch servers within 30 days; 51% within 7 days
  • 54% can patch Linux/Other devices within 30 days; 25% within 7 days

Removal of SMBv1

When asked where they are on the path toward removing SMBv1, 26% said they have completed removing it, another 21% said they are in the process of doing so, and roughly 18% more are planning to do so.

Adopting roadmap recommendations

When asked what is blocking them from adopting Microsofts roadmap recommendations for securing against rapid cyberattacks, the top three reasons respondents shared are:

  1. Lack of time
  2. Lack of resources
  3. Lack of support from upper management/executive buy-in

To help organizations overcome these challenges, Microsoft can be engaged to:

  • Assist with implementing the mitigations described in SPA Roadmap and Rapid Cyberattack Guidance.
  • Investigate an active incident with enterprise-wide malware hunting, analysis, and reverse engineering techniques. This includes providing tailored cyberthreat intelligence and strategic guidance to harden the environment against advanced and persistent attacks. Microsoft can provide onsite teams and remote support to help you investigate suspicious events, detect malicious attacks, and respond to security breaches.
  • Proactively hunt for persistent adversaries in your environment using similar methods as an active incident response (above).

Contact your Microsoft Technical Account Manager (TAM) or Account Executive to learn more about how to engage Microsoft for incident response.

More information

We hope you found the 3-part blog series on the topic of rapid cyberattacks and some recommendations on how to mitigate them useful.

For more information and resources on rapid cyberattacks, please visit the additional links here:

On-demand webinar Protect Against Rapid Cyberattacks (Petya, WannaCrypt, and similar).

Additional resources

Tips to mitigate known rapid cyberattacks with Windows 10 (and Windows Defender Advanced Threat Protection):

Mitigate backup destruction by ransomware with Azure Backup security features

Detect leaked credentials in Azure Active Directory

Rapidly detect polymorphic and emerging threats and enable advanced protection with Windows Defender Antivirus cloud protection service (formerly Microsoft Active Protection Service (MAPS))

Apply network protection with Windows Defender Exploit Guard

Safeguard integrity of privileged accounts that administer and manage IT systems by considering Securing Privileged Access (SPA) roadmap

Mitigate risk of lateral escalation and Pass-the-Hash (PtH) credential replay attack with Local Admin Password Solution (LAPS)

Mitigate exploitation of SMBv1 vulnerability via Petya or other rapid cyberattack by following guidance on disabling SMBv1



How a national cybersecurity agency can help avoid a national cybersecurity quagmire

February 19th, 2018 No comments

This last October we saw more countries than ever participate in initiatives to raise cybersecurity awareness. What was once largely a US approach has evolved into events and initiatives around the world by governments, civil society groups, and private sector partners. This increased breadth and depth of activity reflects governments' increased understanding of the importance of cybersecurity, not only for their operations but for the lives of their citizens. My team's research indicates that today over half of the world's countries are leading some sort of national level initiative for cybersecurity, with countless other efforts at sectoral, state, city, or other levels.

However, developing effective approaches to tackling cybersecurity at a national level isn't easy, especially if they are going to have widespread or long-lasting effects. The complexity of developing approaches for an issue that truly touches all aspects of the modern economy and society cannot be overstated, and if approached in the wrong way the effort can create a quagmire of laws, bodies, and processes. The different aspects of cybersecurity, such as promoting online safety, workforce skills development, and critical infrastructure protection, all cut across an unprecedented range of traditional government departments, from defense and foreign affairs to education and finance. Effectively, cybersecurity is one of the first policy areas that challenges traditional national governance structures and policy making. It is unlikely to be the last, with issues such as artificial intelligence hard on its heels.

To deal with this challenge, governments are exploring new governance models. Some countries have created a dedicated department within a particular ministry, such as India. Others have looked at extending the work traditionally done by the police or a national computer security incident response team, such as Malaysia. Moreover, countries as diverse as Australia, France, Brazil, Indonesia, Tanzania, Belarus, Israel, and Singapore, already have specific bodies of government responsible for cybersecurity.

However, despite the fact that many countries have already taken steps to establish or strengthen their own cybersecurity bodies, no single optimum model can be pointed to. The reasons are many, from different governance setups, to varying levels of investment and expertise available, to the fact that dealing with cybersecurity is a relatively new endeavor for governments.

Taking this variety into account, and coupling it with our own perspective and experience, Microsoft has collected good practices that we believe can support national engagement on cybersecurity. Today we are releasing a new whitepaper: Building an Effective National Cybersecurity Agency. Its core insights center on the following set of recommendations, intended to help governments avoid becoming bogged down in cybersecurity challenges that are otherwise avoidable:

  1. Appoint a single national cybersecurity agency. Having a single authority creates a focal point for key functions across the government, which ensures policies are prioritized and harmonized across the nation.
  2. Provide the national cybersecurity agency with a clear mandate. Cybersecurity spans different stakeholders with overlapping priorities. Having a clear mandate for the agency will help set expectations for the roles and responsibilities and facilitate the intra-governmental processes.
  3. Ensure the national cybersecurity agency has appropriate statutory powers. Currently, most national cybersecurity agencies are established not by statute but by delegating existing powers from other parts of government. As cybersecurity becomes an issue for national legislature, agencies might have to be given clear ownership of implementation.
  4. Implement a five-part organizational structure. The five-part structure we propose in the paper allows for a multifaceted interaction across internal government and regulatory stakeholders, as well as external and international stakeholders, and aims to tackle both regulatory and other cybersecurity aspects.
  5. Expect to evolve and adapt. Regardless of how the structure of the national cybersecurity agency begins, the unavoidability of change in the technology and threat landscape will require it to evolve and adapt over time to be able to continue to fulfill its mandate.

As the challenges and opportunities that come as a result of ICT proliferation continue to evolve, governments will need to ensure they are sufficiently equipped to face them, both today and in the future. Bringing together diverse stakeholders across different agencies, such as defense, commerce, and foreign affairs, and backgrounds, including those from law, engineering, economics, and policy, will enable our society to both deal with the threats and harness the opportunities of cyberspace. It is this diversity of stakeholders that contributes to the challenge cybersecurity poses for traditional governance.

But cybersecurity is the first of many emerging areas that necessitate new and creative solutions allowing policymakers to work hand in hand with their counterparts across government, civil society, and industry. For cybersecurity, as well as the issues to come, cooperation is the underpinning of achieving these goals. However, cooperation does not arise organically; it must grow from an effectively structured governance system. Establishing a national cybersecurity agency will enable governments to do just that.


Developing an effective cyber strategy

February 7th, 2018 No comments

The word strategy has its origins in ancient Greece, where strategos described the leading of troops in battle. From a military perspective, strategy is a top-level plan designed to achieve one or more high-order goals. A clear strategy is especially important in times of uncertainty, as it provides a framework for those involved in executing the strategy to make the decisions needed for success.

In a corporate or government entity, the primary role of the Chief Information Security Officer (CISO) is to establish a clear cybersecurity strategy and oversee its execution. To establish an effective strategy, one must first understand, and ideally document, the following:

  • Resources. The most critical component of a successful strategy is the proper utilization of available resources. As such, a CISO must have a clear picture of their annual budget, including operating and capital expenditures. In addition, the CISO must understand not just the number of vendors and full-time employees under their span of control, but also the capabilities and weaknesses of these resources. The CISO must also have an appreciation for the capabilities of key resources that are essential to effective security but not necessarily under their direct supervision, such as server and desktop administrators, the team responsible for patching, etc. One of the most difficult aspects of the CISO job is that to be successful you must positively influence the actions of other teams whose jobs are critical to the success of the security program, and your career, but who are not under your direct control.
  • Business Drivers. At the end of the day, CISOs have a finite amount of resources to achieve goals and cannot apply the same level of protection to all digital assets. To help make resource allocation decisions, the CISO must clearly understand the business they have been charged with protecting. What is most important to the success of the business? Which lines of business produce the most revenue, and which digital assets are associated with those lines? For governments, which services are essential for residents’ health and for maintaining government operations, and which digital assets are associated with those services and functions?
  • Data. Data is the lifeblood of most companies and is often the target of cyber criminals, whether to steal or encrypt for ransom. Once business drivers have been identified, the CISO should inventory the data that is important to the lines of business. This should include documenting the format, volume, and locations of the data and the associated data steward. In large organizations, this can be extremely challenging, but it is essential to have a clear picture of the storage and processing of the entity's crown jewels.
  • Controls. Before formulating a strategy, the CISO must gain an understanding of the status of the safeguards or countermeasures that have been deployed within an environment to minimize the security risks posed to digital assets. These will include controls to minimize risks to the confidentiality, integrity, or availability of the assets. In determining the sufficiency of a control, assess its design and operating effectiveness. Does the control cover all assets or a subset? Is the control effective at reducing the risk to an acceptable level or is the residual risk still high? For example, one control found to be effective in minimizing risk to the confidentiality of data is to require a second factor of authentication prior to granting access to sensitive records. If such a control is implemented, what percentage of users require a second authentication factor before accessing the company's most sensitive data? What is the likelihood that a user will acknowledge a second factor in error as the result of a phishing test?
  • Threats. Identifying the threats to an organization is one of the more difficult tasks in developing a cyber strategy, as cyber threats tend to be asymmetric and constantly evolving. Still, it is important to identify the most likely threat actors and the motivations, tactics, techniques, and procedures used to achieve their goals.
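The coverage questions raised under Controls lend themselves to simple metrics. As an illustrative sketch (the user records and field names below are hypothetical), a control-coverage calculation in Python might look like this:

```python
def mfa_coverage(users):
    """Fraction of users with access to sensitive data who have a second factor enrolled."""
    sensitive = [u for u in users if u["sensitive_access"]]
    if not sensitive:
        return 1.0  # no one touches sensitive data, so the control is vacuously covered
    return sum(u["has_mfa"] for u in sensitive) / len(sensitive)

# Hypothetical identity inventory export.
users = [
    {"name": "alice", "has_mfa": True,  "sensitive_access": True},
    {"name": "bob",   "has_mfa": False, "sensitive_access": True},
    {"name": "carol", "has_mfa": True,  "sensitive_access": False},
]
print(f"MFA coverage of sensitive data: {mfa_coverage(users):.0%}")  # 50%
```

Tracking a metric like this over time gives the CISO a concrete answer to "does the control cover all assets or a subset," and a residual-risk figure to weigh against the threats identified next.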

Once a CISO has a clear picture of the items discussed above, they can begin formulating a strategy appropriate to the task at hand. There is no one-size-fits-all approach, as each organization is unique, but there are models and frameworks that have proven helpful over time, including those developed by the National Institute of Standards and Technology, the Center for Internet Security, SANS, and the Australian Signals Directorate, as well as the Cyber Kill Chain, among others. An effective strategy must also consider human and organizational dynamics. For example, employees will typically work around a control that increases the actual, or perceived, amount of effort to perform a given task, especially when they feel that the effort is not commensurate with the threat being addressed.

At Microsoft, we are continuously evaluating the current threats faced by our customers and building products and services to help CISOs execute their strategies. The design of our products not only accounts for the techniques utilized by cyber attackers, but also incorporates features that address the human dynamics within an enterprise and the staff and retention challenges faced by security teams. A few examples of these design principles in practice include building security features and functions within our productivity tools such as Office 365 Advanced Threat Protection, using auto-classification to reduce the workload on end users with Azure Information Protection, and increasing the efficiency and effectiveness of security teams with Windows Defender Advanced Threat Protection.

Categories: Uncategorized Tags:

Overview of Petya, a rapid cyberattack

February 5th, 2018 No comments

In the first blog post of this 3-part series, we introduced what rapid cyberattacks are and illustrated how they are different in terms of execution and outcome. Next, we will go into some more details on the Petya (aka NotPetya) attack.

How Petya worked

The Petya attack chain is well understood, although a few small mysteries remain. Here are the four steps in the Petya kill chain:

Figure 1: How the Petya attack worked

  1. Prepare – The Petya attack began with a compromise of the MEDoc application. As organizations updated the application, the Petya code was initiated.
  2. Enter – When MEDoc customers installed the software update, the Petya code ran on an enterprise host and began to propagate in the enterprise.
  3. Traverse – The malware used two means to traverse:

    • Exploitation – Exploited a vulnerability in SMBv1 (MS17-010).
    • Credential theft – Impersonated any currently logged-on accounts (including service accounts).
    • Note that Petya only compromised accounts that were logged on with an active session (e.g. credentials loaded into LSASS memory).

  4. Execute – Petya then rebooted the machine and started the encryption process. While the screen text claimed to be ransomware, this attack was clearly intended to wipe data, as there was no technical provision in the malware to generate individual keys and register them with a central service (the standard ransomware procedure that enables recovery).

Unknowns and Unique Characteristics of Petya:

Although it is unclear if Petya was intended to have as widespread an impact as it ended up having, it is likely that this attack was built by an advanced group, considering the following:

  • The Petya attack wiped the event logs on the system, which was unnecessary given that the drive was wiped later anyway. This leaves an open question of whether this was just standard anti-forensic practice (as is common for many advanced attack groups) or whether other attack actions/operations were being covered up by Petya.
  • The supply chain approach taken by Petya requires a well-funded adversary with a high level of investment into attack skills/capability. Although supply chain attacks are rising, these still represent a small percentage of how attackers get into corporate environments and require a higher degree of sophistication to execute.

Petya and Traversal/Propagation

Our observation was that Petya spread more by using identity impersonation techniques than through MS17-010 vulnerability exploitation. This is likely because of the emergency patching initiatives organizations followed to deploy MS17-010 in response to the WannaCrypt attacks and associated publicity.

The Petya attacks also resurfaced a popular misconception about mitigating lateral traversal which comes up frequently in targeted data theft attacks. If a threat actor has acquired the credentials needed for lateral traversal, you can NOT block the attack by disabling execution methods like PowerShell or WMI. This is not a good choke point because legitimate remote management requires at least one process execution method to be enabled.

Figure 2: How the Petya attack spreads

You'll see in the illustration above that achieving traversal requires three technical phases:

1st phase: Targeting – Identify which machines to attack/spread to next.

Petya's targeting mechanism was consistent with normal worm behavior. However, Petya did include a unique innovation: it acquired target IPs from the DHCP subnet configuration on servers and domain controllers (DCs) to accelerate its spread.

2nd phase: Privilege acquisition – Gain the privileges required to compromise those remote machines.

A unique aspect of Petya is that it used automated credential theft and re-use to spread, in addition to the vulnerability exploitation. As mentioned earlier, most of the propagation in the attacks we investigated was due to the impersonation technique. This resulted in impersonation of the SYSTEM context (computer account) as well as any other accounts that were logged in to those systems (including service accounts, administrators, and standard users).

3rd phase: Process execution – Obtain the means to launch the malware on the compromised machine.

This phase is not an area we recommend focusing defenses on because:

  1. An attacker (or worm) with legitimate credentials (or impersonated session) can easily use another available process execution method.
  2. Remote management by IT operations requires at least one process execution method to be available.

Because of this, we strongly advise organizations to focus mitigation efforts on the privilege acquisition phase (2) for both rapid destruction and targeted data theft attacks, and not prioritize blocking at the process execution phase (3).

Figure 3: Most Petya propagations were due to impersonation (credential theft)

Because of the dual channel approach to propagation, even an organization that had reached 97% of their endpoints with MS17-010 patching was infected enterprise-wide by Petya. This shows that mitigating just one vector is not enough.
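The arithmetic behind that outcome is easy to reproduce. Below is a toy simulation of dual-channel propagation; the network model, session placement, and numbers are illustrative assumptions, not data from the Petya incident response:

```python
import random

def simulate(n_hosts, patched_fraction, shared_admin_sessions, seed=1):
    """Toy worm: every infected host tries every other host via two channels."""
    rng = random.Random(seed)
    patched = {h: rng.random() < patched_fraction for h in range(n_hosts)}
    infected = {0}                    # patient zero via the software update
    frontier = [0]
    while frontier:
        frontier.pop()                # an infected host scans the subnet
        for target in range(n_hosts):
            if target in infected:
                continue
            exploit_works = not patched[target]    # MS17-010 channel
            creds_work = shared_admin_sessions     # impersonation channel
            if exploit_works or creds_work:
                infected.add(target)
                frontier.append(target)
    return len(infected)

# 97% patched, but a widely used admin account is logged on everywhere:
print(simulate(1000, 0.97, shared_admin_sessions=True))   # entire fleet
print(simulate(1000, 0.97, shared_admin_sessions=False))  # only the unpatched few
```

Patching remains essential, but as the second run illustrates, it only closes the exploitation channel; credential hygiene is needed to close the other.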

The good news here is that any investment made into credential theft defenses (as well as patching and other defenses) will directly benefit your ability to stave off targeted data theft attacks because Petya simply re-used attack methods popularized in those attacks.

Attack and Recovery Experience: Learnings from Petya

Many impacted organizations were not prepared for this type of disaster in their disaster recovery plans. The key learnings from real-world cases of these attacks are:

Figure 4: Common learnings from rapid cyberattack recovery

Offline Recovery Required – Many organizations affected by Petya found that their backup applications and Operating System (OS) deployment systems were taken out in the attack, significantly delaying their ability to recover business operations. In some cases, IT staff had to resort to printed documentation because the servers housing their recovery process documentation were also down.

Communications down – Many organizations also found themselves without standard corporate communications like email. In almost all cases, company communications with employees were reliant on alternate mechanisms such as WhatsApp, copy/pasted broadcast text messages, mobile phones, personal email addresses, and Twitter.

In several cases, organizations had a fully functioning Office 365 instance (SaaS services were unaffected by this attack), but users couldn't access Office 365 services because authentication was federated to the on-premises Active Directory (AD), which was down.

More information

To learn more about rapid cyber attacks and how to protect against them, watch the on-demand webinar: Protect Against Rapid Cyberattacks (Petya, WannaCrypt, and similar).

Look out for the next and final blog post of a 3-part series to learn about Microsoft’s recommendations on mitigating rapid cyberattacks.

Categories: Uncategorized Tags:

IGF proves the value of bottom-up, multi-stakeholder model in cyberspace policy-making

January 29th, 2018 No comments

In December, the Internet Governance Forum (IGF) brought the world together to talk about the internet. I tend to take a particular interest in cybersecurity, but many other important topics were discussed, ranging from diversity in the technology sector to philosophy in the digital age. Cybersecurity was, nonetheless, a major theme. My colleagues and I found an agenda packed with varied sessions that sought to tackle anything from effective cooperation between CERTs, the difficulties in developing an international framework for cybersecurity norms, and other issues the Digital Geneva Convention touches on, to the very real cross-border legal challenges in cloud forensics.

The real strength of the IGF is not just its breadth of topics, but also the way in which it deliberately fosters multi-stakeholder discussions. Delegates have equal voices, whether they are civil society groups, governments, or businesses. And while there were differences in opinion and perspectives, all are heard and as such contribute to richer conversations, and ultimately more valuable outcomes.

Certainly, the expectation is not that there would be immediate policy outcomes from the IGF. Ideas need time to grow and evolve. The exchange of ideas can and does contribute to decision-making for Microsoft, and hopefully for the other participants attending. I found it particularly valuable to hear the voices and opinions of civil society, whether that meant hearing the perspective of humanitarian actors or understanding the challenges of cybersecurity policy-making in emerging markets.

Microsoft believes that this wider discussion among stakeholders leads to a deeper understanding of the complex challenges posed by cyberspace. That's why we took the opportunity of this year's IGF to organize a series of discussions, both smaller individual conversations and larger sessions, around the different aspects of our proposal for a Digital Geneva Convention. The discussions investigated what the industry tech accord could involve and what civil society would like us to do as an industry, but they also looked at the feasibility of creating a convention that would protect civilians and civilian infrastructure in cyberspace from harm by states, and at what the path on that decade-long road would be. We will be taking these insights and ideas back with us and incorporating them into our plans for 2018.

The Digital Geneva Convention was, however, far from the only cybersecurity-focused topic we engaged in. There were sessions that looked at increasing CERT capacities, encryption, and the exchange of cybersecurity best practices within the IGF, as well as those that sought to outline the future of global cybersecurity capacity building, which we believe is essential to the world's collective ability to respond to cyber-attacks and is needed both for individual countries and at the level of regional groupings such as ASEAN and the OAS. We also previewed research that we are planning to publish shortly, which looks at the latest global cybersecurity policy and legislative trends, analyzing data from over 100 countries and highlighting increased activity around critical infrastructure protection, the militarization of cyberspace, the expansion of law enforcement powers, cybercrime legislation, and cybersecurity skills. Overall, my colleagues across Microsoft contributed to over 20 different sessions and panels, including on affordable access to the internet, where we were able to outline elements of our Airband Initiative; digital civility, where we presented the results of our latest study (to be released publicly shortly); the future of work and artificial intelligence; and others.

Multi-stakeholder fora like the IGF are essential to preserving an open, global, safe, secure, resilient, and interconnected Internet. What the world needs is more such broad-based, holistic policy discussions. When it comes to building policy in cyberspace, policy-makers must acknowledge the interdependence of economic, socio-cultural, technological, and governance factors. That means they should actively foster more multi-stakeholder policy development fora, learning from the IGF. For the technology sector and civil society groups, our task must be to continue to push for inclusive, open, transparent, bottom-up policy-making, and to make the most of the opportunities that do exist.

Categories: Uncategorized Tags:

Understanding the performance impact of Spectre and Meltdown mitigations on Windows Systems

January 9th, 2018 No comments

Last week the technology industry and many of our customers learned of new vulnerabilities in the hardware chips that power phones, PCs, and servers. We (and others in the industry) had learned of these vulnerabilities under nondisclosure agreement several months ago and immediately began developing engineering mitigations and updating our cloud infrastructure. In this blog, I'll describe the discovered vulnerabilities as clearly as I can, discuss what customers can do to help keep themselves safe, and share what we've learned so far about performance impacts.

What Are the New Vulnerabilities?

On Wednesday, Jan. 3, security researchers publicly detailed three potential vulnerabilities, collectively named Meltdown and Spectre. Several blogs have tried to explain these vulnerabilities further; a clear description can be found via Stratechery.

On a phone or a PC, this means malicious software could exploit the silicon vulnerability to access information in one software program from another. These attacks extend into browsers where malicious JavaScript deployed through a webpage or advertisement could access information (such as a legal document or financial information) across the system in another running software program or browser tab. In an environment where multiple servers are sharing capabilities (such as exists in some cloud services configurations), these vulnerabilities could mean it is possible for someone to access information in one virtual machine from another.

What Steps Should I Take to Help Protect My System?

Currently three exploits have been demonstrated as technically possible. In partnership with our silicon partners, we have mitigated those through changes to Windows and silicon microcode.

The three exploited variants, their CVEs, and the required mitigations are summarized below:

  • Spectre, CVE-2017-5753 (Variant 1, Bounds Check Bypass) – Windows changes: compiler change, with recompiled binaries now part of Windows Updates; Edge and IE11 hardened to prevent exploitation from JavaScript. Silicon microcode update also required on host: No.
  • Spectre, CVE-2017-5715 (Variant 2, Branch Target Injection) – Windows changes: calling new CPU instructions to eliminate branch speculation in risky situations. Silicon microcode update also required on host: Yes.
  • Meltdown, CVE-2017-5754 (Variant 3, Rogue Data Cache Load) – Windows changes: isolation of kernel and user mode page tables. Silicon microcode update also required on host: No.


Because Windows clients interact with untrusted code in many ways, including browsing webpages with advertisements and downloading apps, our recommendation is to protect all systems with Windows Updates and silicon microcode updates.

For Windows Server, administrators should ensure they have mitigations in place at the physical server level so they can isolate virtualized workloads running on the server. For on-premises servers, this can be done by applying the appropriate microcode update to the physical server and, if you are using Hyper-V, updating it using our recent Windows Update release. If you are running on Azure, you do not need to take any steps to achieve virtualized isolation, as we have already applied infrastructure updates to all servers in Azure to ensure your workloads are isolated from those of other customers running in our cloud. This means that other customers running on Azure cannot attack your VMs or applications using these vulnerabilities.

Windows Server customers, running either on-premises or in the cloud, also need to evaluate whether to apply additional security mitigations within each of their Windows Server VM guest or physical instances. These mitigations are needed when you are running untrusted code within your Windows Server instances (for example, you allow one of your customers to upload a binary or code snippet that you then run within your Windows Server instance) and you want to isolate the application binary or code to ensure it can't access memory within the Windows Server instance that it should not have access to. You do not need to apply these mitigations to isolate your Windows Server VMs from other VMs on a virtualized server, as they are instead only needed to isolate untrusted code running within a specific Windows Server instance.
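As a rough sketch, the guidance above can be encoded as a simple decision helper; the function name, inputs, and advice strings are illustrative assumptions, not an official Microsoft tool:

```python
def server_mitigation_advice(runs_untrusted_code, on_azure):
    """Summarize the Windows Server guidance for a single instance."""
    advice = []
    if on_azure:
        advice.append("host-level isolation already applied by Azure")
    else:
        advice.append("apply microcode update and Windows Update on the physical host")
    if runs_untrusted_code:
        advice.append("enable in-guest OS mitigations (accept the performance cost)")
    else:
        advice.append("in-guest mitigations optional; VM-to-VM isolation is handled at the host")
    return advice

for untrusted, azure in [(True, False), (False, True)]:
    print(f"untrusted={untrusted}, azure={azure} -> {server_mitigation_advice(untrusted, azure)}")
```

The key decision input is whether untrusted code runs inside the instance; host-level isolation alone does not protect memory within the guest.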

We currently support 45 editions of Windows. Patches for 41 of them are available now through Windows Update. We expect the remaining editions to be patched soon. We are maintaining a table of editions and update schedule in our Windows customer guidance article.

Silicon microcode is distributed by the silicon vendor to the system OEM, which then decides to release it to customers. Some system OEMs use Windows Update to distribute such microcode, others use their own update systems. We are maintaining a table of system microcode update information here. Surface will be updated through Windows Update starting today.


Guidance on how to check and enable or disable these mitigations can be found here:


One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. Were performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.

Here is the summary of what we have found so far:

  • With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake, or newer CPUs), benchmarks show single-digit slowdowns, but we don't expect most users to notice a change because these percentages are reflected in milliseconds.
  • With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
  • With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
  • Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.

For context, on newer CPUs such as Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 have more user-kernel transitions due to legacy design decisions, such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.
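The effect of extra user-kernel transitions can be illustrated, very loosely, by timing a tight loop of system calls against a pure user-mode loop. This is not a Windows mitigation benchmark, just a sketch of why transition-heavy workloads are affected most:

```python
import os
import time

def time_it(fn, iterations=100_000):
    """Wall-clock time for repeated calls to fn."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

syscall_time = time_it(os.getcwd)    # each call crosses into the kernel
user_time = time_it(lambda: 1 + 1)   # stays entirely in user mode

print(f"syscall loop: {syscall_time:.4f}s, user-mode loop: {user_time:.4f}s")
```

Mitigations that add a fixed cost per kernel entry (such as page-table isolation) scale with the number of such transitions, which is why IO-heavy servers and legacy Windows versions see the largest impact.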


As you can tell, there is a lot to this topic of side-channel attack methods. A new exploit like this requires our entire industry to work together to find the best possible solutions for our customers. The security of the systems our customers depend upon and enjoy is a top priority for us. Were also committed to being as transparent and factual as possible to help our customers make the best possible decisions for their devices and the systems that run organizations around the world. Thats why weve chosen to provide more context and information today and why we released updates and remediations as quickly as we could on Jan. 3. Our commitment to delivering the technology you depend upon, and in optimizing performance where we can, continues around the clock and we will continue to communicate as we learn more.


Categories: Uncategorized Tags:

Application fuzzing in the era of Machine Learning and AI

January 3rd, 2018 No comments

Proactively testing software for bugs is not new; the earliest examples date back to the 1950s. Fuzzing as we now refer to it, the injection of random inputs and commands into applications, made its debut quite literally on a dark and stormy night in 1988. Since then, application fuzzing has become a staple of the secure software development lifecycle (SDLC), and according to Gartner*, security testing is growing faster than any other security market, as application security testing (AST) solutions adapt to new development methodologies and increased application complexity.
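A minimal sketch of fuzzing in that original sense, hammering a parser with random bytes and recording the inputs that crash it, might look like this (the buggy parser is a deliberately planted toy, not a real application):

```python
import random

def toy_parser(data: bytes):
    # Planted bug: short inputs that start with the 0xFF "magic" byte crash.
    if data[:1] == b"\xff" and len(data) < 4:
        raise IndexError("malformed header")
    return len(data)

def fuzz(target, runs=10_000, seed=42):
    """Feed random byte strings to target and collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

found = fuzz(toy_parser)
print(f"{len(found)} crashing inputs out of 10,000 runs")
```

Even this blind approach finds the planted bug; the rest of this post is about how machine learning makes the input generation far less blind.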

We believe there is good reason for this. The overall security risk profile of applications has grown in lockstep with accelerated software development and application complexity. Hackers are also aware of the increased vulnerabilities and, as the recent Equifax breach highlights, the application layer is highly targeted. Despite this, the security and development groups within organizations cannot find easy alignment to implement application fuzzing.

While DevOps is transforming the speed at which applications are created, tested, and integrated with IT, that same efficiency hampers the ability to mitigate identified security risks and vulnerabilities, without impacting business priorities. This is exactly the promise that machine learning, artificial intelligence (AI), and the use of deep neural networks (DNN) are expected to deliver on in evolved software vulnerability testing.

Most customers I talk to see AI as a natural next step given that most software testing for bugs and vulnerabilities is either manual or prone to false positives. With practically every security product claiming to be machine learning and AI-enabled, it can be hard to understand which offerings can deliver real value over current approaches.

Adoption of the latest techniques for application security testing doesn't mean CISOs must become experts in machine learning. Companies like Microsoft are using the on-demand storage and computing power of the cloud, combined with experience in software development and data science, to build security vulnerability mitigation tools that embed this expertise in existing systems for developing, testing, and releasing code. It is important, however, to understand your existing environment, application inventory, and testing methodologies to capture tangible savings in cost and time. For many organizations, application testing relies on tools that use business logic and common coding techniques; these are notoriously error-prone and devoid of security expertise. For this reason, some firms turn to penetration testing experts and professional services, which can be a costly, manual approach to mitigation that lengthens software shipping cycles.

Use cases

Modern application security testing that is continuous and integrated with DevOps and SecOps can be transformative for business agility and security risk management. Consider these key use cases and whether your organization has embedded application security testing for each:

  • Digital Transformation – Moving applications to the cloud creates the need to re-establish security controls and monitoring. Fuzzing can uncover errors and missed opportunities to shore up defenses. Automated and integrated fuzzing can further preserve expedited software shipping cycles and business agility.
  • Securing the Supply Chain – Open Source Software (OSS) and 3rd-party applications are a common vector of attack, as we saw with Petya, so a testing regimen is a core part of a plan to manage 3rd-party risk.
  • Risk Detection – Whether building, maintaining, or refactoring applications on premises, the process and risk profile have become highly dynamic. Organizations need to be proactive to uncover bugs, holes, and configuration errors on a continuous basis to meet both internal and regulatory risk management mandates.

Platform leverage

Of course, software development and testing are about more than just tools. The process to communicate risks to all stakeholders, and to act, is where the real benefit materializes. A barrier to effective application security testing is the highly siloed way that testing and remediation are conducted. Development waits for IT and security professionals to implement the changes, slowing deployment and time to market. Legacy application security testing is ready for disruption, and the built-in approach can deliver long-awaited efficiency in the development and deployment pipeline. Digital transformation, supply chain security, and risk detection all benefit from speed and agility. Let's consider the DevOps and SecOps workflows possible on a Microsoft-based application security testing framework:

  • DevOps Continuous fuzzing built into the DevOps pipeline identifies bugs and feeds them to the continuous integration and deployment environment (i.e. Visual Studio Team Services and Team Foundation Server). Developers and stakeholders are uniformly advised of risky code and provided the option of running additional Azure-based fuzzing techniques. For apps in production that are found to be running risky code, IT pros can mitigate risks by using PowerShell and Group Policy (GPO) to enable the features of Windows Defender Exploit Guard. While the apps continue to run, the attack surface can be reduced, and connection scenarios which increase risk are blocked. This gives teams time to develop and implement mitigations without having to take the applications entirely offline.
  • SecOps – Azure-hosted containers and VMs, as well as on-premises machines, are scanned for risky applications and code, including OSS. The results inform Microsoft's various desktop, mobile, and server threat protection regimes, including application whitelisting. Endpoints can be scanned for the presence of the risky code, and administrators are informed through Azure Security Center. Mitigations can also be deployed to block the applications implicated and enforce conditional access through Azure Active Directory.
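As a hypothetical sketch of the DevOps workflow above, a pipeline step could gate deployment on the fuzzing results; the function name and report format here are assumptions, not a VSTS/TFS API:

```python
def gate_build(fuzz_report):
    """Fail the build and surface notifications when fuzzing found crashes."""
    crashes = fuzz_report.get("crashes", [])
    if crashes:
        notify = [f"risky code: {c['module']} ({c['kind']})" for c in crashes]
        return {"deploy": False, "notifications": notify}
    return {"deploy": True, "notifications": []}

report = {"crashes": [{"module": "parser.c", "kind": "heap-overflow"}]}
print(gate_build(report))             # blocked, with stakeholder notifications
print(gate_build({"crashes": []}))    # clean report deploys
```

The point is the feedback loop: findings flow automatically to both the deployment decision and the stakeholders who must remediate, rather than sitting in a siloed report.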

Cloud and AI

Machine learning and artificial intelligence are not new, but the relatively recent availability of graphics processing units (GPUs) has brought their potential to the mainstream by enabling faster (parallel) processing of large amounts of data. Our recently announced Microsoft Security Risk Detection (MSRD) service is a showcase of the power of the cloud and AI to evolve fuzz testing. In fact, Microsoft's award-winning work in a specialized area of AI called constraint solving has been 10 years in the making and was used to produce the world's first white-box fuzzer.

A key to effective application security testing is the inputs, or seeds, used to establish code paths and bring about crashes and bug discovery. These inputs can be static and predetermined or, in the case of MSRD, dynamic and mutated by training algorithms to generate relevant variations based on previous runs. While AI and constraint solving are used to tune the reasoning for finding bugs, Azure Resource Manager dynamically scales the required compute up or down, creating a fuzzing lab that is right-sized for the customer's requirements. The Azure-based approach also gives customers the choice of running multiple fuzzers in addition to Microsoft's own, so the customer gets value from several different methods of fuzzing.

The future

For Microsoft, application security testing is fundamental to a secure digital transformation. MSRD for Windows and Linux workloads is yet another example of our commitment to building security into every aspect of our platform. While our AI-based application fuzzing is unique, Microsoft Research is already upping the ante with a new project for neural fuzzing. Deep neural networks are an instantiation of machine learning that models the human brain. Their application can improve how MSRD identifies fuzzing locations and the strategies and parameters used. Integration with our security offerings is in the initial phases, and by folding in more capabilities over time we remove the walls between IT, developers, and security, making near real-time risk mitigation a reality. This is the kind of disruption that, as a platform company, Microsoft uniquely brings to application security testing for our customers, and it serves as further testament to the power of built-in.

* Gartner: Magic Quadrant for Application Security Testing published: 28 February 2017 ID: G00290926

Categories: Uncategorized Tags: