
Homeland Security, black swans, and thwarted cyberattacks

May 9th, 2018

Last week, I had the honor of addressing The Homeland Security Training Institute (HSTI) at the College of DuPage as part of the HSTI Live educational series. The event featured other prominent speakers at the forefront of cybersecurity defense, including:

Dave Tyson, CEO of CISO Insights, a global cybersecurity consultancy, and Nicole Darden Ford, Vice President and Chief Information Security Officer of Baxter Healthcare. Dave broke down complex cybersecurity issues, making them relatable to the audience, a skill he's also honed through his other business venture as CEO of cybereasylearning.com. Nicole shared her firsthand experiences with the challenges and successes of a modern CISO in the healthcare industry. Nicole has global responsibility for information security as well as information technology quality compliance and information governance.

I presented findings from the most recent Microsoft Security Intelligence Report v23, diving into the themes and specifics behind old and new malware propagated through massive botnets, phishing, and ransomware attacks. Just as importantly, I offered advice and guidance on steps organizations can take to help protect themselves and their critical assets.

It was a great set of talks that spawned a lot of interesting dialog. After the event, I was stopped by someone who asked me why our cyber defenses aren't sophisticated enough to stop all cyberattacks before they penetrate our systems. It's a fair question, especially when you consider the substantial annual investment organizations make in hardware, software, and human capital. For example, it's not uncommon for regulated and larger businesses to have teams dedicated to 24/7/365 surveillance and monitoring of their systems. Yet the bad guys still get in, plant malware, compromise proprietary information, and reveal sensitive customer data.

As I thought more deeply about the question of why we can't stop all attacks, I was reminded of Nassim Nicholas Taleb's seminal book The Black Swan: The Impact of the Highly Improbable. Taleb examines how some negative events, no matter how improbable, can cause massive consequences. The same holds true in cybersecurity, as WannaCry demonstrated. That attack cost organizations across the globe billions of dollars and made headlines for weeks. Yet far fewer people have heard of Bad Rabbit, largely because it was identified and stopped by Windows Defender Antivirus within 14 minutes, before it caused widespread damage. Catching new malware isn't easy, but layered machine learning from device to cloud, with that learning shared rapidly across systems, is helping to find and catch new malware more quickly. With Bad Rabbit, after the first device encounter, the cloud protection service used detonation-based malware classification to block the file and protect subsequent users who downloaded the dangerous file.

Another example of rapid, intelligent response spoiling a massive attack comes from March of this year. The malware, named Dofoil, was a cryptocurrency miner that exhibited advanced cross-process injection techniques, persistence mechanisms, and evasion methods. Windows Defender AV picked up on behavior-based signals to identify the infection attempts and block more than 80,000 instances of the attack within milliseconds.

What is often overlooked or unseen in all the headlines is that most of our cyber defenses are highly effective, especially when you consider the sheer number of attacks enterprises face every day. It's easy to lose sight of this when a devastating attack occurs and dominates the news narrative. Microsoft threat data, shared by partners, researchers, and law enforcement worldwide, gives a clearer picture of the massive scale of data we're regularly protecting. In a single month, using the Intelligent Security Graph, we analyze 400 billion emails, scan 1.2 billion devices and 18 billion Bing web pages, and detect 5 billion threats. Again, I'm not suggesting we catch everything malicious as billions of pieces of data and hardware are scanned. We don't. Some malware inevitably gets through our protective layers. But when you consider the scale of attacks, and the prominence of digital products and tools in enterprises, it's important to remember that we as an industry of cybersecurity professionals very often get it right. Users all over the world are accustomed to switching on their devices and safely opening hundreds of emails a day, seeing the correct balance in their mobile banking app, and trusting their GPS to accurately guide them from point A to point B. Our digital lives are deeply intertwined with our personal and work lives, and more often than not, they coexist in harmony.

In sum, it's true: the cybersecurity industry cannot claim the ability to stop all cyberattacks. But let's not overlook all of the attacks that are detected and prevented every day. The hardworking cybersecurity professionals, the same ones I shared the stage with at The Homeland Security Training Institute at the College of DuPage, are advancing our capabilities to thwart cybercrime every day. Yes, we've got work to do; this is an ongoing battle. But the wins and the ongoing work deserve to be recognized too.


Overwhelmed by overchoice at RSA Conference 2018

April 25th, 2018

As over 500 companies vied for mindshare at this year's RSA Conference – a cacophony of vendors pitching thousands of products from brightly colored booths – it reminded me of how challenging it was to separate signal from noise when I was managing global networks. And the rapid growth of vendors and solutions in the past few years makes me wonder how overwhelming the choice must seem to CISOs today.

This challenge extends well beyond the show floor of RSA. Security Operations Center (SOC) analysts parse thousands, or even millions, of alerts per day, working as quickly as possible to investigate them and determine which ones represent real threats. Enterprises need tools that can help them identify and contain threats quickly, but the SOC analyst's dilemma of too many alerts is echoed on the show floor: there are simply too many vendors and solutions to pick from. This phenomenon, known as overchoice, leads to paralysis, obstructing our ability to make confident choices and seek timely guidance. Psychologists have long studied this construct and found that, along with paralysis, the presence of too many options can even push people into decisions that work against their best interests.

As more than 50,000 RSA attendees worked their way across the conference center floor, I watched as they encountered an endless array of ever-changing acronyms, software, and hardware to address problems they probably didn't even know they had. In the quest to create and name the next generation of innovative solutions, new categories and acronyms abound, from SIM to SEM to SOAR, and from AV to EPP to EDR. Unfortunately, these new solutions can come so fast that the features blur into buzzword bingo for attendees. With IoT and the intelligent edge, there are new security scenarios for enterprises to solve for. With those come new categories of security, and new offerings flood the market. Enterprise professionals are left fighting an uphill battle across a foggy landscape.

There is a way to address all this complexity. It starts with you and your enterprise. As the person who knows your enterprise best, you are positioned to drive the decision-making process based on real-world scenarios and everyday learnings.

Vendors often try to identify problems, solve them, and hope someone needs the solutions. But every enterprise is unique, and not all threats are prioritized evenly across the board. If CISOs can assess enterprise-wide learnings and lean on the vendors to interpret and understand real-world issues, a more coherent strategy and product should emerge.

Of course, it's not always easy for enterprise CISOs to understand and prioritize their needs. If this is the case in your enterprise, third-party consultants can help assess your current security posture and forge an action plan for optimization. Once a plan is created, the buyer should drive the process and avoid the unnecessary distraction of evaluating dozens of options while trying to understand where the puzzle pieces fit together. Once CISOs understand and have prioritized their needs, they can also lean on vendors to help interpret and address those defined needs.

To better facilitate this approach, first ask, “What is the business problem I'm trying to solve?” For example, retail organizations may want to enhance their online store with customer intelligence to provide a better customer experience. What type of privacy and security will be required to do this? Will there be compliance requirements? If general themes emerge rather than more nuanced security gaps, CISOs can use a known framework, like the NIST Cybersecurity Framework. It's a useful tool for managing cybersecurity outcomes, and it covers all the verticals of cybersecurity, making it easier to adopt and join with other frameworks you might also need to incorporate into your security program.

Once you have a solid grasp of the enterprise security requirements, start to look for solutions that specifically meet those needs. As the business problems are identified and the research into solutions begins, you'll bump into those pervasive acronyms again. Don't get sucked in – resist the urge to solve for every potential problem vendors are chasing. Focus on the vendors whose solutions specifically address your enterprise's problems and meet your requirements. Ask your peers for their own firsthand experience. Ask them which solutions have or haven't worked for them. You can even ask vendors for references to speak with.

Once promising vendor solutions emerge, confirm that the solution will solve your enterprise's problem. Get proof that it will – which doesn't necessarily mean knowing every single mathematical detail of the algorithms in a solution's ML engine or reviewing each line of code. But it does mean seeing the solution in action. Demo and test-drive it, preferably in your own environment. This approach is about the buyer driving the process and staying engaged. Like most things related to our safety and security, the more engagement, the better the outcome.

These are active times in cybersecurity. The great news is that a lot of innovative, smart, and motivated companies are working hard to build intelligent solutions to thwart cyberattacks. But we're all at risk of paralysis from overchoice. Stay on target by focusing on your business problems and needs, and demand that vendors cut through the buzz to focus on proving they can deliver results. See what Microsoft presented and our latest security innovations at the RSA Conference.


Tapping the intelligent cloud to make security better and easier

April 16th, 2018

There has been a distinct shift in my conversations with customers over the last year. Most have gone from asking, “Can we still keep our assets secure as we adopt cloud services?” to declaring, “We are adopting cloud services in order to improve our security posture.” The driving factor is generally a realization that a cloud services provider can invest more in security, do the job better, and just make life simpler for overburdened enterprise IT and SecOps teams. This idea of making sound security practices easier to implement is a big part of our strategy. Today we're announcing several new technologies and programs that build on our unique cloud and intelligence capabilities to make it easier for enterprises to secure their assets from the cloud to the edge.

The first step in protecting people and data from today's dynamic threat landscape is accepting reality. It's time for us as an industry to recognize that the cloud holds so much promise for helping us solve security problems that we should consider the use of cloud-based intelligence a security imperative, not just for workloads deployed in the cloud, but also for improving the security of endpoints.

We recently released the 23rd edition of our Security Intelligence Report. The trends it uncovers help us see why the cloud is becoming a security imperative. Threats are increasingly automated and destructive. No single organization can amass the resources and intelligence to defend against these fast-moving threats. We have to tap into the power of the cloud, and of artificial intelligence, to muster the defenses required.

One of the most powerful examples of cloud-based artificial intelligence accelerating Microsoft's own security innovation is the Microsoft Intelligent Security Graph. The Intelligent Security Graph uses advanced analytics to link threat intelligence and security signals from Microsoft and partners, and it continues to grow in the variety and volume of signal. For example, we see the threat landscape through the lens of the 18 billion web pages that Bing scans, the 400 billion emails that are analyzed for spam and malware, and the 5 billion distinct malicious threats that Windows Defender ATP protects our customers against each month.

Artificial intelligence gets better as it is trained with more signal from more diverse sources. Today, we are announcing the preview of a new unified security API in the Microsoft Graph, which allows our technology partners to easily integrate with Microsoft solutions and tap into the power of the Intelligent Security Graph.
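
For partners evaluating the new API, here is a minimal sketch of what a simple integration could look like. It assumes an OAuth access token with an appropriate Graph security permission (such as SecurityEvents.Read.All) has already been acquired; the endpoint path, filter syntax, and property names reflect the Microsoft Graph Security API as publicly documented, but should be verified against the current Graph reference before use.

```python
import requests

# Assumes an OAuth 2.0 access token for Microsoft Graph, acquired out of band
# (for example via the client credentials flow) with a security-read permission
# such as SecurityEvents.Read.All.
ACCESS_TOKEN = "<access token acquired out of band>"

ALERTS_URL = "https://graph.microsoft.com/v1.0/security/alerts"


def fetch_high_severity_alerts(top: int = 10) -> list:
    """Fetch recent high-severity alerts aggregated from all integrated providers."""
    params = {"$filter": "severity eq 'high'", "$top": top}
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    response = requests.get(ALERTS_URL, headers=headers, params=params)
    response.raise_for_status()
    return response.json().get("value", [])


if __name__ == "__main__":
    for alert in fetch_high_severity_alerts():
        # Each alert records the originating provider, so signals from different
        # products surface through one common schema.
        provider = alert.get("vendorInformation", {}).get("provider", "unknown")
        print(f"{alert.get('title')} | {provider} | {alert.get('severity')}")
```

Because alerts from every integrated provider share one schema, a partner product can correlate its own signals with those from other vendors through this single call rather than a dozen point-to-point integrations.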

The Intelligent Security Graph comes to life through our platform investments, where it connects our security solutions to improve protection, detection, and response. Microsoft invests more than $1 billion in cybersecurity R&D annually, to build new security innovations into Windows, Azure, and Microsoft 365. Today we are announcing new capabilities to help our customers improve their protection against threats and, when attacked, detect and respond more quickly. We are working with partners across the industry to better integrate solutions for our customers.

Improving protection

A fundamental concern for many IT teams is the struggle to know the true security posture of the organization: Are all the necessary controls in place? Have all updates been applied? Is everything configured correctly? More importantly, it's hard to know what the next steps should be to improve security. Today we are announcing the availability of Microsoft Secure Score, which gives the IT administrator a combined view of security readiness across a broad swath of the digital estate, from Office 365 services to endpoint devices.

To get around properly configured protection, attackers often focus on deceiving end users with phishing and social engineering techniques. We have made a number of advances in our Office 365 ATP anti-phishing protection recently, and now we are adding Attack Simulator for Office 365 Threat Intelligence in Microsoft 365, so IT teams can train users to guard against phishing.

Information is the beating heart of any company, and the target of most attacks. It’s also a regulatory focus, especially with the new EU GDPR enforcement date rapidly approaching. In February, we announced a set of Microsoft 365 updates to help our customers manage compliance and protect information. As we near the GDPR enforcement date, today we are announcing several new tools and capabilities that help you respond to GDPR obligations with the Microsoft Cloud. Read more about them later today on the Office 365 blog.

Speeding up detection and response

Of course, no protection strategy can be 100% effective. Savvy customers are improving their detection and response capabilities to prepare for the inevitable breach. The Conditional Access capability built into Microsoft 365 has helped many of our customers dramatically improve their protection for tens of millions of employees, by assessing the risk of each request for access to a system, an application, or data, in real time. That risk level informs how much access is granted, according to policy set by IT.

We are extending Conditional Access to factor in post-breach response. New conditions based on continual assessment of endpoint health, not just a one-time check of configuration, enable our customers to restrict or deny access to resources if the device from which the request originates has been compromised by an attack. This new capability is in preview and will be generally available in the next Windows 10 update. Rapid detection and recovery remain out of reach for many of our customers because the specialized skills required to hunt down and eliminate adversaries are in high demand but short supply. To help IT focus its strained resources on the most important issues, we are announcing the general availability of automated remediation as part of Windows Defender ATP in the next Windows 10 update. With this new capability, Windows Defender ATP can automatically apply common remediations, freeing up the experts to work on more difficult recovery tasks.
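
To make the risk-based access model concrete, the sketch below illustrates the kind of policy evaluation described above. It is purely conceptual, not Azure AD's actual policy engine; the signal names, thresholds, and decisions are hypothetical placeholders.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REQUIRE_MFA = "require_mfa"
    LIMITED = "limited_access"
    DENY = "deny"


@dataclass
class AccessRequest:
    user_risk: str             # "low" | "medium" | "high", from sign-in risk signals
    device_healthy: bool       # continually re-assessed endpoint health, not a one-time check
    resource_sensitivity: str  # "standard" | "critical"


def evaluate(request: AccessRequest) -> Decision:
    """Toy policy engine: grant the least access consistent with the assessed risk."""
    if not request.device_healthy:
        # A compromised or non-compliant device is denied access to critical
        # resources and restricted elsewhere, mirroring the post-breach
        # condition described in the post.
        return Decision.DENY if request.resource_sensitivity == "critical" else Decision.LIMITED
    if request.user_risk == "high":
        return Decision.DENY
    if request.user_risk == "medium":
        return Decision.REQUIRE_MFA
    return Decision.ALLOW


# Example: a healthy device with a medium-risk sign-in gets stepped up to MFA.
print(evaluate(AccessRequest(user_risk="medium", device_healthy=True,
                             resource_sensitivity="standard")))
```

The key design point is that access is decided per request from live signals, so a device that falls out of a healthy state loses access without waiting for an administrator to intervene.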

Our work on detection and response extends to Microsoft Azure as well. As our customers embrace the cloud, Azure Security Center is a key tool that helps them simplify hybrid cloud security and keep pace with ever-evolving threats. Several new capabilities will be available with Security Center this week that help to identify and mitigate vulnerabilities proactively and detect new threats quickly. With the integration of Windows Defender ATP in preview, customers can get all the benefits of advanced threat protection for Windows servers in Azure Security Center.

Working across the industry

Customers who use Microsoft 365 have been taking advantage of increasingly robust tools to protect Office documents and e-mails wherever they go inside and outside the organization. Today we are extending these capabilities to our technology partners with the release of the Azure Information Protection SDK.

The benefits we can all gain from applying cloud intelligence to security problems are tremendous, but they can only be fully realized if we work together across the industry. Nearly every customer I speak to has a dozen or more different security solutions in place. Each of those solutions plays a critical role in protecting the organization, and each has valuable contextual information that would help make the others more effective at protecting customers. Today we are announcing the Microsoft Intelligent Security Association, a group of technology providers who have integrated their solutions with Microsoft products to provide customers better protection, detection, and response. Anomali, Check Point, Forcepoint, Palo Alto Networks, and Ziften are among the solution providers working with us. Together, we can bring more signals from more sources to bear, which helps our customers detect and respond to threats faster.

We also continue to work with a broad coalition of technology partners in the FIDO Alliance to address one of the most fundamental issues in security today: identity and access management. Our analysis indicates that cloud-based user account attacks are up more than 300% over the past year. Passwords are the weakest link, and they are a source of frustration for users. Today we are announcing an important step in our work to lead the industry toward a future without passwords: support for the FIDO 2.0 standard in the next Windows 10 update. Millions of Windows 10 users already have the ability to sign in to Windows without a password using Windows Hello, making authentication stronger and easier. With FIDO 2.0 support, users can take that same password-free authentication experience to any Windows 10 device managed by their organization.

The evolution of the intelligent edge

At Microsoft, we believe the intelligent cloud and intelligent edge will shape the next phase of innovation. The rise of Internet of Things deployments amplifies security challenges, because many devices lack the tools to manage updates or detect and respond to attacks. Building on research done by Microsoft AI and Research, and on decades of Microsoft experience and expertise in silicon, software, and cloud security, today we are announcing the preview of Azure Sphere. Azure Sphere extends our reach to the outer regions of the intelligent edge, enabling us to serve and secure an entirely new category of devices: the billions of MCU-powered devices that are built and deployed each year.

It's an exciting time to be working in security. We are joining forces with other security solution providers and using the cloud to our customers' advantage. Together, we can turn the tide against attackers. We are at the RSA Conference this week and look forward to discussing these new capabilities with you. Visit Microsoft.com/RSA to learn where you can find us.

 


Investing in the right innovation

April 10th, 2018

RSA is around the corner, which means tens of thousands of people will descend on the Moscone Center in San Francisco, CA. Hundreds of innovative young companies will look for customers, props, and capital (especially at the Early Stage Expo!). Venture capitalists will look for opportunities to invest and find the next $1B IPO. Larger companies may well search for IP to complement larger platforms. CISOs will show up looking for solutions to today's problems, with an eye toward tomorrow's, and ask two key questions: What in this expo hall will help me better protect my company? And what can I take OUT of my portfolio in exchange?

Considering this, I contacted several VC and tech sector colleagues to test an assertion in my most recent blog, which stated that perhaps the kind of innovation we're likely to see at RSA can offer too much of a good thing when it comes to CISOs' priorities. Is the market ready for all this innovation? Are there enough dollars available? Is the innovation meeting CISOs' real needs?

Looking at the exhibitor list, and searching by core topic, it's going to be exciting, yet challenging, to determine which companies are truly innovating and competitive in these crowded marketplaces. A quick look also tells us where most of the attention is, and where it isn't. The Analytics, Intelligence and Response, and Machine Learning categories turn up hundreds of companies, as expected given all the financial support into, and buzz around, these fields. We should expect to see many claims of best-in-class cyber defense products. However, I suspect there is growing skepticism about vendors' claims to have the best ML/AI-driven 0-day finder. I encourage vendors to be prepared to articulate the true capabilities of the ML and AI engines that drive their solutions: By what standards can we evaluate the strength of the algorithms and engines? Can they scale, integrate into, and play nicely with a customer's existing toolset? No doubt, ML and AI will continue to improve and become more central to security, but early innovations here have probably created what one contact called a “swarm effect” that has promoted the rise of duplicative technologies. Vendors should also be aware that there are probably too many companies chasing too few CISO dollars, and there is bound to be consolidation. On the investor side, I suspect ML/AI fatigue is setting in. A few VCs have said they're pretty much done putting money into this area until it shakes out.

Perhaps CISOs can nudge the security and investor communities into using ML and AI to develop more foundational preventative solutions. These might include more secure-by-design hardware and software architectures, self-aware and self-healing systems, and smart-configuration and smart-patching solutions. One CTO colleague relayed that he's seen excellent presentations and proposals on self-healing computational models and systems, but unfortunately few VC-funded companies are moving beyond research into development and commercialization, partly because so much attention is on APT-hunting shiny objects. Until the community is incentivized to move into these areas, the current assume-breach, detect-and-respond model will dominate how cybersecurity is practiced and commercialized.

As another example, look at blockchain and cryptocurrency, two leading-edge investment areas. Is commensurate work being done to update the underlying cryptographic algorithms and protocols, some of which date back to 1982? Quantum-resilient cryptography and homomorphic encryption are areas that probably haven't received the level of financial support they deserve, outside of DARPA or other government programs.

Getting back to CISOs' priorities, the consistent theme was how to make the best use of people and existing tools:

  • Training: According to Gartner, this CBT/CET market will reach $7.2B by 2019. We know that we're facing a shortage of up to 2M qualified cyber professionals. Unfortunately, this year's conference doesn't seem to reflect the market opportunity or interest in addressing such core challenges. I queried the Human Element and Professional Development topics in the RSA exhibitor list and turned up only 57 and 19 companies, respectively, with booth presence this year. I hope at least their booths are crowded and that they succeed. We need more innovation in people. Machines will have to do more and more of the work, but in the end, people deploy, monitor, and interact with the technology that is protecting their systems. We must be more innovative in how we train people and encourage others to join the field. The better we train personnel to monitor and improve the performance of their cyber systems, the closer we get to a virtuous loop: trained people continuously optimizing the machines that will be required to handle more of the configuration, deployment, monitoring, detection, and remediation workloads.
  • ROI: We need to invest more in tools that help CISOs use their existing tools better. One VC colleague pointed to a recent investment his firm made in a company whose solution measures the effectiveness of third-party security tool implementations. Who's watching the watchers? In my opinion, this is a very clever example of the type of virtuous cyber loop we could create. Another VC contact uses the analogy of the industry delivering too many cyber drugs to treat the same symptoms; what his firm wants to see is investment in more doctors and nurses to more effectively administer the treatment, get to the root cause, and save the patient.

I support many public sector CISO teams in the US and Europe. What do I think they'll be looking for at RSA? With an eye on ML/AI innovation, I think they'll be just as interested in tools that offer improvements to the messy hygiene work of security: automated and self-learning configuration, inventory analysis, and update management tools, and anything that helps their people improve how they manage their responsibilities. Given uncertain budgeting and the continual need to maintain and adhere to compliance mandates, they'll also look for solutions that help improve and speed up the path to staying as green as possible on a scorecard. Perhaps the excitement around advanced sciences and big data will dominate the RSA agenda, but I expect and encourage CISOs to push innovators for solutions that get to the core of their day-to-day challenges.

If you're an investor, or if you're an innovator looking for what could be next year's breakout opportunity, think about investing in the people who will deliver on your goals.


Announcing: new British Standard for cyber risk and resilience

April 4th, 2018

Technology is an integral part of the fabric of everyday life. There is almost no organization that does not rely on digital services in some way in order to survive. The opportunity that technology provides also brings with it more vulnerabilities and threats as organizations and data become more connected and available. This trend exposes a common gap in the decision-making process at large organizations. Often, information security and cybersecurity have been viewed as a function of IT, and therefore information security departments have been managed outside of normal business decision-making processes. This is an approach we no longer have the luxury of indulging.

Organizations need a holistic approach to implementing digital transformation projects in order to safeguard their security. This involves focusing on both the opportunity and the threat of any change. To do this effectively, accountability for cyber risk and resilience needs to sit firmly with executive management and the governing body. However, a skills gap exists at this level, with many governing body members having started their careers before the internet era. Even when willing to adopt responsibility for building a cyber resilient organization, senior executives are often confused by the technical language that risk management and cybersecurity professionals speak. They may also encourage cybersecurity professionals to speak directly to the board. Therefore, we also need to equip board members with the tools to ask the right questions and to ensure the right level of risk is being taken to build cyber resilient organizations.

That is why, nearly two years ago, the BSI Risk Management Committee started working to develop new guidance aimed at helping executive leadership better understand and manage the technology risks to their organizations. I was asked to lead a group of government executives, regulators, professional bodies, and technical experts with the goal of directly addressing the realities and challenges of managing cyber risk in a digital world. This goal led us to draft the new British Standard BS 31111. The standard aims to provide guidance to enterprise organizations on cyber risk and resilience, and to address the gap in IT decision making.

The standard includes:

  1. Parameters for building concrete guidelines for governing bodies
  2. Identification of areas of focus an organization should have in order to build a cyber resilient enterprise
  3. Assessment questions management can ask to challenge the organization regarding how it is building cyber resilience into the business

Cyber risk and resilience need to be driven from the top of the organization to ensure that the right culture is set across all business decision making. Executive management must ensure that there is a clear risk and resilience strategy set across the organization, as well as a strong management structure in place that details the responsibilities and expectations of everyone in maintaining security. As Microsoft's own CEO Satya Nadella has said, “Cybersecurity is like going to the gym. You can't get better by watching others; you've got to get there every day.” Satya's comments underline the reasoning behind this standard, emphasizing the need to build cyber resilience into day-to-day operations and not treat it as a standalone project or program.

Engaging with risk management and cyber resilience principles can be complicated and it is easy to get bogged down by technical jargon. To help, we created a visual (figure 1) intended to illustrate the areas required to develop cyber resilience and the key responsibilities of the board.

Source: BS 31111:2018, Figure 1

Key tenets:

  • The responsibility of any Board of Directors is to clearly set the direction of business activity. They ultimately sign off on major decisions and investments and need to ensure that activity is sustainable for the business.
  • Executive management and the governing body are mostly responsible for the roof and foundation, with oversight on the activity of the pillars. Any building is only as good as its foundation and the same is true for building cyber resilience.

The importance of culture for security

Without a strong culture of security, it is easy for decisions to be made that expose an organization. Many of the major breaches witnessed in recent years can be traced back to a lack of ownership and leadership regarding the need for strong cybersecurity measures across the organization, along with ill-informed investment decisions. Executive management and members of the board need to focus clearly on both the benefits of any digital investment AND the level of security outcomes required to support that investment. Hopefully, the new British Standard BS 31111 will provide best practice aims and expectations for the responsibility and accountability of boards and executive leadership to drive change.

The publication of the standard is only the first step. It will be important to promote the need for every organization to safeguard their enterprise and their customers, more than we do today. Many boards and governing bodies are becoming more cyber aware and understanding their need to build cyber risk into their decision making. This publication aims to enable leadership teams and boards to build awareness and decision-making protocols across the organization.

In my short tenure with Microsoft, I have already witnessed a strong internal security culture, focused on building resilient and secure cloud platforms. I look forward to working with my customers to help them develop their own cyber resilient foundations and cultures, ensuring that Microsoft's capabilities support them in that endeavor.


Siân serves as Executive Security Advisor for the UK at Microsoft and has worked in the information security industry for over 20 years. Siân is a highly requested public speaker and has regularly appeared on national radio and television, including the BBC and Sky News, talking about security issues. Siân was appointed an MBE by the Queen in the New Year's Honours List for 2018 for services to cyber security.


Working towards a more diverse future in security

March 28th, 2018

Last year I embarked on an exercise to examine diversity in cybersecurity. Now that a full year has passed, I decided to revisit this topic and the ongoing challenges of recruiting AND retaining diverse talent in the cybersecurity field. This past year saw the #MeToo movement in the spotlight, and while women's issues were brought to the forefront, there are still opportunities to improve. I want to share new learnings based on my experiences this year as an update to my earlier post, How to solve the diversity problem in security.

Two personal interactions that are top of mind reinforced my belief that there is much work to be done. If you follow me on Twitter (@ajohnsocyber), you know I commented on both at the time they occurred. In one instance, I was interviewing a candidate for a role in my organization. We were discussing MFA, and he felt the need to stop me, educate me, and inform me of the error of my thinking. I don't claim to be a subject matter expert on all topics related to cybersecurity, no one could be, but I know a fair bit about MFA. His dismissive tone and attitude certainly did not set the right tone for an interview. The second incident occurred whilst I was presenting to a large group of customers. A male colleague interrupted me to say, “What she meant to say was…” Actually, what I meant to say was exactly what I said, but thank you for that moment of classic mansplaining. You see, no matter your rank, role, position, or expertise, there are still those who choose to minimize your knowledge, expertise, or experience. While I cannot definitively say these two incidents occurred because I am a woman, I can tell you the candidate feedback from male interviewees was not the same, and the man in question did not interrupt male speakers at the same event where he interrupted me.

So, as I revisit this blog post for 2018, I also want to highlight some really positive events of the past 12 months. Microsoft believes in diversity 365 days a year, and we demonstrate it with solid actions. I am inspired not only by the women leaders in our organization, but also by our strong male allies who advocate for recruiting and promoting diverse talent. We simply cannot accomplish this work without the support of male allies. I am fortunate, at Microsoft, to actively and frequently work with a large group of well-known security professionals, including many talented women. I look forward to meeting and working with many more who are surely part of this company now or who will be compelled to join. We continue to invest in talent that challenges the way we think, talent that changes the organization, talent that truly embraces the “learn it all, not know it all” culture our CEO Satya Nadella has built.

So, whilst as an industry we have a long road ahead of us to fully embrace diversity, we have planted the seeds. In my thirty years in tech, I have never felt the energy or seen this level of commitment and passion toward inclusion. I am proud to be part of the solution and fully committed to helping steer the ship.


Filling the gaps in international law is essential to making cyberspace a safer place

March 27th, 2018

A month ago, on the sidelines of the Munich Security Conference, Microsoft organized an expert workshop to discuss gaps in international law as it applies to cyberspace. We were fortunate enough to bring together twenty leading stakeholders, including international legal experts, United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UNGGE) delegates, diplomats, and non-governmental organizations (NGOs). Together, we looked at the current situation in cybersecurity norms and international law, and we discussed possible paths forward. What emerged was a significant consensus on both the need to restructure cybersecurity discussions globally and the necessity of implementing the 2015 UNGGE report.

Gaps in international law were the focus of discussion, and although several areas of concern were identified on the basis of recent cyberattacks, the most significant challenge was seen as structural: the lack of an international organization or other venue for addressing the cyber threat landscape of today and tomorrow.

The challenge of the cyber threat landscape is not simply that it is always evolving, nor that it is continually extending its reach into the day-to-day existence of citizens, businesses, and governments. The greatest challenge is that when it comes to dealing with cyber threats the world currently lacks:

  • A place where victims of nation-state or state-sponsored cyberattacks are able to go to get help after an incident has occurred;
  • A standing body or registry that enables ongoing learning about the known threats to people and infrastructure, as well as their corresponding responses;
  • A common basis for judging not just if international law has been violated but how;
  • A consistent basis for the use of international law in prevention of cyberattacks and for enforcement of law following such attacks.

In other words, the world lacks a common space for finding out the facts about cyberattacks, for learning from others, for interpreting laws and for agreeing who did what to whom. That last point, the attribution of responsibility for cyberattacks, fundamentally underpins the concept of applying international law to cyberspace: if we cannot know who is responsible for a cyberattack we cannot hold them to account.

It may be unrealistic to expect a single silver bullet organization for all aspects of the problem. Nonetheless, there were many at the workshop and, indeed, across the Munich Security Conference who agreed in broad terms that not having some kind of international, non-governmental platform focused on cyberspace (enabling best practice, exchanging information, examining the forensics around the attacks) will undermine future efforts to protect civilians in cyberspace.

Certainly, there are other things that also need to be done to protect civilians and civilian infrastructure from cyberattack by states. Rolling out the 2015 UNGGE's proposed norms of state behavior is one such thing, because it will help governments manage the realpolitik of holding each other to account. The recent case of Sergei Skripal shows that even when there is a will to act, the options for constraining a sovereign state are comparatively limited. Even an incremental improvement in state behavior in cyberspace through applying the 2015 UNGGE suggestions would therefore be a positive step. After all, today states are choosing not to invoke international law following cyberattacks, perhaps because there is uncertainty about those laws or perhaps because there is a belief that doing so will neither prevent future attacks nor result in any kind of remediation.

The workshop was a very valuable opportunity for Microsoft, and for me personally. By bringing governments, civil society, technical experts and business people together, it fostered exactly the kind of multi-stakeholder discussion that the future of cyberspace depends upon. The outputs of that discussion, especially the general view that a non-governmental international organization is needed, are something that my colleagues and I will certainly look to build on in the coming months. Furthermore, I am hopeful that such an organization will emerge, with time, and that there will be a genuine interest and impetus amongst the public and private sectors to use it. If they do so, they will help to make international law stronger in cyberspace, even in the face of state-sponsored cyberattacks. If that happens then the world will have taken an important step towards making cyberspace a safer and more stable place.


The role that regions can and should play in critical infrastructure protection

March 5th, 2018

Today's report, Critical Infrastructure Protection in Latin America and the Caribbean 2018, developed in partnership between Microsoft and the Organization of American States (OAS), demonstrates the value of regional cooperation in global efforts to increase the security of the online environment where it matters most. It acknowledges that, rather than focusing on “all politics is local” or living in a “global village,” regions have a key role to play in formulating policies and delivering outcomes for cybersecurity in general, and critical infrastructure protection (CIP) in particular.

“Glocalization,” a buzz phrase from the turn of the millennium, seems well suited to cybersecurity, given the Internet's simultaneously global reach and local impact. This duality is important to keep in mind when considering that protecting increasingly connected critical infrastructure is a challenge for nations all over the world, and it poses the question of whether the same solutions can be applied across the varied landscapes in which we operate. Regional elements are important in that context, as they provide us with an opportunity to investigate whether the solutions to global cybersecurity challenges need to be tailored to a particular context to be effective, whilst at the same time allowing us to retain a level of scale.

The latter comes about, as even allowing for the global nature of the online environment, we need to recognize that culture, geography, as well as economic relations and trade, are likely to result in a greater level of interconnectivity between neighboring states than far-flung places on opposite sides of the world. In the world of CIP, this means we are more likely to see the same provider operate across two countries in the same region, the same threat actor target linguistically-linked entities, and the consequences of the same cyber-attacks spill across borders.

Close communication and information sharing amongst and between the different regional stakeholders involved in CIP are therefore even more important. This report makes it clear that policymaking in the age of the Internet needs governments working alongside private industry to deliver effective results, leveraging the respective expertise and capabilities of the two groups. But it also reminds us that regional dialogue, as well as bilateral discussions between neighboring states, and even between private sector operators in adjacent jurisdictions, helps protect us all.

Increased communication and new regional partnerships are only a few of the recommendations that the report puts forward. It also issues a call for risk management to be placed at the center of any CIP initiative, as well as for a move from cybersecurity towards cyber resilience. Moreover, and particularly relevant to the region of Latin America and the Caribbean, the report encourages a holistic approach to CIP at the national level, with governments urged to put forward cybersecurity frameworks, guidelines, and baselines for operators that are outcomes-focused and can withstand the quick pace of technological evolution. Similarly, the report recognizes the need to ensure a clear division of responsibilities in cybersecurity, and a dedicated effort to foster trust between the different entities and stakeholders that must be involved in protecting critical infrastructure.

The examples of global best practices that the report lays out will be recognizable to anyone with experience in the sector. Yet, the report goes a step further by placing these familiar practices in a regional context through the results of an innovative survey of CIP stakeholders across the region. At the global level, we might take for granted the logic behind why we engage in multi-stakeholder dialogue, or why a clear division of responsibilities is so important in modern technology. The survey shows that even in a region where very few CIP frameworks exist, public-private partnerships, within and across countries, have begun to emerge organically and are valued.

At the same time, the survey helps reinforce how much is still to be done on cybersecurity globally. To highlight just one example, almost half of the over 500 respondents, who are trying to protect the most vital national assets in Latin America and the Caribbean, have not yet endorsed risk management. How can the private sector and governments with advanced risk management capabilities best support capacity building in regions of the world trying to protect the infrastructure underpinning their societies, governments, and economies? I believe that this report is the beginning of a dialogue and roadmap for risk reduction.


Best practices for securely moving workloads to Microsoft Azure

February 26th, 2018

Azure is Microsoft's cloud computing environment. It offers customers three primary service delivery models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Adopting cloud technologies requires a shared responsibility model for security, with Microsoft responsible for certain controls and the customer for others, depending on the service delivery model chosen. To ensure that a customer's cloud workloads are protected, it is important that they carefully consider and implement the appropriate architecture and enable the right set of configuration settings.

Microsoft has developed a set of Azure security guidelines and best practices for our customers to follow. These guides can be found in the Azure security best practices and patterns documentation. In addition, we're excited to announce the availability of the Center for Internet Security's (CIS) Microsoft Azure Foundations Security Benchmark, developed in partnership with Microsoft. CIS is a non-profit entity focused on developing global standards and recognized best practices for securing IT systems and data against the most pervasive attacks.

The CIS Microsoft Azure Foundations Security Benchmark provides prescriptive guidance for establishing a secure baseline configuration for Microsoft Azure. Its scope is designed to assist organizations in establishing the foundation level of security for anyone adopting the Microsoft Azure cloud. The benchmark should not be considered as an exhaustive list of all possible security configurations and architecture but as a starting point. Each organization must still evaluate their specific situation, workloads, and compliance requirements and tailor their environment accordingly.

The CIS benchmark contains two levels, each with slightly different technical specifications:

  • Level 1: Recommended, minimum security settings that should be configured on any system and that should cause little or no interruption of service or reduced functionality.
  • Level 2: Recommended security settings for highly secure environments that could result in some reduced functionality.

The CIS Microsoft Azure Foundations Security Benchmark is divided into the following sections, with the number of recommended controls in each:

  • Identity and Access Management (23 controls): recommendations related to setting the appropriate identity and access management policies.
  • Azure Security Center (19 controls): recommendations related to the configuration and use of Azure Security Center.
  • Storage Accounts (7 controls): recommendations for setting storage account policies.
  • Azure SQL Services (8 controls): recommendations for securing Azure SQL Servers.
  • Azure SQL Databases (8 controls): recommendations for securing Azure SQL Databases.
  • Logging and Monitoring (13 controls): recommendations for setting logging and monitoring policies on your Azure subscriptions.
  • Networking (5 controls): recommendations for securely configuring Azure networking settings and policies.
  • Virtual Machines (6 controls): recommendations for setting security policies for Azure compute services, specifically virtual machines.
  • Other Security Considerations (3 controls): recommendations regarding general security and operational controls, including those related to Azure Key Vault and Resource Locks.

In total, the benchmark contains 92 recommendations.

 

Each recommendation contains several sections, including a recommendation identification number, title, and description; level or profile applicability; rationale; instructions for auditing the control; remediation steps; impact of implementing the control; default value; and references. As an example, the first control in the benchmark falls under the Identity and Access Management section and is titled: 1.1 Ensure that multi-factor authentication is enabled for all privileged users (Scored). A control is marked as Scored or Not Scored based on whether it can be programmatically tested. In this case, recommendation 1.1 can be audited using the Microsoft Graph and a PowerShell cmdlet; the specific steps are contained in the “Audit” section of that recommendation. The recommendation is listed as a Level 1 control because it applies only to Azure administrative users and would not have a company-wide impact or reduce functionality for other users. The rationale for recommendation 1.1 is that Azure administrative accounts need extra protection because of their powerful privileges: with multiple factors required for authentication, an attacker would have to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk to the Azure tenant.
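
For teams that want to script this kind of audit themselves, the sketch below shows one possible approach using the Microsoft Graph REST API from Python rather than PowerShell. It enumerates members of activated directory roles and cross-checks them against the authentication methods registration report; the report path and property names (such as isMfaRegistered) are assumptions that should be verified against the current Graph documentation and your tenant's licensing before relying on the output.

```python
import requests

# Assumes an access token for Microsoft Graph with permissions such as
# Directory.Read.All (for role membership) and Reports.Read.All (for the
# authentication methods registration report).
ACCESS_TOKEN = "<access token acquired out of band>"
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}


def get_values(path: str) -> list:
    """GET a Graph collection and return its 'value' array (paging omitted for brevity)."""
    response = requests.get(f"{GRAPH}{path}", headers=HEADERS)
    response.raise_for_status()
    return response.json().get("value", [])


def privileged_user_ids() -> set:
    """Collect user members of every activated directory role (Global Administrator, etc.)."""
    ids = set()
    for role in get_values("/directoryRoles"):
        for member in get_values(f"/directoryRoles/{role['id']}/members"):
            if member.get("@odata.type", "").endswith(".user"):
                ids.add(member["id"])
    return ids


def privileged_users_without_mfa() -> list:
    """Cross-check privileged users against the MFA registration report.
    NOTE: the report path and the isMfaRegistered property are assumptions;
    verify them against current Microsoft Graph documentation."""
    privileged = privileged_user_ids()
    report = get_values("/reports/authenticationMethods/userRegistrationDetails")
    return [entry.get("userPrincipalName")
            for entry in report
            if entry.get("id") in privileged and not entry.get("isMfaRegistered")]


if __name__ == "__main__":
    for upn in privileged_users_without_mfa():
        print("Privileged user without MFA registered:", upn)
```

Whatever tooling is used, the point of a Scored control is that this check can run on a schedule, so drift from the benchmark surfaces automatically rather than only during a manual review.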

The benchmark is freely available in PDF format on the CIS website.

You can also find more information on Azure Security Center and on Azure Active Directory. Both are critical solutions to securely deploying and monitoring Azure workloads and are covered in the new CIS benchmark.


How to mitigate rapid cyberattacks such as Petya and WannaCrypt

February 21st, 2018

In the first blog post of this 3-part series, we introduced what rapid cyberattacks are and illustrated how rapid cyberattacks are different in terms of execution and outcome. In the second blog post, we provided some details on Petya and how it worked. In this final blog post, we will share:

  • Microsoft's roadmap of recommendations to mitigate rapid cyberattacks.
  • Outside-in perspectives on rapid cyberattacks and mitigation methods based on a survey of global organizations.

Because of how critical security hygiene issues have become, and how challenging it is for organizations to follow the guidance and the many recommended practices, Microsoft is taking a fresh approach to solving them. Microsoft is working actively with NIST, the Center for Internet Security (CIS), DHS NCCIC (formerly US-CERT), industry partners, and the cybersecurity community to jointly develop and publish practical guides on critical hygiene and to implement reference solutions, starting with these recommendations on rapid cyberattacks as they relate to patch management.

Roadmap of prescriptive recommendations for mitigating rapid cyberattacks

We group our mitigation recommendations into four categories based on the effect they have on mitigating risk:

  • Exploit mitigation: mitigate software vulnerabilities that allow worms and attackers to enter and/or traverse an environment.
  • Business continuity / disaster recovery (BC/DR): rapidly resume business operations after a destructive attack.
  • Lateral traversal / securing privileged access: mitigate the ability to traverse (spread) using impersonation and credential theft attacks.
  • Attack surface reduction: reduce critical risk factors across all attack stages (prepare, enter, traverse, execute).

Figure 1: Key components of mitigation strategy for rapid cyberattacks

We recognize every organization has unique challenges and investments in cybersecurity (people and technology) and cannot possibly make every single recommendation a top or immediate priority. Accordingly, we have broken down the primary (default) recommendations for mitigating rapid cyberattacks into three buckets:

  1. Quick wins: what we recommend organizations accomplish in the first 30 days
  2. Less than 90 days: what we recommend organizations accomplish in the medium term
  3. Next quarter and beyond: what we recommend organizations accomplish in the longer term

The following is our list of primary recommendations on how to mitigate these attacks.

Figure 2: Microsoft's primary recommendations for mitigating rapid cyberattacks

This list has been carefully prioritized based on Microsoft's direct experience investigating (and helping organizations recover from) these attacks, as well as collaboration with numerous industry experts. This is a default set of recommendations and should be tailored to each enterprise based on defenses already in place. You can read more about the details of each recommendation in the slide text and notes of the published slide deck.

In prioritizing the quick wins for the first 30 days, the primary considerations we used are:

  1. Whether the measure directly mitigates a key attack component.
  2. Whether most enterprises could rapidly implement the mitigation (configure, enable, deploy) without significant impact on existing user experiences and business processes.

Figure 3: Mapping each recommendation into the mitigation strategy components

In addition to the primary recommendations, Microsoft has an additional set of recommendations that could provide significant benefits depending on circumstances of the organization:

  1. Ensure outsourcing contracts and SLAs are compatible with rapid security response
  2. Move critical workloads to SaaS and PaaS as you are able
  3. Validate existing network controls (internet ingress, internal Lab/ICS/SCADA isolation)
  4. Enable UEFI Secure Boot
  5. Complete SPA roadmap Phase 2
  6. Protect backup and deployment systems from rapid destruction
  7. Restrict inbound peer traffic on all workstations
  8. Use application whitelisting
  9. Remove local administrator privileges from end-users
  10. Implement modern threat detection and automated response solutions
  11. Disable unneeded protocols
  12. Replace insecure protocols with secure equivalents (Telnet to SSH, HTTP to HTTPS, etc.)

There are specific reasons why these 12 recommendations, although helpful for certain organizations and circumstances, were excluded from the list of primary recommendations. You can read about those reasons in the slide notes of the published slide deck, if interested.

Outside-in perspectives on rapid cyberattacks and mitigation methods

In late November 2017, Microsoft hosted a webinar on this topic and solicited feedback from the attendees, who comprised 845 IT professionals from small organizations to large global enterprises. Here are a few interesting insights from the poll questions.

Rapid cyberattack experience

When asked if they had experienced a rapid cyberattack (e.g. WannaCrypt, Petya, or other), ~38% stated they had.

Awareness of SPA roadmap

When asked if they were aware of Microsoft's Securing Privileged Access (SPA) roadmap, most (66%) stated that they were not.

Patching systems

When we asked within how many days (<7, 30, or 90) they could patch various systems, most respondents believed their teams patch quickly:

  • 83% can patch workstations within 30 days; 44% within 7 days
  • 81% can patch servers within 30 days; 51% within 7 days
  • 54% can patch Linux/Other devices within 30 days; 25% within 7 days

Removal of SMBv1

When asked where they are on the path towards removing SMBv1, 26% said they have completed removing it, another 21% said they are in the process of doing so, and ~18% more are planning to do so.
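
For organizations still partway down that path, here is a minimal sketch of how an administrator might audit (and optionally set) the server-side SMB1 registry value documented in Microsoft's SMBv1 guidance, using Python's standard winreg module. Treat it as an illustration under assumptions, not official tooling: it assumes a Windows host, that the LanmanServer SMB1 value governs the server component on your build, and that removing the SMB1Protocol optional feature remains the preferred approach on current Windows versions.

    # audit_smb1.py - illustrative sketch: check whether the SMBv1 server component
    # has been explicitly disabled via the documented LanmanServer registry value.
    # Assumes Windows and Python 3; changing the value requires administrator rights
    # and takes effect after the Server service restarts.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    def smb1_server_enabled() -> bool:
        """Return True unless SMB1 has been explicitly set to 0 (disabled)."""
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            try:
                value, _ = winreg.QueryValueEx(key, "SMB1")
                return value != 0
            except FileNotFoundError:
                return True  # value absent: protocol not explicitly disabled here

    def disable_smb1_server() -> None:
        """Set SMB1 = 0 (disabled) under the LanmanServer parameters key."""
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "SMB1", 0, winreg.REG_DWORD, 0)

    if __name__ == "__main__":
        print("SMBv1 server explicitly disabled:", not smb1_server_enabled())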

Adopting roadmap recommendations

When asked what is blocking them from adopting Microsofts roadmap recommendations for securing against rapid cyberattacks, the top three reasons respondents shared are:

  1. Lack of time
  2. Lack of resources
  3. Lack of support from upper management/executive buy-in

To help organizations overcome these challenges, Microsoft can be engaged to:

  • Assist with implementing the mitigations described in SPA Roadmap and Rapid Cyberattack Guidance.
  • Investigate an active incident with enterprise-wide malware hunting, analysis, and reverse engineering techniques. This includes providing tailored cyberthreat intelligence and strategic guidance to harden the environment against advanced and persistent attacks. Microsoft can provide onsite teams and remote support to help you investigate suspicious events, detect malicious attacks, and respond to security breaches.
  • Proactively hunt for persistent adversaries in your environment using similar methods as an active incident response (above).

Contact your Microsoft Technical Account Manager (TAM) or Account Executive to learn more about how to engage Microsoft for incident response.

More information

We hope you found this 3-part blog series on rapid cyberattacks, and our recommendations for mitigating them, useful.

For more information and resources on rapid cyberattacks, please see the additional links here:

On-demand webinar Protect Against Rapid Cyberattacks (Petya, WannaCrypt, and similar).

Additional resources

Tips to mitigate known rapid cyberattacks with Windows 10 (and Windows Defender Advanced Threat Protection):

Mitigate backup destruction by ransomware with Azure Backup security features

Detect leaked credentials in Azure Active Directory

Rapidly detect polymorphic and emerging threats and enable advanced protection with Windows Defender Antivirus cloud protection service (formerly Microsoft Active Protection Service (MAPS))

Apply network protection with Windows Defender Exploit Guard

Safeguard integrity of privileged accounts that administer and manage IT systems by considering the Securing Privileged Access (SPA) roadmap

Mitigate risk of lateral escalation and Pass-the-Hash (PtH) credential replay attacks with the Local Administrator Password Solution (LAPS)

Mitigate exploitation of the SMBv1 vulnerability by Petya or other rapid cyberattacks by following guidance on disabling SMBv1

 


How a national cybersecurity agency can help avoid a national cybersecurity quagmire

February 19th, 2018 No comments

This last October we saw more countries than ever participate in initiatives to raise cybersecurity awareness. What was once largely a US approach has evolved into events and initiatives around the world by governments, civil society groups, and private sector partners. This increased breadth and depth of activity reflects governments' increased understanding of the importance of cybersecurity, not only for their operations but for the lives of their citizens. My team's research indicates that today over half of the world's countries are leading some sort of national-level initiative for cybersecurity, with countless other efforts at sectoral, state, city, or other levels.

However, developing effective approaches to tackling cybersecurity at a national level isn't easy, especially if they are going to have widespread or long-lasting effects. The complexity of developing approaches for an issue that truly touches all aspects of the modern economy and society cannot be overstated, and if approached in the wrong way the effort can create a quagmire of laws, bodies, and processes. The different aspects of cybersecurity, such as promoting online safety, workforce skills development, and critical infrastructure protection, all cut across an unprecedented range of traditional government departments, from defense and foreign affairs to education and finance. Effectively, cybersecurity is one of the first policy areas that challenges traditional national governance structures and policy making. It is unlikely to be the last, with issues such as artificial intelligence hard on its heels.

To deal with this challenge, governments are exploring new governance models. Some countries have created a dedicated department within a particular ministry, such as India. Others have looked at extending the work traditionally done by the police or a national computer security incident response team, such as Malaysia. Moreover, countries as diverse as Australia, France, Brazil, Indonesia, Tanzania, Belarus, Israel, and Singapore, already have specific bodies of government responsible for cybersecurity.

However, despite the fact that many countries have already taken steps to establish or strengthen their own cybersecurity bodies, no single optimum model can be pointed to. The reasons are many, from different governance setups, to varying levels of investment and expertise available, to the fact that dealing with cybersecurity is a relatively new endeavor for governments.

Taking this variety into account, and coupling it with our own perspective and experience, Microsoft has collected good practices that we believe can support national engagement on cybersecurity. Today we are releasing a new whitepaper: Building an Effective National Cybersecurity Agency. Its core insights center on the following set of recommendations, which can help governments avoid becoming bogged down in cybersecurity challenges that are otherwise avoidable:

  1. Appoint a single national cybersecurity agency. Having a single authority creates a focal point for key functions across the government, which ensures policies are prioritized and harmonized across the nation.
  2. Provide the national cybersecurity agency with a clear mandate. Cybersecurity spans different stakeholders with overlapping priorities. Having a clear mandate for the agency will help set expectations for the roles and responsibilities and facilitate the intra-governmental processes.
  3. Ensure the national cybersecurity agency has appropriate statutory powers. Currently, most national cybersecurity agencies are established not by statute but by delegating existing powers from other parts of government. As cybersecurity becomes an issue for national legislatures, agencies might have to be given clear ownership of implementation.
  4. Implement a five-part organizational structure. The five-part structure we propose in the paper allows for a multifaceted interaction across internal government and regulatory stakeholders, as well as external and international stakeholders, and aims to tackle both regulatory and other cybersecurity aspects.
  5. Expect to evolve and adapt. Regardless of how the structure of the national cybersecurity agency begins, the unavoidability of change in the technology and threat landscape will require it to evolve and adapt over time to be able to continue to fulfill its mandate.

As the challenges and opportunities that come as a result of ICT proliferation continue to evolve, governments will need to ensure they are sufficiently equipped to face them, both today and in the future. Bringing together diverse stakeholders across different agencies, such as defense, commerce, and foreign affairs, and backgrounds, including those from law, engineering, economics, and policy, will enable our society to both deal with the threats and harness the opportunities of cyberspace. It is this diversity of stakeholders that contributes to the challenge cybersecurity poses for traditional governance.

But cybersecurity is only the first of many emerging areas that necessitate new and creative solutions allowing policymakers to work hand in hand with their counterparts across government, civil society, and industry. For cybersecurity, as well as the issues to come, cooperation is the underpinning of achieving these goals. However, cooperation does not arise organically; it must grow from an effectively structured governance system. Establishing a national cybersecurity agency will enable governments to do just that.


Developing an effective cyber strategy

February 7th, 2018 No comments

The word strategy has its origins in ancient Greece, where it was used to describe the leading of troops in battle. From a military perspective, strategy is a top-level plan designed to achieve one or more high-order goals. A clear strategy is especially important in times of uncertainty, as it provides a framework for those involved in executing the strategy to make the decisions needed for success.

In a corporate or government entity, the primary role of the Chief Information Security Officer (CISO) is to establish a clear cybersecurity strategy and oversee its execution. To establish an effective strategy, one must first understand, and ideally document, the following:

  • Resources. The most critical component of a successful strategy is the proper utilization of available resources. As such, a CISO must have a clear picture of their annual budget, including operating and capital expenditures. In addition, the CISO must understand not just the number of vendors and full-time employees under their span of control, but also the capabilities and weaknesses of these resources. The CISO must also have an appreciation for the capabilities of key resources that are essential to effective security but not necessarily under their direct supervision, such as server and desktop administrators, the team responsible for patching, etc. One of the most difficult aspects of the CISO job is that to be successful you must positively influence the actions of other teams whose jobs are critical to the success of the security program, and your career, but who are not under your direct control.
  • Business Drivers. At the end of the day, CISOs have a finite amount of resources to achieve goals and cannot apply the same level of protection to all digital assets. To help make resource allocation decisions, the CISO must clearly understand the business they have been charged with protecting. What is most important to the success of the business? Which lines of business produce the most revenue, and which digital assets are associated with those lines? For governments, which services are essential for residents’ health and for maintaining government operations, and which digital assets are associated with those services and functions?
  • Data. Data is the lifeblood of most companies and is often the target of cyber criminals, whether to steal or encrypt for ransom. Once business drivers have been identified, the CISO should inventory the data that is important to the lines of business. This should include documenting the format, volume, and locations of the data and the associated data steward. In large organizations, this can be extremely challenging, but it is essential to have a clear picture of the storage and processing of the entity's crown jewels.
  • Controls. Before formulating a strategy, the CISO must gain an understanding of the status of the safeguards or countermeasures that have been deployed within an environment to minimize the security risks posed to digital assets. These will include controls to minimize risks to the confidentiality, integrity, or availability of the assets. In determining the sufficiency of a control, assess its design and operating effectiveness. Does the control cover all assets or a subset? Is the control effective at reducing the risk to an acceptable level, or is the residual risk still high? For example, one control found to be effective in minimizing risk to the confidentiality of data is to require a second factor of authentication prior to granting access to sensitive records. If such a control is implemented, what percentage of users require a second authentication factor before accessing the company's most sensitive data? What is the likelihood that a user will acknowledge a second-factor prompt in error, as measured by a phishing test? (A simple coverage-measurement sketch follows this list.)
  • Threats. Identifying the threats to an organization is one of the more difficult tasks in developing a cyber strategy, as cyber threats tend to be asymmetric and constantly evolving. Still, it is important to identify the most likely threat actors and the motivations, tactics, techniques, and procedures used to achieve their goals.
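
As a deliberately simplified illustration of the coverage questions above, the sketch below computes what share of users with access to sensitive data actually require a second factor. The data structures are hypothetical stand-ins for an export from your identity provider, not a real directory API.

    # control_coverage.py - toy sketch for measuring coverage of an MFA control.
    # The user records are hypothetical; real data would come from a directory export.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        has_sensitive_access: bool  # can the user reach "crown jewel" data?
        mfa_enrolled: bool          # does sign-in require a second factor?

    def mfa_coverage(users) -> float:
        """Percentage of sensitive-data users protected by a second factor."""
        in_scope = [u for u in users if u.has_sensitive_access]
        if not in_scope:
            return 100.0
        covered = sum(1 for u in in_scope if u.mfa_enrolled)
        return 100.0 * covered / len(in_scope)

    users = [
        User("alice", True, True),
        User("bob", True, False),    # residual risk: sensitive access, no second factor
        User("carol", False, False),
    ]
    print(f"MFA coverage of sensitive access: {mfa_coverage(users):.0f}%")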

Once a CISO has a clear picture of the items discussed above, they can begin formulating a strategy appropriate to the task at hand. There is no one-size-fits-all approach, as each organization is unique, but there are models and frameworks that have proven helpful over time, including those developed by the National Institute of Standards and Technology, the Cyber Kill Chain, the Center for Internet Security, SANS, and the Australian Signals Directorate, among others. An effective strategy must also consider human and organizational dynamics. For example, employees will typically work around a control that increases the actual, or perceived, amount of effort to perform a given task, especially when they feel that the effort is not commensurate with the threat being addressed.

At Microsoft, we are continuously evaluating the current threats faced by our customers and building products and services to help CISOs execute their strategies. The design of our products not only accounts for the techniques utilized by cyber attackers, but also incorporates features that address the human dynamics within an enterprise and the staffing and retention challenges faced by security teams. A few examples of these design principles in practice include building security features and functions within our productivity tools such as Office 365 Advanced Threat Protection, using auto-classification to reduce the workload on end users with Azure Information Protection, and increasing the efficiency and effectiveness of security teams with Windows Defender Advanced Threat Protection.


Overview of Petya, a rapid cyberattack

February 5th, 2018 No comments

In the first blog post of this 3-part series, we introduced what rapid cyberattacks are and illustrated how they are different in terms of execution and outcome. Next, we will go into some more details on the Petya (aka NotPetya) attack.

How Petya worked

The Petya attack chain is well understood, although a few small mysteries remain. Here are the four steps in the Petya kill chain:

Figure 1: How the Petya attack worked

  1. Prepare – The Petya attack began with a compromise of the MEDoc application. As organizations updated the application, the Petya code was initiated.
  2. Enter – When MEDoc customers installed the software update, the Petya code ran on an enterprise host and began to propagate in the enterprise.
  3. Traverse – The malware used two means to traverse:

    • Exploitation – exploited a vulnerability in SMBv1 (MS17-010).
    • Credential theft – impersonated any currently logged-on accounts (including service accounts).
    • Note that Petya only compromised accounts that were logged on with an active session (e.g. credentials loaded into LSASS memory).

  4. Execute – Petya would then reboot and start the encryption process. While the screen text claimed to be ransomware, this attack was clearly intended to wipe data as there was no technical provision in the malware to generate individual keys and register them with a central service (standard ransomware procedures to enable recovery).

Unknowns and Unique Characteristics of Petya:

Although it is unclear if Petya was intended to have as widespread an impact as it ended up having, it is likely that this attack was built by an advanced group, considering the following:

  • The Petya attack wiped the event logs on the system, which was unnecessary given that the drive was wiped later anyway. This leaves an open question as to whether this was just standard anti-forensic practice (as is common for many advanced attack groups) or whether there were other attack actions/operations being covered up by Petya.
  • The supply chain approach taken by Petya requires a well-funded adversary with a high level of investment into attack skills/capability. Although supply chain attacks are rising, these still represent a small percentage of how attackers get into corporate environments and require a higher degree of sophistication to execute.

Petya and Traversal/Propagation

Our observation was that Petya spread more by using identity impersonation techniques than through MS17-010 vulnerability exploitation. This is likely because of the emergency patching initiatives organizations followed to deploy MS17-010 in response to the WannaCrypt attacks and associated publicity.

The Petya attacks also resurfaced a popular misconception about mitigating lateral traversal which comes up frequently in targeted data theft attacks. If a threat actor has acquired the credentials needed for lateral traversal, you can NOT block the attack by disabling execution methods like PowerShell or WMI. This is not a good choke point because legitimate remote management requires at least one process execution method to be enabled.

Figure 2: How the Petya attack spreads

You'll see in the illustration above that achieving traversal requires three technical phases:

1st phase: Targeting – identify which machines to attack/spread to next.

Petya's targeting mechanism was consistent with normal worm behavior. However, Petya did include a unique innovation: it acquired IPs to target from the DHCP subnet configuration on servers and domain controllers (DCs) to accelerate its spread.

2nd phase: Privilege acquisition – gain the privileges required to compromise those remote machines.

A unique aspect of Petya is that it used automated credential theft and re-use to spread, in addition to the vulnerability exploitation. As mentioned earlier, most of the propagation in the attacks we investigated was due to the impersonation technique. This resulted in impersonation of the SYSTEM context (computer account) as well as any other accounts that were logged in to those systems (including service accounts, administrators, and standard users).

3rd phase: Process execution – obtain the means to launch the malware on the compromised machine.

This phase is not an area we recommend focusing defenses on because:

  1. An attacker (or worm) with legitimate credentials (or impersonated session) can easily use another available process execution method.
  2. Remote management by IT operations requires at least one process execution method to be available.

Because of this, we strongly advise organizations to focus mitigation efforts on the privilege acquisition phase (2) for both rapid destruction and targeted data theft attacks, and not prioritize blocking at the process execution phase (3).

Figure 3: Most Petya propagations were due to impersonation (credential theft)

Because of the dual-channel approach to propagation, even an organization that had reached 97% of its endpoints with MS17-010 patching was infected enterprise-wide by Petya. This shows that mitigating just one vector is not enough.

The good news here is that any investment made into credential theft defenses (as well as patching and other defenses) will directly benefit your ability to stave off targeted data theft attacks because Petya simply re-used attack methods popularized in those attacks.

Attack and Recovery Experience: Learnings from Petya

Many impacted organizations were not prepared for this type of disaster in their disaster recovery plans. The key learnings from real-world cases of these attacks are:

Figure 4: Common learnings from rapid cyberattack recovery

Offline recovery required – Many organizations affected by Petya found that their backup applications and operating system (OS) deployment systems were taken out in the attack, significantly delaying their ability to recover business operations. In some cases, IT staff had to resort to printed documentation because the servers housing their recovery process documentation were also down.

Communications down – Many organizations also found themselves without standard corporate communications like email. In almost all cases, company communications with employees relied on alternate mechanisms like WhatsApp, copy/pasting broadcast text messages, mobile phones, personal email addresses, and Twitter.

In several cases, organizations had a fully functioning Office 365 instance (SaaS services were unaffected by this attack), but users couldn't access Office 365 services because authentication was federated to the on-premises Active Directory (AD), which was down.

More information

To learn more about rapid cyber attacks and how to protect against them, watch the on-demand webinar: Protect Against Rapid Cyberattacks (Petya, WannaCrypt, and similar).

Look out for the next and final blog post of a 3-part series to learn about Microsoft’s recommendations on mitigating rapid cyberattacks.


IGF proves the value of bottom-up, multi-stakeholder model in cyberspace policy-making

January 29th, 2018 No comments

In December, the Internet Governance Forum (IGF) brought the world together to talk about the internet. I naturally take a keen interest in cybersecurity, but it was just one of many important topics discussed, which ranged from diversity in the technology sector through to philosophy in the digital age. Cybersecurity was, nonetheless, a major theme. My colleagues and I found an agenda packed with varied sessions that sought to tackle anything from effective cooperation between CERTs and the difficulties in developing an international framework for cybersecurity norms (and other issues the Digital Geneva Convention touches on), to the very real cross-border legal challenges in cloud forensics.

The real strength of the IGF is not just its breadth of topics, but also the way in which it deliberately fosters multi-stakeholder discussions. Delegates have equal voices, whether they are civil society groups, governments, or businesses. And while there were differences in opinion and perspectives, all are heard and as such contribute to richer conversations, and ultimately more valuable outcomes.

Certainly, the expectation is not that there would be immediate policy outcomes from the IGF. Ideas need time to grow and evolve. Still, the exchange of ideas can and does contribute to decision-making for Microsoft, and hopefully for the other participants attending. I found it particularly valuable to hear the voices and opinions of civil society, whether that meant hearing the perspective of humanitarian actors or understanding the challenges of cybersecurity policy making in emerging markets.

Microsoft believes that this wider discussion among stakeholders leads to a deeper understanding of the complex challenges posed by cyberspace. That's why we took the opportunity of this year's IGF to organize a series of discussions, both smaller and individualized as well as larger, around the different aspects of our proposal for a Digital Geneva Convention. The discussions investigated what the industry tech accord could involve and what civil society would like us to do as an industry, but they also looked at the feasibility of creating a convention that would protect civilians and civilian infrastructure in cyberspace from harm by states, and at what the path down that decade-long road would be. We will be taking these insights and ideas back with us and incorporating them into our plans for 2018.

The Digital Geneva Convention was, however, far from the only cybersecurity-focused topic we engaged in. There were sessions that looked at increasing CERT capacities, encryption, and the exchange of cybersecurity best practices within the IGF, as well as those that sought to outline the future of global cybersecurity capacity building, which we believe is essential to the world's collective ability to respond to cyberattacks and is needed both for individual countries and at the level of regional groupings such as ASEAN and the OAS. We also previewed research that we plan to publish shortly on the latest global cybersecurity policy and legislative trends; it analyzes data from over 100 countries and highlights increased activity across critical infrastructure policies, continued militarization of cyberspace, expansion of law enforcement powers, cybercrime legislation, and cybersecurity skills. Overall, my colleagues across Microsoft contributed to over 20 different sessions and panels, including on affordable access to the internet, where we were able to outline elements of our Airband Initiative; digital civility, where we presented the results of our latest study (to be released publicly shortly); the future of work and artificial intelligence; and others.

Multi-stakeholder fora like the IGF are essential to preserving an open, global, safe, secure, resilient, and interconnected Internet. What the world needs is more such broad-based, holistic policy discussions. When it comes to building policy in cyberspace, policy-makers must acknowledge the interdependence of economic, socio-cultural, technological, and governance factors. That means they should actively foster more multi-stakeholder policy development fora, learning from the IGF. For the technology sector and civil society groups, our task must be to continue to push for inclusive, open, transparent, bottom-up policy-making, and to make the most of the opportunities that do exist.


Understanding the performance impact of Spectre and Meltdown mitigations on Windows Systems

January 9th, 2018 No comments

Last week the technology industry and many of our customers learned of new vulnerabilities in the hardware chips that power phones, PCs, and servers. We (and others in the industry) had learned of this vulnerability under nondisclosure agreement several months ago and immediately began developing engineering mitigations and updating our cloud infrastructure. In this blog, I'll describe the discovered vulnerabilities as clearly as I can, discuss what customers can do to help keep themselves safe, and share what we've learned so far about performance impacts.

What Are the New Vulnerabilities?

On Wednesday, Jan. 3, security researchers publicly detailed three potential vulnerabilities, named Meltdown and Spectre. Several blogs have tried to explain these vulnerabilities further; a clear description can be found via Stratechery.

On a phone or a PC, this means malicious software could exploit the silicon vulnerability to access information in one software program from another. These attacks extend into browsers where malicious JavaScript deployed through a webpage or advertisement could access information (such as a legal document or financial information) across the system in another running software program or browser tab. In an environment where multiple servers are sharing capabilities (such as exists in some cloud services configurations), these vulnerabilities could mean it is possible for someone to access information in one virtual machine from another.

What Steps Should I Take to Help Protect My System?

Currently three exploits have been demonstrated as technically possible. In partnership with our silicon partners, we have mitigated those through changes to Windows and silicon microcode.

The exploited vulnerabilities, the corresponding Windows changes, and whether a silicon microcode update is also required on the host are:

  • Spectre, CVE-2017-5753 (Variant 1, Bounds Check Bypass) – Windows changes: compiler change, with recompiled binaries now part of Windows Updates; Edge and IE11 hardened to prevent exploit from JavaScript. Silicon microcode update also required on host: No.
  • Spectre, CVE-2017-5715 (Variant 2, Branch Target Injection) – Windows changes: calling new CPU instructions to eliminate branch speculation in risky situations. Silicon microcode update also required on host: Yes.
  • Meltdown, CVE-2017-5754 (Variant 3, Rogue Data Cache Load) – Windows changes: isolate kernel and user mode page tables. Silicon microcode update also required on host: No.

 

Because Windows clients interact with untrusted code in many ways, including browsing webpages with advertisements and downloading apps, our recommendation is to protect all systems with Windows Updates and silicon microcode updates.

For Windows Server, administrators should ensure they have mitigations in place at the physical server level to ensure they can isolate virtualized workloads running on the server. For on-premises servers, this can be done by applying the appropriate microcode update to the physical server and, if you are running Hyper-V, updating it using our recent Windows Update release. If you are running on Azure, you do not need to take any steps to achieve virtualized isolation, as we have already applied infrastructure updates to all servers in Azure that ensure your workloads are isolated from other customers running in our cloud. This means that other customers running on Azure cannot attack your VMs or applications using these vulnerabilities.

Windows Server customers, running either on-premises or in the cloud, also need to evaluate whether to apply additional security mitigations within each of their Windows Server VM guest or physical instances. These mitigations are needed when you are running untrusted code within your Windows Server instances (for example, you allow one of your customers to upload a binary or code snippet that you then run within your Windows Server instance) and you want to isolate the application binary or code to ensure it can't access memory within the Windows Server instance that it should not have access to. You do not need to apply these mitigations to isolate your Windows Server VMs from other VMs on a virtualized server, as they are instead only needed to isolate untrusted code running within a specific Windows Server instance.

We currently support 45 editions of Windows. Patches for 41 of them are available now through Windows Update. We expect the remaining editions to be patched soon. We are maintaining a table of editions and update schedule in our Windows customer guidance article.

Silicon microcode is distributed by the silicon vendor to the system OEM, which then decides to release it to customers. Some system OEMs use Windows Update to distribute such microcode, others use their own update systems. We are maintaining a table of system microcode update information here. Surface will be updated through Windows Update starting today.

 

Guidance on how to check and enable or disable these mitigations can be found here:
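
Separate from that official guidance (which points to tooling such as Microsoft's SpeculationControl PowerShell module), the rough sketch below shows one way an administrator might read the Session Manager registry values that Microsoft's Windows Server guidance uses to toggle these mitigations. Treat it as an assumption-laden illustration only: interpreting the specific bit values, and whether they apply at all, depends on the KB article for your particular build.

    # check_speculation_overrides.py - rough sketch: read the registry values that
    # Windows Server guidance documents for enabling/disabling the Spectre/Meltdown
    # OS mitigations. Absence of a value usually means the build's defaults apply.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

    def read_override(name: str):
        """Return the DWORD value, or None if it has not been set."""
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            try:
                value, _ = winreg.QueryValueEx(key, name)
                return value
            except FileNotFoundError:
                return None

    if __name__ == "__main__":
        for name in ("FeatureSettingsOverride", "FeatureSettingsOverrideMask"):
            print(f"{name}: {read_override(name)}")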

Performance

One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. We're performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.

Here is the summary of what we have found so far:

  • With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake or newer CPU), benchmarks show single-digit slowdowns, but we don't expect most users to notice a change because these percentages are reflected in milliseconds.
  • With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
  • With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
  • Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.

For context, on newer CPUs such as Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 perform more user-kernel transitions, owing to legacy design decisions such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.

Conclusion

As you can tell, there is a lot to this topic of side-channel attack methods. A new exploit like this requires our entire industry to work together to find the best possible solutions for our customers. The security of the systems our customers depend upon and enjoy is a top priority for us. We're also committed to being as transparent and factual as possible to help our customers make the best possible decisions for their devices and the systems that run organizations around the world. That's why we've chosen to provide more context and information today, and why we released updates and remediations as quickly as we could on Jan. 3. Our commitment to delivering the technology you depend upon, and to optimizing performance where we can, continues around the clock, and we will continue to communicate as we learn more.

-Terry


Application fuzzing in the era of Machine Learning and AI

January 3rd, 2018 No comments

Proactively testing software for bugs is not new; the earliest examples of random testing date back to the 1950s. Fuzzing, as we now refer to it, is the injection of random inputs and commands into applications, and it made its debut quite literally on a dark and stormy night in 1988. Since then, application fuzzing has become a staple of the secure software development lifecycle (SDLC), and according to Gartner*, security testing is growing faster than any other security market, as AST solutions adapt to new development methodologies and increased application complexity.
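
To make the basic idea concrete, here is a minimal, self-contained sketch of "dumb" fuzzing in Python. The target function is a made-up stand-in for a real parser, and no production fuzzer (Microsoft's included) works this simply.

    # tiny_fuzzer.py - minimal illustration of random ("dumb") fuzzing: throw random
    # byte strings at a target and record any input that raises an unhandled exception.
    import random

    def target_parser(data: bytes) -> None:
        """Hypothetical target: decode the input and parse a length-prefixed record."""
        text = data.decode("utf-8")           # fails on invalid UTF-8
        length = int(text.split(":", 1)[0])   # fails on a non-numeric prefix
        assert len(text) >= length            # fails when the length field lies

    def fuzz(iterations: int = 10_000, max_len: int = 64):
        crashes = []
        for _ in range(iterations):
            data = bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))
            try:
                target_parser(data)
            except Exception as exc:          # any unhandled exception is a finding
                crashes.append((data, repr(exc)))
        return crashes

    if __name__ == "__main__":
        findings = fuzz()
        print(f"{len(findings)} crashing inputs found")
        for data, err in findings[:5]:
            print(err, data)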

We believe there is good reason for this. The overall security risk profile of applications has grown in lockstep with accelerated software development and application complexity. Hackers are also aware of the increased vulnerabilities and, as the recent Equifax breach highlights, the application layer is highly targeted. Despite this, the security and development groups within organizations cannot find easy alignment to implement application fuzzing.

While DevOps is transforming the speed at which applications are created, tested, and integrated with IT, that same efficiency hampers the ability to mitigate identified security risks and vulnerabilities, without impacting business priorities. This is exactly the promise that machine learning, artificial intelligence (AI), and the use of deep neural networks (DNN) are expected to deliver on in evolved software vulnerability testing.

Most customers I talk to see AI as a natural next step given that most software testing for bugs and vulnerabilities is either manual or prone to false positives. With practically every security product claiming to be machine learning and AI-enabled, it can be hard to understand which offerings can deliver real value over current approaches.

Adoption of the latest techniques for application security testing doesn't mean CISOs must become experts in machine learning. Companies like Microsoft are using the on-demand storage and computing power of the cloud, combined with experience in software development and data science, to build security vulnerability mitigation tools that embed this expertise in existing systems for developing, testing, and releasing code. It is important, however, to understand your existing environment, application inventory, and testing methodologies to capture tangible savings in cost and time. For many organizations, application testing relies on tools that use business logic and common coding techniques. These are notoriously error-prone and devoid of security expertise. For this latter reason, some firms turn to penetration testing experts and professional services. This can be a costly, manual approach to mitigation that lengthens software shipping cycles.

Use cases

Modern application security testing that is continuous and integrated with DevOps and SecOps can be transformative for business agility and security risk management. Consider these key use cases and whether your organization has embedded application security testing for each:

  • Digital Transformation – moving applications to the cloud creates the need to re-establish security controls and monitoring. Fuzzing can uncover errors and missed opportunities to shore up defenses. Automated and integrated fuzzing can further preserve expedited software shipping cycles and business agility.
  • Securing the Supply Chain – Open Source Software (OSS) and 3rd-party applications are a common vector of attack, as we saw with Petya, so a testing regimen is a core part of a plan to manage 3rd-party risk.
  • Risk Detection – whether building, maintaining, or refactoring applications on premises, the process and risk profile have become highly dynamic. Organizations need to be proactive to uncover bugs, holes, and configuration errors on a continuous basis to meet both internal and regulatory risk management mandates.

Platform leverage

Of course, software development and testing are about more than just tools. The process to communicate risks to all stakeholders, and to act on them, is where the real benefit materializes. A barrier to effective application security testing is the highly siloed way that testing and remediation are conducted. Development waits for IT and security professionals to implement the changes, slowing deployment and time to market. Legacy application security testing is ready for disruption, and the built-in approach can deliver long-awaited efficiency in the development and deployment pipeline. Digital transformation, supply chain security, and risk detection all benefit from speed and agility. Let's consider the DevOps and SecOps workflows possible on a Microsoft-based application security testing framework:

  • DevOps Continuous fuzzing built into the DevOps pipeline identifies bugs and feeds them to the continuous integration and deployment environment (i.e. Visual Studio Team Services and Team Foundation Server). Developers and stakeholders are uniformly advised of risky code and provided the option of running additional Azure-based fuzzing techniques. For apps in production that are found to be running risky code, IT pros can mitigate risks by using PowerShell and Group Policy (GPO) to enable the features of Windows Defender Exploit Guard. While the apps continue to run, the attack surface can be reduced, and connection scenarios which increase risk are blocked. This gives teams time to develop and implement mitigations without having to take the applications entirely offline.
  • SecOps – Azure-hosted containers and VMs, as well as on-premises machines, are scanned for risky applications and code, including OSS. The results inform Microsoft's various desktop, mobile, and server threat protection regimes, including application whitelisting. Endpoints can be scanned for the presence of the risky code, and administrators are informed through Azure Security Center. Mitigations can also be deployed to block the implicated applications and enforce conditional access through Azure Active Directory.

Cloud and AI

Machine learning and artificial intelligence are not new, but the relatively recent availability of graphics processing units (GPUs) has brought their potential to the mainstream by enabling faster (parallel) processing of large amounts of data. Our recently announced Microsoft Security Risk Detection (MSRD) service is a showcase of the power of the cloud and AI to evolve fuzz testing. In fact, Microsoft's award-winning work in a specialized area of AI called constraint solving has been 10 years in the making and was used to produce the world's first white-box fuzzer.

A key to effective application security testing is the inputs, or seeds, used to establish code paths and bring about crashes and bug discovery. These inputs can be static and predetermined or, in the case of MSRD, dynamic and mutated by training algorithms to generate relevant variations based on previous runs. While AI and constraint solving are used to tune the reasoning for finding bugs, Azure Resource Manager dynamically scales the required compute up or down, creating a fuzzing lab that is right-sized for the customer's requirements. The Azure-based approach also gives customers choices in running multiple fuzzers, in addition to Microsoft's own, so the customer gets value from several different methods of fuzzing.
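
As a rough illustration of the seed-and-mutate idea (and only that; MSRD's white-box, constraint-solving approach is far more sophisticated), the sketch below mutates a small seed corpus and keeps any mutant that produces behavior not seen before, using the observed exception type as a crude stand-in for real coverage feedback.

    # mutation_fuzz.py - toy sketch of mutation-based (seed-driven) fuzzing.
    # The target function and the "coverage" signal are simplified stand-ins.
    import random

    def mutate(seed: bytes) -> bytes:
        """Flip a bit, insert a byte, or delete a byte at a random position."""
        data = bytearray(seed) or bytearray(b"0")
        pos = random.randrange(len(data))
        choice = random.choice(("flip", "insert", "delete"))
        if choice == "flip":
            data[pos] ^= 1 << random.randrange(8)
        elif choice == "insert":
            data.insert(pos, random.randrange(256))
        elif len(data) > 1:
            del data[pos]
        return bytes(data)

    def target(data: bytes) -> None:
        """Hypothetical function under test."""
        int(data.decode("utf-8"))  # raises on invalid UTF-8 or non-numeric text

    def fuzz(seeds, rounds: int = 5_000):
        corpus, seen, crashes = list(seeds), set(), []
        for _ in range(rounds):
            candidate = mutate(random.choice(corpus))
            try:
                target(candidate)
                outcome = "ok"
            except Exception as exc:
                outcome = type(exc).__name__
                crashes.append((candidate, outcome))
            if outcome not in seen:       # crude stand-in for coverage feedback
                seen.add(outcome)
                corpus.append(candidate)
        return corpus, crashes

    if __name__ == "__main__":
        corpus, crashes = fuzz([b"12345", b"-1", b"0"])
        print(f"corpus size: {len(corpus)}, crashing inputs: {len(crashes)}")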

The future

For Microsoft, application security testing is fundamental to a secure digital transformation. MSRD for Windows and Linux workloads is yet another example of our commitment to building security into every aspect of our platform. While our AI-based application fuzzing is unique, Microsoft Research is already upping the ante with a new project for neural fuzzing. Deep neural networks are an instantiation of machine learning that models the human brain. Their application can improve how MSRD identifies fuzzing locations and the strategies and parameters used. Integration with our security offerings is in the initial phases, and by folding in more capabilities over time we remove the walls between IT, developers, and security, making near real-time risk mitigation a reality. This is the kind of disruption that, as a platform company, Microsoft uniquely brings to application security testing for our customers, and it serves as further testament to the power of built-in.


* Gartner, Magic Quadrant for Application Security Testing, published 28 February 2017, ID: G00290926


How public-private partnerships can combat cyber adversaries

December 13th, 2017 No comments

For several years now, policymakers and practitioners from governments, CERTs, and the security industry have been speaking about the importance of public-private partnerships as an essential part of combating cyber threats. It is impossible to attend a security conference without a keynote presenter talking about it. In fact, these conferences increasingly include sessions or entire tracks dedicated to the topic. During the three conferences I've attended since June (two US Department of Defense symposia and NATO's annual Information Symposium in Belgium), the message has been consistent: public-private information-sharing is crucial to combat cyber adversaries and protect users and systems.

Unfortunately, we stink at it. Information-sharing is the Charlie Brown football of cyber: we keep running toward it only to fall flat on our backs as attackers continually pursue us. Just wait 'til next year. It's become easier to talk about the need to improve information-sharing than to actually make it work, and it's now the technology industry's convenient crutch. Why? Because no one owns it, so no one is accountable. I suspect we each have our own definition of what information-sharing means, and of what success looks like. Without a sharp vision, can we really expect it to happen?

So, what can be done?

First, some good news: the security industry wants to do this–to partner with governments and CERTs. So, when we talk about it at conferences, or when a humble security advisor in Redmond blogs about it, it's because we are committed to finding a solution. Microsoft recently hosted BlueHat, where hundreds of malware hunters, threat analysts, reverse engineers, and product developers from the industry put aside competitive priorities to exchange ideas and build partnerships. In my ten years with Microsoft, I've directly participated in and led information-sharing initiatives that we established for the very purpose of advancing information assurance and protecting cyberspace. In fact, in 2013, Microsoft created a single legal and programmatic framework to address this issue, the Government Security Program.

For the partnership to work, it is important to understand and anticipate the requirements and needs of government agencies. For example, we need to consider cyber threat information, YARA rules, attacker campaign details, IP addresses, hosts, network traffic, and the like.

What can governments and CERTs do to better partner with industry?

  • Be flexible, especially on the terms. Communicate. Prioritize. In my experience, the mean-time-to-signature for a government to negotiate an info-sharing agreement with Microsoft is between six months and THREE YEARS.
  • Prioritize information sharing. If this is already a priority, close the gap. I fear government attorneys are not sufficiently aware of how important the agreements are to their constituents. The information-sharing agreements may well be non-traditional agreements, but if information-sharing is truly a priority, let's standardize and expedite the agreements. Start by reading the 6 Nov Department of Homeland Security OIG report, DHS Can Improve Cyber Threat Information-Sharing.
  • Develop and share with industry partners a plan to show how government agencies will consume and use our data. Let industry help government and CERTs improve our collective ROI. Before asking for data, let's ensure it will be impactful.
  • Develop KPIs to measure whether an information-sharing initiative is making a difference, quantitative or qualitative. In industry, we could do a better job at this, as we generally assume that we're providing information for the right reason. However, I frequently question whether our efforts make a real difference. Whether we look for mean-time-to-detection improvements or other metrics, this is an area for improvement (a small sketch of one such metric follows this list).
  • Commit to feedback. Public-private information-sharing implies two-way communication. Understand that more companies are making feedback a criterion to justify continuing investment in these not-for-profit engagements. Feedback helps us justify up the chain the efficacy of efforts that we know are important. It also improves two-way trust and contributes to a virtuous cycle of more and closer information-sharing. At Microsoft, we require structured feedback as the price of entry for a few of our programs.
  • Balance interests in understanding today's and tomorrow's threats with an equal commitment to lock down what is currently owned. (My favorite.) Information-sharing usually includes going after threat actors and understanding what's coming next. That's important, but in an assume-compromise environment, we need to continue to hammer on the basics:

    • Patch. If an integrator or on-site provider indicates patching and upgrading will break an application, and if that is used as an excuse not to patch, that is a problem. Authoritative third parties such as US-CERT, SANS, and others recommend a 48- to 72-hour patch cycle. Review www.microsoft.com/secure to learn more.

      • Review www.microsoft.com/sdl to learn more about tackling this issue even earlier in the IT development cycle, and how to have important conversations with contractors, subcontractors, and ISVs in the software and services supply chain.

    • Reduce administrative privilege. This is especially important for contractor or vendor accounts. Up to 90 percent of breaches come from credential compromise. This is largely caused by a lack of, or obsolete, administrative, physical, and technical controls over sensitive assets. Basic information-sharing demands that we focus on this. Here is guidance regarding securing access.
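
As promised above, here is a toy sketch of one possible KPI, mean time to detect, computed from hypothetical incident records; the timestamps are made up and the metric is just one of many you might agree on with partners.

    # mttd.py - toy sketch: compute mean time to detect (MTTD) from incident records,
    # one candidate KPI for judging whether shared intelligence speeds up detection.
    from datetime import datetime, timedelta

    incidents = [
        # (time attacker activity began, time the defender detected it) - hypothetical
        (datetime(2017, 10, 2, 8, 0),   datetime(2017, 10, 3, 14, 30)),
        (datetime(2017, 10, 9, 22, 15), datetime(2017, 10, 10, 1, 5)),
        (datetime(2017, 11, 1, 6, 45),  datetime(2017, 11, 1, 9, 0)),
    ]

    def mean_time_to_detect(records) -> timedelta:
        deltas = [detected - started for started, detected in records]
        return sum(deltas, timedelta()) / len(deltas)

    print("MTTD:", mean_time_to_detect(incidents))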

Ultimately, we in the industry can better serve governments and CERTs by incentivizing migrations to newer platforms, which offer more built-in security and are more securely developed. As we think about improving information-sharing, let's be clear that this includes not only sharing technical details about threats and actors but also guidance on making governments fundamentally more secure on newer and more secure technologies.

 


A decade inside Microsoft Security

November 9th, 2017 No comments

Ten years ago, I walked onto Microsoft's Redmond campus to take a role on a team that partnered with governments and CERTs on cybersecurity. I'd just left a meaningful career in US federal government service because I thought it would be fascinating to experience first-hand the security challenges and innovation from the perspective of the IT industry, especially within Microsoft, given its presence around the US federal government. I fully expected to spend a year or two in Microsoft and then resume my federal career with useful IT industry perspectives on security. Two days after I started, Popular Science's annual "Ten worst jobs in science" survey came out, and I was surprised to see "Microsoft Security Grunt" in sixth place. Though the article was tongue-in-cheek, saluting those who take on tough challenges, the fact that we made this ignominious list certainly made me wonder if I'd made a huge mistake.

I spent much of my first few years hearing from government and enterprise executives that Microsoft was part of the security problem. Working with so many hard-working engineers, researchers, security architects, threat hunters, and developers trying to tackle these increasingly complex challenges, I disagreed. But we all recognized that we needed to do more to defend the ecosystem, and to better articulate our efforts. We'd been investing in security well before 2007, notably with the Trustworthy Computing Initiative and Security Development Lifecycle, and we continue to invest heavily in technologies and people – we now employ over 3,500 people in security across the company. I rarely hear anymore that we are perceived as a security liability, but our work isn't done. Ten years later, I'm still here, busier than ever, delaying my long-expected return to federal service, helping enterprise CISOs secure their environments, their users, and their data.

Complexity vs. security

Is it possible, however, that our industry's investments in security have created another problem – that of complexity? Have we innovated our way into a more challenging situation? My fellow security advisors at Microsoft have shared customer frustrations over the growing security vendor presence in their environments. While these different technologies may solve specific requirements, in doing so, they create a management headache. Twice this week in Redmond, CISOs from large manufacturers challenged me to help them better understand security capabilities they already owned from Microsoft but weren't aware of. They sought to use this discovery process to identify opportunities to rationalize their security vendor presence. As one CISO said, "Just help me simplify all of this."

There is a large ecosystem of very capable and innovative professionals delivering solutions into a vibrant and crowded security marketplace. With all of this IP, how can we best help CISOs use important innovation while reducing complexity in their environments? And, can we help them maximize value from their investments without sacrificing security and performance?

Best-of-suite capabilities

Large enterprises may employ up to 100 vendors' technologies to handle different security functions. Different vendors may handle identity and access management, data loss prevention, key management, service management, cloud application security, and so on. Many companies are now turning to machine learning and user behavior technologies. Many claim "best of breed" or "best in class" capabilities, and there is impressive innovation in the marketplace. Recognizing this, we have made acquisition a part of Microsoft's security strategy – since 2013 we've acquired companies like Aorato, Secure Islands, Adallom, and most recently Hexadite.

Microsoft's experience as a large global enterprise is similar to our enterprise customers'. We've been working to rationalize the 100+ different security providers in our infrastructure to help us better manage our external dependencies and more efficiently manage budgets. We've been moving toward a default policy of "Microsoft first" security technology where possible in our environment. Doing so helps us standardize on newer and familiar technologies that complement each other.

That said, whether we build or buy, our focus is to deliver an overall best-in-suite approach to help customers deploy, maintain, monitor, and protect our enterprise products and services as securely as possible. We are investing heavily in the Intelligent Security Graph. It leverages our vast security intelligence, connects and correlates information, and uses advanced analytics to help detect and respond to threats faster. If you are already working with Microsoft to advance your productivity and collaboration needs by deploying Windows 10, Office 365, Azure, or other core enterprise services, you should make better use of these investments and reduce dependency on third-party solutions by taking advantage of the built-in monitoring and detection capabilities in these solutions. A best-of-suite approach also lowers the costs and complexity of administering a security program, e.g. making vendor assessments and procurement easier, reducing training and learning curves, and standardizing on common dashboards.

Reducing complexity also requires that we make our security technologies easy to acquire and use. Here are some interesting examples of how our various offerings connect to each other and have built-in capabilities:

  • The Windows Defender Advanced Threat Protection (ATP) offering seamlessly integrates with O365 ATP to provide more visibility into adversary activity against devices and mailboxes, and to give your security teams more control over these resources. Watch this great video to learn more about the services' integration. Windows Defender ATP monitors behaviors on a device and sends alerts on suspicious activities. The console provides your security team with the ability to perform one-click actions such as isolating a machine, collecting a forensics package, and stopping and quarantining files. You can then track the kill chain into your O365 environment if a suspicious file on the device arrived via email. Once in O365 ATP, you can quarantine the email, detonate a potentially malicious payload, block the traffic from your environment, and identify other users who may have been targeted.
  • Azure Information Protection provides built-in capabilities to classify and label data, apply rights-management protections (that follow the data object), and give data owners and admins visibility into, and control over, where that data goes and whether recipients attempt to violate policy.

Thousands of companies around the world are innovating, competing, and partnering to defeat adversaries and to secure the computing ecosystem. No single company can do it all. But by making it as convenient as possible for you to acquire and deploy technologies that integrate, communicate and complement each other, we believe we can offer a best-of-suite benefit to help secure users, devices, apps, data, and infrastructure. Visit https://www.microsoft.com/secure to learn about our solutions and reach out to your local Microsoft representative to learn more about compelling security technologies that you may already own. For additional information, and to stay on top of our investments in security, bookmark this Microsoft Secure blog.


Mark McIntyre, CISSP, is an Executive Security Advisor (ESA) in the Microsoft Enterprise and Cybersecurity Group. Mark works with global public sector and commercial enterprises, helping them transform their businesses while protecting data and assets by moving securely to the cloud. As an ESA, Mark supports CISOs and their teams with cybersecurity reviews and planning. He also helps them understand Microsoft's perspectives on the evolving cyber threat landscape and how Microsoft defends its enterprise, employees, and users around the world.

Categories: Uncategorized Tags:

SSN for authentication is all wrong

October 23rd, 2017 No comments

Unless you were stranded on a deserted island or participating in a zen digital fast, chances are you've heard plenty about the massive Equifax breach and the head-rolling fallout. In the flurry of headlines and advice about credit freezes, an important part of the conversation was lost: if we didn't misuse our social security numbers, losing them wouldn't be a big deal. Let me explain: most people, and that includes some pretty high-up identity experts I've met in my travels, don't understand the difference between identification and verification. In the real world, conflating those two doesn't often have dire consequences. In the digital world, it's a huge mistake that can lead to severe impacts.

Isn't it all just authentication, you may ask? Well, yes, identification and verification are both parts of the authentication whole, but failure to understand the differences is where the mess comes in. One reason it's so hard for many of us to separate identification and verification is that historically we haven't had to. Think back to how humans authenticated to each other before the ability to travel long distances came into the picture. Our circle of acquaintances was pretty small, and we knew each other by sight and sound. Just by looking at your neighbor, Bob, you could authenticate him. If you met a stranger, chances are someone else in the village knew the stranger and could vouch for her.

The ability to travel long distances changed the equation a bit. We developed documents that provided verification during the initiation phase, for example when you have to bring a birth certificate to the DMV to get your initial driver's license, and ongoing identification, like a unique ID number and a photo. These documents served as a single identification-and-verification mechanism. And that was great! It worked fine for years, until the digital age.

The digital age changed the model because rather than one person holding a single license with their photo on it, we had billions of people trying to authenticate to billions of systems with simple credentials like a username and password. And no friendly local villager to vouch for us.

Who are you? Prove it!

This is where the difference between the two really starts to matter. Identification answers the question: Who are you? Your name is an identifier. It could also be an alias, such as your unique employee ID number.

Do you want your name to be private? Imagine meeting another parent at your kid's soccer game and refusing to tell them your name for security reasons. How about: "Oh, your new puppy is so adorable, what's her name?" And you respond, "If I told you, I'd have to kill you." Or you try to find an address in a town with no street signs because the town is super security conscious. Ridiculous, right? Identifiers are public specifically so we can share them to help identify things.

We also want consistency in our identifiers. Imagine if that town had street signs but changed the names of the streets every 24 hours for security reasons. And uniqueness: if every street had the same name, you'd still have a heck of a time finding the right address, wouldn't you?

Now that we're clear on what an identifier is, we can enumerate a few aspects that make up a really good one:

  • Public
  • Unchanging
  • Unique

In a town or on a public road, we have a level of trust that the street sign is correct because the local authorities have governance over road signs. Back in our village, we trust Bob is Bob because we can verify him ourselves. But in the digital world, things get pretty tricky: how do you verify someone or something you've never met before? Ask them to prove it!

We use these two aspects of authentication almost daily when we log into systems with a user ID (identification) and password (verification). How we verify in the real world can be public, unchanging, and unique because it's very hard to forge a whole person, or to switch all the street signs in a town. But verification online is trickier. We need to be able to provide verification of who we are to a number of entities, many of whom aren't great at protecting data. And if the same verification is re-used across entities and one loses it, attackers could gain access to every site where it was used. This is why experts strongly recommend using unique passwords for every website and app. The same goes for those challenge questions, which can lead to some fun calls with customer service: "Oh, the town where I was born? It's: xja*21njaJK)`jjAQ^." At this point, our father's middle name, first pet's name, town where we were born, school we went to, and address history should all be assumed public; using them as secrets for verification doesn't make sense anymore.
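To make the identification/verification split concrete, here is a minimal Python sketch using only the standard library. The function and variable names are mine for illustration, not from any particular product: the user ID is a public, stable identifier used only for lookup, while the verifier is a private, per-site random secret stored only as a salted hash.

```python
# Minimal sketch of identification vs. verification.
# The names and storage layout are illustrative assumptions, not a real system.
import hashlib
import hmac
import secrets

users = {}  # user_id (public identifier) -> {"salt": bytes, "hash": bytes}

def enroll(user_id: str) -> str:
    """Create a unique, private, easily replaceable verifier for this site only."""
    verifier = secrets.token_urlsafe(32)   # random secret; never reused on another site
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", verifier.encode(), salt, 200_000)
    users[user_id] = {"salt": salt, "hash": digest}
    return verifier                        # shown to the user once, kept in a password manager

def authenticate(user_id: str, verifier: str) -> bool:
    """Identification: look up the public user_id. Verification: prove the private secret."""
    record = users.get(user_id)
    if record is None:
        return False
    digest = hashlib.pbkdf2_hmac("sha256", verifier.encode(), record["salt"], 200_000)
    return hmac.compare_digest(digest, record["hash"])

secret = enroll("alice@example.com")               # the email address is public and unchanging
print(authenticate("alice@example.com", secret))   # True: correct verifier
print(authenticate("alice@example.com", "wrong"))  # False: identification alone isn't enough
```

If the secret is ever exposed, you simply enroll again and get a new one, which is exactly what you cannot do with an SSN.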

If one site loses your digital verification info, no worries. You only used it for that site and can create new info for the next one. But what if you couldn't ever change your password? What if it was permanent, got lost in the Yahoo! breach, and was the one you use at your bank, for your college and car loans, and for your health insurance? How would you feel?

So, with that in mind, youd probably agree that the best digital verifiers are:

  • Private
  • Easily changed
  • Unique

Your turn

OK, now that you know the difference between identification and verification, and the challenges of verification in a digital world, what do you think: is your SSN a better identifier or a verifier?

Categories: cybersecurity, Data Privacy, Tips & Talk Tags: