Archive for the ‘Security Development’ Category

Success in security: reining in entropy

May 20th, 2020

Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities it carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. And where incident response and recovery are concerned, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of the cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As a responder and investigator, we want to discover what has happened, what the cause was, what the damage is, and determine how best we can fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies; things that stand out from the normal background noise of an operating network. In essence, let’s identify what’s truly unique in the system, and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data was discovered on the dark web, dated from two weeks ago. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically, we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment: unique users, system interaction times, protocols, binary files, data files, services, configurations, and rogue registry keys.

Our investigation reveals a unique service running on a member server with SQL Server. In fact, analysis shows the service has an autostart entry in the registry and, every time the system is rebooted, starts from a file in the c:\windows\perflogs directory, an unusual location for an autostart. We haven’t seen this service before, so we search all the systems on the network for other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three, one of which is a Domain Controller.
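To make that hunt concrete, here is a minimal sketch of how a responder might sweep a single Windows host for autostart entries launching from unusual locations such as c:\windows\perflogs. It uses only Python’s standard winreg module; the suspect directory list is illustrative, and in practice you would run a check like this at scale through your EDR or remote-execution tooling rather than by hand.

    # Sketch: sweep one Windows host for autostart (Run/RunOnce) entries that
    # launch from unusual directories. SUSPECT_DIRS is illustrative, not complete.
    import winreg

    SUSPECT_DIRS = (r"c:\windows\perflogs", r"c:\users\public")

    RUN_KEYS = [
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
    ]

    def suspicious_autostarts():
        hits = []
        for hive, path in RUN_KEYS:
            try:
                key = winreg.OpenKey(hive, path)
            except OSError:
                continue  # key absent on this host
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                if any(d in str(value).lower() for d in SUSPECT_DIRS):
                    hits.append((path, name, value))
                index += 1
        return hits

    for key_path, name, value in suspicious_autostarts():
        print(f"SUSPECT autostart: {key_path}\\{name} -> {value}")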

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us potentially identify the source of the attack, what data may have been pilfered, and the foreign Internet systems calling the shots and enabling access to the environment. Additionally, any recovery effort will require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks. In many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe of the investigation and dramatically increasing the cost of returning to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
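As a sketch of how such policy baselines can be turned into detections, the Python below checks a stream of events against two of the rules just described. The event schema and field names are invented for illustration; a real deployment would express these rules in your SIEM’s query language over actual log sources.

    # Sketch: check events against a standardized policy baseline.
    # The event schema and field names are invented for illustration.
    POLICY = {
        "domain_admin_allowed_tiers": {0},   # Domain Admins belong on Tier 0 only
        "dc_outbound_web_allowed": False,    # DCs should not create outgoing web traffic
    }

    events = [
        {"type": "logon", "account": "CORP\\da-jsmith",
         "is_domain_admin": True, "target_tier": 2},
        {"type": "net_conn", "host_role": "domain_controller",
         "direction": "out", "dest_port": 443},
    ]

    def violations(events):
        for e in events:
            if (e["type"] == "logon" and e["is_domain_admin"]
                    and e["target_tier"] not in POLICY["domain_admin_allowed_tiers"]):
                yield ("domain admin logon outside Tier 0", e)
            if (e["type"] == "net_conn" and e["host_role"] == "domain_controller"
                    and e["direction"] == "out" and e["dest_port"] in (80, 443)
                    and not POLICY["dc_outbound_web_allowed"]):
                yield ("outbound web traffic from a Domain Controller", e)

    for reason, event in violations(events):
        print(f"ALERT: {reason}: {event}")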

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity on your network and respond appropriately. Bear in mind that security is a continuous process and should not stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time consuming it will be to bring our network back to a trusted operating environment in the event of an incident or compromise.


Protect your accounts with smarter ways to sign in on World Passwordless Day

May 7th, 2020

As the world continues to grapple with COVID-19, our lives have become increasingly dependent on digital interactions. Operating at home, we’ve had to rely on e-commerce, telehealth, and e-government to manage the everyday business of life. Our daily online usage has increased by over 20 percent. And if we’re fortunate enough to have a job that we can do from home, we’re accessing corporate apps from outside the company firewall.

Whether we’re signing into social media, mobile banking, or our workplace, we’re connecting via online accounts that require a username and password. The more we do online, the more accounts we have. It becomes a hassle to constantly create new passwords and remember them, so we take shortcuts. According to a Ponemon Institute study, people reuse an average of five passwords, both business and personal. This is one aspect of human nature that hackers bet on: if they get hold of one password, they know they can use it to pry open more of our digital lives. A single compromised password, then, can create a chain reaction of liability.

No matter how strong or complex a password is, it’s useless if a bad actor can socially engineer it away from us or find it on the dark web. Plus, passwords are inconvenient and a drain on productivity. People spend hours each year signing into applications and recovering or resetting forgotten usernames and passwords. This activity doesn’t make things more secure. It only drives up the costs of service desks.

People today are done with passwords

Users want something easier and more convenient. Administrators want something more secure. We don’t think anyone finds passwords a cause to celebrate. That’s why we’re helping organizations find smarter ways to sign in that users will love and hackers will hate. Our hope is that instead of World Password Day, we’ll start celebrating World Passwordless Day.

Passwordless Day Infographic

  • People reuse an average of five passwords across their accounts, both business and personal (Ponemon Institute survey/Yubico).
  • The average person has 90 accounts (Thycotic).
  • Fifty-five percent would prefer a method of protecting accounts that doesn’t involve passwords (Ponemon Institute survey/Yubico).
  • Sixty-seven percent of American consumers surveyed by Visa have used biometric authentication and prefer it to passwords.
  • Between 100 million and 150 million people use a passwordless method each month (Microsoft research, April 2020).

Since an average of one in every 250 corporate accounts is compromised each month, we know that relying on passwords isn’t a good enterprise defense strategy. As companies continue to add more business applications to their portfolios, the cost of passwords only goes up. In fact, password resets account for 30 to 60 percent of companies’ support desk calls. Given how ineffective passwords can be, it’s surprising how many companies haven’t turned on multi-factor authentication (MFA) for their customers or employees.

Passwordless technology is here, and users are adopting it as the best experience for strong authentication. Last November at Microsoft Ignite, we shared that more than 100 million people were already signing in using passwordless methods each month. That number has now reached over 150 million people.

We now have the momentum to push forward initiatives that increase security and reduce cost. New passwordless technologies give users the benefits of MFA in one gesture. To sign in securely with Windows Hello, all you have to do is show your face or press your finger. Microsoft has built support for passwordless authentication into our products and services, including Office, Azure, Xbox, and GitHub. You don’t even need to create a username anymore; you can use your phone number instead. Administrators can use single sign-on in Azure Active Directory (Azure AD) to enable passwordless authentication for an unlimited number of apps through native functionality in Windows Hello, the phone-as-a-token capabilities in the Microsoft Authenticator app, or security keys built on the FIDO2 open standards.
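Under the hood, these passwordless methods rest on public-key challenge-response rather than shared secrets. The sketch below illustrates that core idea using the pyca/cryptography package; it is a toy exchange, not the WebAuthn/FIDO2 protocol itself, which adds attestation, origin binding, and replay protections on top.

    # Sketch: the challenge-response idea underneath passwordless sign-in.
    # The private key stays on the device (as with Windows Hello or a FIDO2 key);
    # the server stores only the public key, so there is no password to steal.
    # Requires the pyca/cryptography package; real systems use WebAuthn, not this toy.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Enrollment: the device generates a key pair and registers the public half.
    device_key = Ed25519PrivateKey.generate()
    registered_public_key = device_key.public_key()

    # Sign-in: the server issues a fresh random challenge...
    challenge = os.urandom(32)

    # ...the device signs it after a local gesture (face, fingerprint, PIN)...
    signature = device_key.sign(challenge)

    # ...and the server verifies the signature against the registered key.
    try:
        registered_public_key.verify(signature, challenge)
        print("Sign-in succeeded: possession of the device key proven.")
    except InvalidSignature:
        print("Sign-in rejected.")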

Of course, we would never advise our customers to try anything we haven’t tried ourselves. We’re always our own first customer. Microsoft’s IT team switched to passwordless authentication, and now 90 percent of Microsoft employees sign in without entering a password. As a result, the hard and soft costs of supporting passwords fell by 87 percent. We expect other customers will see similar benefits in employee productivity, lower IT costs, and a stronger security posture. To learn more about our approach, watch the CISO spotlight episode with Bret Arsenault (Microsoft CISO) and me. Because we took this approach 18 months ago, we were better set up for seamless, secure remote work during COVID-19.

For many of us, working from home will be a new norm for the foreseeable future. We see many opportunities for using passwordless methods to better secure digital accounts that people rely on every day. Whether you’re protecting an organization or your own digital life, every step towards passwordless is a step towards improving your security posture. Now let’s embrace the world of passwordless!


Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Secure the software development lifecycle with machine learning

April 16th, 2020

Every day, software developers stare down a long list of features and bugs that need to be addressed. Security professionals try to help by using automated tools to prioritize security bugs, but too often, engineers waste time on false positives or miss a critical security vulnerability that has been misclassified. To tackle this problem, data science and security teams came together to explore how machine learning could help. We discovered that by pairing machine learning models with security experts, we can significantly improve the identification and classification of security bugs.

At Microsoft, 47,000 developers generate nearly 30,000 bugs a month. These items get stored across more than 100 Azure DevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001, Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high-priority security bugs 97 percent of the time. This is an overview of how we did it.

Qualifying data for supervised learning

Our goal was to build a machine learning system that classifies bugs as security/non-security and critical/non-critical with a level of accuracy as close as possible to that of a security expert. To accomplish this, we needed a high volume of good data. In supervised learning, machine learning models learn how to classify data from pre-labeled data. We planned to feed our model lots of bugs labeled security and others that aren’t. Once the model was trained, it would be able to use what it learned to label data that was not pre-classified. To confirm that we had the right data to effectively train the model, we answered four questions:

  • Is there enough data? Not only do we need a high volume of data, we also need data that is general enough and not fitted to a small number of examples.
  • How good is the data? If the data is noisy, we can’t trust that every data/label pair is teaching the model the truth. However, data from the wild is likely to be imperfect, so we looked for systemic problems rather than trying to make it perfect.
  • Are there data usage restrictions? Are there reasons, such as privacy regulations, that we can’t use the data?
  • Can data be generated in a lab? If we can generate data in a lab or some other simulated environment, we can overcome other issues with the data.

Our evaluation gave us confidence that we had enough good data to design the process and build the model.

Data science + security subject matter expertise

Our classification system needs to perform like a security expert, which means the subject matter expert is as important to the process as the data scientist. To meet our goal, security experts approved training data before we fed it to the machine learning model. We used statistical sampling to provide the security experts a manageable amount of data to review. Once the model was working, we brought the security experts back in to evaluate the model in production.
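As a tiny illustration of the sampling step, the fragment below draws a random review batch for the experts. The batch size is arbitrary here; a real process would size the sample for a target confidence level.

    # Sketch: draw a random, unbiased batch of bugs for expert label review.
    # The batch size of 400 is arbitrary for illustration.
    import random

    all_bugs = [f"bug-{i}" for i in range(30_000)]   # roughly a month of bugs
    review_batch = random.sample(all_bugs, k=400)    # experts approve these labels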

With a process defined, we could design the model. To classify bugs accurately, we used a two-step machine learning operation. First, the model learned how to separate security from non-security bugs. In the second step, the model applied severity labels (critical, important, low-impact) to the security bugs.
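The following is a minimal sketch of that two-step idea using scikit-learn on bug titles. It is not Microsoft’s production model; the training examples are invented, and a real system would train on millions of labeled work items.

    # Sketch of the two-step classification idea with scikit-learn.
    # Step 1 separates security from non-security bugs; step 2 runs only on
    # bugs labeled "security" and assigns a severity. Training data is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    titles = [
        "Buffer overflow in parser when input exceeds 4KB",
        "Typo in settings dialog copy",
        "SQL injection possible via search field",
        "Button misaligned on high-DPI displays",
    ]
    is_security = [1, 0, 1, 0]

    # Step 1: security vs. non-security, trained on all bugs.
    security_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    security_clf.fit(titles, is_security)

    # Step 2: severity, trained only on the security-labeled bugs.
    security_titles = [t for t, s in zip(titles, is_security) if s]
    severity = ["important", "critical"]  # invented labels for the two bugs above
    severity_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    severity_clf.fit(security_titles, severity)

    new_bug = ["Heap corruption when opening a crafted file"]
    if security_clf.predict(new_bug)[0]:
        print("security bug, predicted severity:", severity_clf.predict(new_bug)[0])
    else:
        print("non-security bug")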

Our approach in action

Building an accurate model is an iterative process that requires strong collaboration between subject matter experts and data scientists:

Data collection: The project starts with data science. We identify all the data types and sources and evaluate their quality.

Data curation and approval: Once the data scientist has identified viable data, the security expert reviews the data and confirms the labels are correct.

Modeling and evaluation: Data scientists select a data modeling technique, train the model, and evaluate model performance.

Evaluation of model in production: Security experts evaluate the model in production by monitoring the average number of bugs and manually reviewing a random sampling of bugs.

The process didn’t end once we had a model that worked. To make sure our bug modeling system keeps pace with the ever-evolving products at Microsoft, we conduct automated re-training. The data is still approved by a security expert before the model is retrained, and we continuously monitor the number of bugs generated in production.
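A simple production guardrail of the kind described might look like the sketch below: compare today’s rate of security-labeled bugs against the recent baseline and queue a manual review when it drifts. The threshold and figures are illustrative only.

    # Sketch: flag drift in the model's daily security-bug rate so a human
    # review happens before the next automated retrain. Figures are illustrative.
    from statistics import mean, stdev

    history = [112, 98, 105, 120, 101, 99, 108]  # security bugs labeled per day
    today = 190

    mu, sigma = mean(history), stdev(history)
    if abs(today - mu) > 3 * sigma:
        print(f"Drift detected: {today} vs. baseline {mu:.0f} +/- {sigma:.0f}; "
              "queue a manual sample review before retraining.")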

More to come

By applying machine learning to our data, we accurately classify which work items are security bugs 99 percent of the time. The model is also 97 percent accurate at labeling critical and non-critical security bugs. This level of accuracy gives us confidence that we are catching more security vulnerabilities before they are exploited.

In the coming months, we will open-source our methodology on GitHub.

In the meantime, you can read a published academic paper, Identifying security bug reports based solely on report titles and noisy data, for more details. Or download a short paper that was featured at Grace Hopper Celebration 2019.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity. To learn more about our security solutions, visit our website.


Microsoft and partners design new device security requirements to protect against targeted firmware attacks

October 21st, 2019

Recent developments in security research and real-world attacks demonstrate that as more protections are proactively built into the OS and in connected services, attackers are looking for other avenues of exploitation with firmware emerging as a top target. In the last three years alone, NIST’s National Vulnerability Database has shown nearly a five-fold increase in the number of firmware vulnerabilities discovered.
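For readers who want to reproduce that kind of count, the sketch below queries NVD’s public CVE API for firmware-related entries by year. The v2.0 endpoint and parameters shown are an assumption that may change; note that the API caps each publication-date window at 120 days, so the query runs per quarter, and unauthenticated clients are rate-limited.

    # Sketch: count firmware-related CVEs per year via NVD's public CVE API
    # (v2.0). Publication-date windows are capped at 120 days, so query quarterly.
    import time
    import requests

    URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    QUARTERS = [("01-01", "03-31"), ("04-01", "06-30"),
                ("07-01", "09-30"), ("10-01", "12-31")]

    def firmware_cve_count(year):
        total = 0
        for start, end in QUARTERS:
            params = {
                "keywordSearch": "firmware",
                "pubStartDate": f"{year}-{start}T00:00:00.000",
                "pubEndDate": f"{year}-{end}T23:59:59.999",
            }
            resp = requests.get(URL, params=params, timeout=30)
            resp.raise_for_status()
            total += resp.json()["totalResults"]
            time.sleep(6)  # stay under the unauthenticated rate limit
        return total

    for year in (2016, 2017, 2018, 2019):
        print(year, firmware_cve_count(year))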

To combat threats specifically targeted at the firmware and operating system levels, we’re announcing a new initiative we’ve been working on with partners to design what we call Secured-core PCs. These devices, created in partnership with our PC manufacturing and silicon partners, meet a specific set of device requirements that apply the security best practices of isolation and minimal trust to the firmware layer, or the device core, that underpins the Windows operating system. These devices are designed specifically for industries like financial services, government, and healthcare, and for workers who handle highly sensitive IP, customer, or personal data, including PII, as these are higher-value targets for nation-state attackers.


In late 2018, security researchers discovered that the hacking group Strontium had been using firmware vulnerabilities to target systems in the wild with malware delivered through a firmware attack. As a result, the malicious code was hard to detect and difficult to remove: it could persist even across common cleanup procedures like an OS re-install or a hard drive replacement.

Why attackers and researchers are devoting more effort toward firmware

Firmware is used to initialize the hardware and other software on the device, and it has a higher level of access and privilege than the hypervisor and operating system kernel, making it an attractive target for attackers. Attacks targeting firmware can undermine mechanisms like secure boot and other security functionality implemented by the hypervisor or operating system, making it more difficult to identify when a system or user has been compromised. Compounding the problem, endpoint protection and detection solutions have limited visibility at the firmware layer, since firmware runs underneath the operating system, making evasion easier for attackers going after it.

What makes a Secured-core PC?

Secured-core PCs combine identity, virtualization, operating system, hardware, and firmware protection to add another layer of security underneath the operating system. Unlike software-only security solutions, Secured-core PCs are designed to prevent these kinds of attacks rather than simply detect them. Our investments in Windows Defender System Guard and Secured-core PC devices give the rich ecosystem of Windows 10 devices uniform assurances around the integrity of the launched operating system, along with verifiable measurements of the operating system launch, to help mitigate threats taking aim at the firmware layer. These requirements enable customers to boot securely, protect the device from firmware vulnerabilities, shield the operating system from attacks, prevent unauthorized access to devices and data, and ensure that identity and domain credentials are protected.

The built-in measurements can be used by SecOps and IT admins to remotely monitor the health of their systems using System Guard runtime attestation and implement a zero-trust network rooted in hardware. This advanced firmware security works in concert with other Windows features to ensure that Secured-core PCs provide comprehensive protections against modern threats.


Removing trust from the firmware

Starting with Windows 8, we introduced Secure Boot to mitigate the risk posed by malicious bootloaders and rootkits; it relies on Unified Extensible Firmware Interface (UEFI) firmware to allow only properly signed bootloaders, like the Windows boot manager, to execute. This was a significant step forward in protecting against these specific types of attacks. However, since the firmware is already trusted to verify the bootloaders, Secure Boot on its own does not protect from threats that exploit vulnerabilities in that trusted firmware. That’s why we worked with our partners to ensure these new Secured-core capabilities are shipped in devices right out of the box.

Using new hardware capabilities from AMD, Intel, and Qualcomm, Windows 10 now implements System Guard Secure Launch as a key Secured-core PC device requirement to protect the boot process from firmware attacks. System Guard uses the Dynamic Root of Trust for Measurement (DRTM) capabilities built into the latest silicon from AMD, Intel, and Qualcomm. With DRTM, the system leverages firmware to start the hardware and then, shortly after, re-initializes into a trusted state, using the OS boot loader and processor capabilities to send the system down a well-known and verifiable code path. This mechanism helps limit the trust assigned to firmware and provides powerful mitigation against cutting-edge, targeted threats against firmware. It also helps protect the integrity of the virtualization-based security (VBS) functionality implemented by the hypervisor from firmware compromise. VBS in turn relies on the hypervisor to isolate sensitive functionality from the rest of the OS, which helps protect that functionality from malware that may have infected the normal OS, even with elevated privileges. Protecting VBS is critical, since it is used as a building block for important OS security capabilities like Windows Defender Credential Guard, which protects against malware maliciously using OS credentials, and Hypervisor-protected Code Integrity (HVCI), which ensures that a strict code integrity policy is enforced and that all kernel code is signed and verified.


Being able to measure that the device booted securely is another critical piece of this additional layer of protection from firmware compromise, one that gives admins added confidence that their endpoints are safe. That’s why we implemented the Trusted Platform Module 2.0 (TPM) as one of the device requirements for Secured-core PCs. By using the TPM to measure the components used during the secure launch process, we help customers enable zero-trust networks using System Guard runtime attestation. Conditional access policies can be implemented based on the reports provided by the System Guard attestation client running in the isolated VBS environment.
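The tamper evidence that measurement provides comes from the TPM’s extend-only semantics: a PCR can never be set directly, only folded forward with each new measurement. The simplified model below (plain Python, not a TPM interface) shows why a modified boot component inevitably produces a final value that fails attestation.

    # Sketch: the TPM "extend" semantics behind measured boot. A PCR can only be
    # folded forward (new = H(old || H(measurement))), never set directly, so any
    # change to a measured component changes the final value a verifier expects.
    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    boot_chain = [b"firmware-image", b"boot-manager", b"os-loader", b"hypervisor"]

    pcr = bytes(32)  # PCRs start at zero on reset
    for component in boot_chain:
        pcr = extend(pcr, component)
    golden = pcr  # the value attestation expects for a healthy boot

    # A tampered bootloader yields a different chain of measurements:
    pcr = bytes(32)
    for component in [b"firmware-image", b"evil-boot-manager", b"os-loader", b"hypervisor"]:
        pcr = extend(pcr, component)

    print("matches golden value:", pcr == golden)  # False -> attestation fails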

In addition to the Secure Launch functionality, Windows implements additional safeguards that operate when the OS is running to monitor and restrict potentially dangerous firmware functionality accessible through System Management Mode (SMM).

Beyond the hardware protection of firmware featured in Secured-core PCs, Microsoft recommends a defense-in-depth approach including security review of code, automatic updates, and attack surface reduction. Microsoft has provided an open-source firmware project called Project Mu that PC manufacturers can use as a starting point for secure firmware.

How to get a Secured-core PC

Our ecosystem partnerships have enabled us to add this additional layer of security, right out of the box, in devices designed for highly targeted industries and for end users who handle mission-critical data in data-sensitive sectors like government, financial services, and healthcare. These innovations build on the value of Windows 10 Pro, which comes with built-in protections like firewall, secure boot, and file-level information-loss protection standard on every device.

More information on devices that are verified Secured-core PCs, including those from Dell, Dynabook, HP, Lenovo, Panasonic, and Surface, can be found on our web page.


David Weston (@dwizzzleMSFT)
Partner Director, OS Security


Finding the signal of community in all the noise at Black Hat

August 16th, 2018

I don’t know about you, but I find large conferences overwhelming. Don’t get me wrong, nothing beats the innovative potential of bringing a diverse group of brilliant people together to hash through thorny issues and share insights. But there are so many speakers, booths, and people, it can be a challenge to find the signal in all the noise (did I mention conferences are also really loud?).

So last week when I stepped into the first of multiple showrooms at the Mandalay Hotel in Las Vegas for the Black Hat Briefing, I have to admit I felt a little nostalgia for the very first Black Hat Conference. It was 1997 at the old Aladdin Casino in Las Vegas, a casino with a long and colorful history, slated to close a few months after the conference ended. 1997: that was before Facebook and the iPhone, before the cloud. At the time, the RSA Conference was still mostly focused on cryptography, and those of us concerned about security vulnerabilities and how they impacted practitioners day in and day out had few opportunities to just get together and talk. The first Black Hat Briefing was very special. If my memory serves, there were only a couple hundred of us in attendance (compared to thousands today), and through those connections we built a community and an industry.

Building a community was key to creating the information security industry that exists today, and I believe that building community is just as critical now as we face down the new security threats of a cloud-and-edge world, an IoT world. We need the whole defender community (white hat hackers, industry, and government) working together to protect the security of our customers.

The security research community plays a fundamental role in community-based defense

Over the last few years, Microsoft has been expanding and redefining what makes up our security community, one of the many positive evolutions since that first Black Hat. Like most tech companies, we once believed that any hacker outside of the organization posed a risk, but as we’ve gotten to know each other through many years of hard-earned trust and collaboration, we, and the security research community, have learned that our values aren’t so different. Sometimes the only way to make something stronger is to break it. We know we can’t on our own find all the gaps and errors in code that lead to vulnerabilities that criminals exploit to steal money and data. We need great minds both inside and outside our organization. Many of those great minds in the security research community collaborate with us through the Microsoft Security Response Center, and Black Hat was the perfect place to announce the subset of those researchers that made our annual Top 100 Security Researchers List.

Image of the Top 100 sign at the Black Hat Conference.


We really appreciate the ongoing support from the community and encourage new researchers to report vulnerabilities to the Microsoft Security Response Center and participate in the Microsoft Bounty Program.

It takes a community to protect the security of our customers

As much as Microsoft values the relationship we have with researchers, we also attended Black Hat as industry partners. We want to help educate our peers on notable vulnerabilities and exploits, and share knowledge following major security events. As an example, one of our sessions focused on how Spectre and Meltdown are a wake-up call on multiple dimensions: how we engineer, how we partner, how we react when we find new security vulnerabilities, and how we need to become more coordinated. When I think about what was so exciting about that first conference, this is what comes to mind: those moments when we hear what our partners have learned, share what we know, and build on those insights to strengthen our collective response. The tech industry is increasingly interdependent. It’s going to take all of us working together to protect the safety and security of our customers’ devices and data.

Image of the Black Hat Conference in Las Vegas.


But the meeting of the minds at annual security conferences, while important, is not enough. Microsoft also believes that we need a more structured approach to our collaboration. Cybersecurity is not just about threats from hackers and criminal groups; it has increasingly become a situation where we’re facing a cyberweapons arms race with governments attacking users around the world. We know this is a challenge we must pursue with our partners and customers, with a sense of shared responsibility and a focus on constantly making it easier for everyone to benefit from the latest in security advances. Microsoft has been working to help organize the industry in pursuit of this goal.

This past April during the RSA Conference, we came together, initially as 34 companies and now 44, and agreed to a new Cybersecurity Tech Accord. In this accord, we all pledge to help protect every customer, regardless of nationality, and to refrain from helping governments attack innocent civilians. It’s a foundation, one we are building on, to take coordinated action and to work with all our partners and many others to strengthen the resilience of the ecosystem for all our customers.

I admit it, I do sometimes miss attending those small, tightly knit conferences of old. But I’m even more inspired by the possibilities that I see as we continue to build on these collaborative models. We’ve seen a lot of progress recently working with our partners and the security research community. If you listen closely, I think you can hear the signal breaking through.

Partnerships power the future of better security

This post is authored by Jeremy Dallman, Principal Program Manager.


Our goal in building the Microsoft Graph Security API is to enable customers to share insights and take action across security solutions to improve protection and speed response. By creating a connected security ecosystem, Microsoft and partners can enable developers to simplify integration and alert correlation, unlock valuable context to aid investigation, and streamline security operations.

Palo Alto Networks shares the vision of enabling better integration to benefit our joint customers. They are a member of the Microsoft Intelligent Security Association, and as part of the Graph Security API launch at RSA, we showcased an application that demonstrated the power of integration between multiple Microsoft and Palo Alto Networks security offerings. We demonstrated how a Palo Alto Networks provider for the Security Graph can prevent successful cyberattacks by correlating alerts from Microsoft with its threat intelligence, firewall logs, and automated firewall policy changes.

Microsoft Graph Security API proof-of-concept integration using Power BI

Our close collaboration continues, and this week at the Palo Alto Networks user conference, Ignite 2018, we will unveil the latest joint innovation. Microsoft and Palo Alto Networks have worked to connect the Microsoft Graph Security API and the Palo Alto Networks Application Framework with a provider that brokers interactions between the two platforms. We will also demo a Microsoft Power BI solution that accesses information from both the Palo Alto Networks Application Framework and the Microsoft Graph Security API, giving our customers the ability to query and access all of their security data through a common interface.
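As a rough sketch of what querying the Microsoft Graph Security API looks like from code, the snippet below lists recent high-severity alerts over plain HTTP. Acquiring the OAuth access token through an Azure AD app registration is out of scope here; the token value is a placeholder.

    # Sketch: list high-severity alerts from the Microsoft Graph Security API.
    # ACCESS_TOKEN is a placeholder; obtain a real token via Azure AD.
    import requests

    ACCESS_TOKEN = "<acquired-via-azure-ad>"
    url = "https://graph.microsoft.com/v1.0/security/alerts"

    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"$top": 5, "$filter": "severity eq 'high'"},  # OData query options
        timeout=30,
    )
    resp.raise_for_status()
    for alert in resp.json().get("value", []):
        print(alert.get("severity"), "-", alert.get("title"))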

For those attending Ignite this week, be sure to join the Wednesday (5/23) 4:00 PM session, where Jason Wescott and Francesco Vigo will discuss the collaboration between the Microsoft Graph Security API and the Palo Alto Networks Application Framework. If you aren’t at Ignite, visit the Graph Security API documentation or sign up to request access to the Palo Alto Networks Application Framework API to start exploring how you can take advantage of this powerful collaboration!


Secure Development Blog

We’re proud to announce Secure Development at Microsoft, our developer-focused security blog. The blog was created to inform developers of new security tools, services, open source projects, and development best practices, to help instill a security mindset across the development community and enable collaboration among its members.

Blog posts will be written by Microsoft engineers to give developers the right level of technical depth and get them up and running with integrating security assurance into their projects right away. We’ll cross-reference their posts so anyone following this blog can also check out the technical side of what we do.

Check them out!


What’s New with Microsoft Threat Modeling Tool 2016

October 8th, 2015

Threat modeling is an invaluable part of the Security Development Lifecycle (SDL) process. We have discussed in the past how applying a structured approach to threat scenarios during the design phase of development helps teams more effectively and less expensively identify security vulnerabilities, determine risks from those threats, and establish appropriate mitigations.

The Microsoft Threat Modeling Tool 2016 is a free tool to help you find threats in the design phase of software projects, available from the Microsoft Download Center. This latest release simplifies working with threats, provides a new editor for defining your own threats, and includes several improvements:

  • New Threat Grid
  • Template Editor
  • Migrating Existing Data Flow Diagrams

New Threat Grid

The threat grid has been overhauled. Now you can sort and filter on any column. You can easily filter the grid to show threats for any flow, sort on the interaction column to group all the threats for each flow, or sort on the “changed by” column to find the threat you just edited.

Template Editor

Microsoft Threat Modeling Tool 2016 comes with a base set of threat definitions using STRIDE categories. This set includes only suggested threat definitions and mitigations, which are automatically generated to show potential security vulnerabilities for your data flow diagram. To offer more flexibility, Microsoft Threat Modeling Tool 2016 gives users the option to add their own threats related to their specific domain. This means users can extend the base set of threat definitions using the template editor.
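To illustrate the idea of template-driven threat generation, the sketch below applies a few rule predicates to data flow diagram interactions and emits suggested STRIDE threats. The rules and flow properties are invented examples, not the tool’s actual threat definitions.

    # Sketch: template-driven threat generation in spirit. Each rule maps a
    # property of a data-flow interaction to a suggested STRIDE threat; these
    # rules are invented examples, not the tool's own definitions.
    RULES = [
        (lambda f: not f["encrypted"], "Information Disclosure",
         "Data flowing across {name} may be sniffed in transit."),
        (lambda f: f["crosses_trust_boundary"] and not f["authenticated"],
         "Spoofing", "The source of {name} may be impersonated."),
        (lambda f: f["crosses_trust_boundary"], "Tampering",
         "Data crossing the trust boundary in {name} may be modified."),
    ]

    flows = [
        {"name": "Browser -> Web App", "encrypted": True,
         "crosses_trust_boundary": True, "authenticated": False},
        {"name": "Web App -> SQL DB", "encrypted": False,
         "crosses_trust_boundary": False, "authenticated": True},
    ]

    for flow in flows:
        for predicate, category, template in RULES:
            if predicate(flow):
                print(f"[{category}] {template.format(name=flow['name'])}")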

The template editor also allows users to modify the stencils available on the drawing surface.  If you have a stencil you would like to make available for your DFDs, you can add it.  If you need another stencil property, you can add that.

Migrating Existing Data Flow Diagrams

Threat modeling is an iterative process. Development teams create threat models which evolve over time as systems and threats change. We wanted to make sure the new version supports this flow. Microsoft Threat Modeling Tool 2016 will load any threat model from Microsoft Threat Modeling Tool 2014, in the .tm4 format. Threat models created with the v3 version of the tool (.tms format) must be migrated to the Microsoft Threat Modeling Tool 2014 format (.tm4) before they can be loaded in Microsoft Threat Modeling Tool 2016. Microsoft Threat Modeling Tool 2014 offers a migration tool for threat models created with version 3.1.8. (Note: migrating threat models from v3.1.8 requires Microsoft Visio 2007 or later.)

Additional Information

We hope these new enhancements in Microsoft Threat Modeling Tool 2016 will provide greater flexibility and help enable you to effectively implement the SDL process in your organization.

Thank you to all who helped in shipping this release through internal and external feedback. Your input was critical to improving the tool and customer experience.

For more information and additional resources, visit:


Alex Armanasu is an Engineer on the Secure Development Tools team at Microsoft. He’s responsible for the Threat Modeling component of the Security Development Lifecycle (SDL).