Archive for the ‘Voice of the Community’ Category

Align your security and network teams to Zero Trust security demands

January 10th, 2022

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Jennifer Minella, Founder and Principal Advisor on Network Security at Viszen Security, about strategies for aligning the security operations center (SOC) and network operations center (NOC) to meet the demands of Zero Trust and protect your enterprise.

Natalia: In your experience, why are there challenges bringing together networking and security teams?

Jennifer: Ultimately, it’s about trust. As someone who’s worked on complex network-based security projects, I’ve had plenty of experience sitting between those two teams. Often the security teams have an objective, which gets translated into specific technical mandates, or even a specific product. As in, we need to achieve X, Y, and Z level security; therefore, the networking team should just go make this product work. That causes friction because sometimes the networking team didn’t get a voice in that.

Sometimes it’s not even the right product or technology for what the actual goal was, but it’s too late at that point because the money is spent. Then it’s the networking team that looks bad when they don’t get it working right. It’s much better to bring people together to collaborate, instead of one team picking a solution.

Natalia: How does misalignment between the SOC and NOC impact the business?

Jennifer: When there’s an erosion of trust and greater friction, it makes everything harder. Projects take longer. Decisions take longer. That lack of collaboration can also introduce security gaps. I have several examples, but I’m going to pick healthcare here. Say the Chief Information Security Officer’s (CISO) team believes that their bio-medical devices are secured a certain way from a network perspective, but that’s not how they’re secured. Meaning, they’re secured at a lower level that would not be sufficient based on how the CISO and the compliance teams were tracking it. So, there’s this misalignment, miscommunication. Not that it’s malicious; nobody is doing it on purpose, but requirements aren’t communicated well. Sometimes there’s a lack of clarity about whose responsibility it is, and what those requirements are. Even within larger organizations, it might not be clear what the actual standards and processes are that support that policy from the perspective of governance, risk, and compliance (GRC).

Natalia: So, what are a few effective ways to align the SOC and NOC?

Jennifer: If you can find somebody that can be a third party, somebody that’s going to come in and help the teams collaborate and build trust, it’s invaluable. It can be someone who specializes in organizational health or a technical third party; somebody like me sitting in the middle who says, “I understand what the networking team is saying. I hear you. And I understand what the security requirements are. I get it.” Then you can figure out how to bridge that gap and get both teams collaborating with bi-directional communication, instead of security just mandating that this thing gets done.

It’s also about the culture, the interpersonal relationships involved. It can be a problem if one team is picked (to be in charge) instead of another. Maybe it’s the SOC team versus the NOC team, and the SOC team is put in charge; therefore, the NOC team just gives up. It might be better to go with a neutral internal person instead, like a program manager or a digital-transformation leader: somebody who owns a program or a project but isn’t tied to the specifics of security or network architecture. Building that kind of cross-functional team between departments is a good way to solve problems.

There isn’t a wrong way to do it if everybody is being heard. Emails are not a great way to accomplish communication among teams. But getting people together, outlining what the goal is, and working towards it, that’s preferable to just having discrete decision points and mandates. Here’s the big goal: what are some ideas to get from point A to point B? That’s something we must do moving into Zero Trust strategies.

Natalia: Speaking of Zero Trust, how does Zero Trust figure into an overarching strategy for a business?

Jennifer: I describe Zero Trust as a concept. It’s more of a mindset, like “defense in depth,” “layered defense,” or “concepts of least privilege.” Trying to put it into a fixed model or framework is what’s leading to a lot of the misconceptions around the Zero Trust strategy. For me, getting from point A to point B with organizations means taking baby steps: identifying gaps, use cases, and then finding the right solutions.

A lot of people assume Zero Trust is this granular one-to-one relationship of every element on the network. Meaning, every user, every endpoint, every service, and application data set is going to have a granular “allow or deny” policy. That’s not what we’re doing right now. Zero Trust is just a mindset of removing inherent trust. That could mean different things, for example, it could be remote access for employees on a virtual private network (VPN), or it could be dealing with employees with bring your own device (BYOD). It could mean giving contractors or people with elevated privileges access to certain data sets or applications, or we could apply Zero Trust principles to secure workloads from each other.
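To make that mindset concrete, here is a toy sketch in Python of a per-request check that removes inherent trust. Every name, attribute, and rule below is invented purely for illustration; it does not describe any particular product or Jennifer’s own tooling.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        mfa_verified: bool       # identity verified with multi-factor authentication
        device_compliant: bool   # e.g., managed, patched, disk-encrypted
        resource: str

    # Hypothetical resource and user sets, invented for this sketch.
    SENSITIVE = {"payroll-db", "biomed-device-mgmt"}
    PRIVILEGED_USERS = {"alice", "bob"}

    def allow(req: AccessRequest) -> bool:
        """Evaluate every request on its own merits; network location grants nothing."""
        if not req.mfa_verified:
            return False                         # no inherent trust in the user
        if not req.device_compliant:
            return False                         # no inherent trust in the device
        if req.resource in SENSITIVE:
            return req.user in PRIVILEGED_USERS  # least privilege for sensitive data
        return True

    print(allow(AccessRequest("alice", True, True, "payroll-db")))   # True
    print(allow(AccessRequest("mallory", True, False, "wiki")))      # False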

Natalia: And how does Secure Access Service Edge (SASE) differ from Zero Trust?

Jennifer: Zero Trust is not a product. SASE, on the other hand, is a suite of products and services put together to help meet Zero Trust architecture objectives. SASE is a service-based product offering that has a feature set. It varies depending on the manufacturer, meaning, some will give you these three features and some will give you another five or eight. Some are based on endpoint technology, some are based on software-defined wide area network (SD-WAN) solutions, while some are cloud routed.

Natalia: How does the Zero Trust approach fit with the network access control (NAC) strategy?

Jennifer: I jokingly refer to Zero Trust as “NAC 4.0.” I’ve worked in the NAC space for over 15 years, and it’s just a few new variables. But they’re significant variables. Working with cloud-hosted resources in cloud-routed data paths is fundamentally different than what we’ve been doing in local area network (LAN) based systems. But if you abstract that (the concepts of privilege, authentication, authorization, and data paths), it’s all the same. I lump the vendors and types of solutions into two different categories: cloud-routed versus traditional on-premises (for a campus environment). The technologies are drastically different between those two use cases. For that reason, the enforcement models are different and will vary with the products.

Natalia: How do you approach securing remote access with a Zero Trust mindset? Do you have any guidelines or best practices?

Jennifer: It’s alarming how many organizations set up VPN remote access so that users are added onto the network as if they were sitting in their office. For a long time that was accepted because, before the pandemic, there was a limited number of remote users. Now, remote access, in addition to the cloud, is more prevalent. There are many people with personal devices or some type of blended, corporate-managed device. It’s a recipe for disaster.

The threat surface has increased exponentially, so you need to be able to go back in and use a Zero Trust product in a kind of enclave model, which works a lot like a VPN. You set up access at a point (wherever the VPN is) and the users come into that. That’s a great way to start and you can tweak it from there. Your users access an agent or a platform that will stay with them through that process of tweaking and tuning. It’s impactful because users are switching from a VPN client to a kind of a Zero Trust agent. But they don’t know the difference because, on the back end, the access is going to be restricted. They’re not going to miss anything. And there’s lots of modeling engines and discovery that products do to map out who’s accessing what, and what’s anomalous. So, that’s a good starting point for organizations.

Natalia: How should businesses think about telemetry? How can security and networking teams best use it to continue to keep the network secure?

Jennifer: You need to consider the capabilities of visibility, telemetry, and discovery on endpoints. You’re not just looking at what’s on the endpoint (we’ve been doing that) but at what the endpoint is talking to on the internet when it’s not behind the traditional perimeter. Things like secure web gateways, or solutions like a cloud access security broker (CASB), which further extends that from an authentication standpoint, and data pathing with SD-WAN routing—all of that plays in.

Natalia: What is a common misconception about Zero Trust?

Jennifer: You don’t have to boil the ocean with this. We know from industry reports, analysts, and the National Institute of Standards and Technology (NIST) that there’s not one product that’s going to meet all the Zero Trust requirements. So, it makes sense to chunk things into discrete programs and projects that have boundaries, then find a solution that works for each. Zero Trust is not about rip and replace.

The first step is overcoming that mental hurdle of feeling like you must pick one product that will do everything. If you can aggregate that a bit and find a product that works for two or three, that’s awesome, but it’s not a requirement. A lot of organizations are trying to research everything ad nauseam before they commit to anything. But this is a volatile industry, and it’s likely that with any product’s features, the implementation is going to change drastically over the next 18 months. So, if you’re spending nine months researching something, you’re not going to get the full benefit in longevity. Just start with something small that’s palatable from a resource and cost standpoint.

Natalia: What types of products work best in helping companies take a Zero Trust approach?

Jennifer: A lot of requirements stem from the organization’s technological culture. Meaning, is it on-premises or a cloud environment? I have a friend that was a CISO at a large hospital system, which required having everything on-premises. He’s now a CISO at an organization that has zero on-premises infrastructure; they’re completely in the cloud. It’s a night-and-day change for security. So, you’ve got that, combined with trying to integrate with what’s in the environment currently. Because typically these systems are not greenfield, they’re brownfield—we’ve got users and a little bit of infrastructure and applications, and it’s a matter of upfitting those things. So, it just depends on the organization. One may have a set of requirements and applications that are newer and based on microservices. Another organization might have more on-premises legacy infrastructure architectures, and those aren’t supported in a lot of cloud-native and cloud-routed platforms.

Natalia: So, what do you see as the future for the SOC and NOC?

Jennifer: I think the message moving forward is—we must come together. And it’s not just networking and security; there are application teams to consider as well. It’s the same with IoT. These are transformative technologies. Whether it’s the combination of operational technology (OT) and IT, or the prevalence of IoT in the environment, or Zero Trust initiatives, all of these demand cross-functional teams for trust building and collaboration. That’s the big message.

Learn more

Get key resources from Microsoft Zero Trust strategy decision makers and deployment teams. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

What you need to know about how cryptography impacts your security strategy

January 4th, 2022

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest post of our Voice of the Community blog series, Microsoft Security Product Marketing Manager Natalia Godyla talks with Taurus SA Co-founder and Chief Security Officer Jean-Philippe “JP” Aumasson, author of “Serious Cryptography.” In this blog post, JP shares insights on learning and applying cryptography knowledge to strengthen your cybersecurity strategy.

Natalia: What drew you to the discipline of cryptography?

JP: People often associate cryptography with mathematics. In my case, I was not good at math when I was a student, but I was fascinated by the applications of cryptography and everything that has to do with secrecy. Cryptography is sometimes called the science of secrets. I was also interested in hacking techniques. At the beginning of the internet, I liked reading online documentation magazines and playing with hacking tools, and cryptography was part of this world.

Natalia: In an organization, who should be knowledgeable about the fundamentals of cryptography?

JP: If you had asked me 10 to 15 years ago, I might have said all you need is to have an in-house cryptographer who specializes in crypto and other people can ask them questions. Today, however, cryptography has become substantially more integrated into the components that we work with and that engineers must develop.

The good news is that crypto is far more approachable than it used to be, and is better documented. The software libraries and APIs are much easier to work with for non-specialists. So, I believe that all the engineers who work with software—from a development perspective, a development operations (DevOps) perspective, or even quality testing—need to know some basics of what crypto can and cannot do and the main crypto concepts and tools.

Natalia: Who is responsible for educating engineering on cryptography concepts?

JP: It typically falls on the security team—for example, through security awareness training. Before starting development, you create the functional requirements driven by business needs. You also define the security goals and security requirements, such as that personal data must be encrypted at rest and in transit with a given level of security. It’s truly a part of security engineering and security architecture. I advocate for teaching people fundamentals, such as confidentiality, integrity, authentication, and authenticated encryption.

As a second step, you can think of how to achieve security goals thanks to cryptography. Concretely, you have to protect some data, and you might think, “What does it mean to encrypt the data?” It means choosing a cipher with the right parameters, like the right key size. You may be restricted by the capability of the underlying hardware and software libraries, and in some contexts, you may have to use Federal Information Processing Standard (FIPS) certified algorithms.

Also, encryption may not be enough. Most of the time, you also need to protect the integrity of the data, which means using an authentication mechanism. The modern way to realize this is by using an algorithm called an authenticated cipher, which protects confidentiality and authenticity at the same time, whereas the traditional way to achieve this is to combine a cipher and a message authentication code (MAC).
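To make the authenticated-cipher idea concrete, here is a minimal sketch using AES-GCM through the open-source Python cryptography package. The library choice, key handling, and sample data are illustrative assumptions, not a prescription from JP; real systems also need a key management story.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # 256-bit key, per the key-size point above
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)   # 96-bit nonce; must never repeat under the same key
    plaintext = b"balance: 42"
    aad = b"record-1001"     # associated data: authenticated but not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, aad)

    # Decryption verifies integrity: tampering with the ciphertext or the
    # associated data raises cryptography.exceptions.InvalidTag.
    assert aesgcm.decrypt(nonce, ciphertext, aad) == plaintext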

Natalia: What are common mistakes practitioners tend to make?

JP: People often get password protection wrong. First, you need to hash passwords, not encrypt them—except in some niche cases. Second, to hash passwords you should not use a general-purpose hash function such as SHA-256 or BLAKE2. Instead, you should use a password hashing function, which is a specific kind of hashing algorithm designed to be slow and sometimes use a lot of memory, to make password cracking harder.
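As a hedged illustration of that distinction, the sketch below uses scrypt, a deliberately slow, memory-hard password hashing function in Python’s standard library. The cost parameters (n, r, p) are illustrative and should be tuned to your own hardware and latency budget.

    import hashlib
    import hmac
    import os
    from typing import Optional, Tuple

    def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
        salt = salt or os.urandom(16)  # unique random salt per user
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)  # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong guess", salt, stored)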

A second thing people tend to get wrong is authenticating data using a MAC algorithm. A common MAC construction is the hash-based message authentication code (HMAC) standard. However, people tend to believe that HMAC means the same thing as MAC. It’s only one possible way to create a MAC, among several others. Anyway, as previously discussed, today you often won’t need a MAC because you’ll be using an authenticated cipher, such as AES-GCM.
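For comparison, here is a minimal HMAC sketch built on Python’s standard library, showing HMAC as just one possible MAC construction; the key and messages are invented for illustration.

    import hashlib
    import hmac

    key = b"illustrative-shared-secret"  # in practice, a randomly generated key
    message = b"transfer $100 to account 7"

    tag = hmac.new(key, message, hashlib.sha256).digest()  # HMAC-SHA-256 tag

    def is_authentic(key: bytes, message: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)  # constant-time comparison

    assert is_authentic(key, message, tag)
    assert not is_authentic(key, b"transfer $100 to account 8", tag)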

Natalia: How does knowledge of cryptography impact security strategy?

JP: Knowledge of cryptography can help you protect the information more cost-effectively. People can be tempted to put encryption layers everywhere, but throwing crypto at a problem does not necessarily solve it. Even worse, once you choose to encrypt something, you have a second problem—key management, which is always the hardest part of any cryptographic architecture. So, knowing when and how to use cryptography will help you achieve sound risk management and minimize the complexity of your systems. In the long run, it pays off to do the right thing.

For example, if you generate random data or bytes, you must use a cryptographically secure random generator. Auditors and clients might be impressed if you tell them that you use a “true” hardware generator or even a quantum generator. These might sound impressive, but from a risk management perspective, you’re often better off using an established open-source generator, such as that of the OpenSSL toolkit.
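For instance, Python’s standard library exposes such an established generator through the secrets module, which draws from the operating system’s cryptographically secure source. A minimal sketch:

    import secrets

    session_key = secrets.token_bytes(32)    # 32 random bytes, e.g., for a key or nonce
    reset_token = secrets.token_urlsafe(32)  # URL-safe string, e.g., for a reset link

    # By contrast, the general-purpose random module (Mersenne Twister) is
    # predictable and must not be used for security purposes.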

Natalia: What are the biggest trends in cryptography?

JP: One trend is post-quantum cryptography, which is about designing cryptographic algorithms that would not be compromised by a quantum computer. We don’t have quantum computers yet, and the big question is when, if ever, will they arrive? Post-quantum cryptography, consequently, can be seen as insurance.

Two other major trends are zero-knowledge proofs and multi-party computation. These are advanced techniques that have a lot of potential to scale decentralized applications. For example, zero-knowledge proofs can allow you to verify that the output of a program is correct without re-computing the program by verifying a short cryptographic proof, which takes less memory and computation. Multi-party computation, on the other hand, allows a set of parties to compute the output of a function without knowing the input values. It can be loosely described as executing programs on encrypted data. Multi-party computation is proposed as a key technology in managed services and cloud applications to protect sensitive data and avoid single points of failure.

One big driver of innovation is the blockchain space, where zero-knowledge proofs and multi-party computation are being deployed to solve very real problems. For example, the Ethereum blockchain uses zero-knowledge proofs to improve the scalability of the network, while multi-party computation can be used to distribute the control of cryptocurrency wallets. I believe we will see a lot of evolution in zero-knowledge proofs and multi-party computation in the next 10 to 20 years, be it in the core technology or the type of application.

It would be difficult to train all engineers in these complex cryptographic concepts. So, we must design systems that are easy to use but can securely do complex and sophisticated operations. This might be an even bigger challenge than developing the underlying cryptographic algorithms.

Natalia: What’s your advice when evaluating new cryptographic solutions?

JP: As in any decision-making process, you need reliable information. Sources can be online magazines, blogs, or scientific journals. I recommend involving cryptography specialists to:

  1. Gain a clear understanding of the problem and the solution needed.
  2. Perform an in-depth evaluation of the third-party solutions offered.

For example, if a vendor tells you that they use a secret algorithm, it’s usually a major red flag. What you want to hear is something like, “We use the advanced encryption standard with a key of 256 bits and an implementation protected against side-channel attacks.” Indeed, your evaluation should not be about the algorithms, but how they are implemented. You can use the safest algorithm on paper, but if your implementation is not secure, then you have a problem.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Your guide to mobile digital forensics

December 14th, 2021

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Cellebrite Senior Director of Digital Intelligence Heather Mahalik. In this blog post, Heather talks about digital forensics, from technical guidance to hiring best practices, with a special focus on mobile forensics. 

Natalia: What is digital forensics and why is it important?

Heather: Cybersecurity is more about prevention, protection, and defense. Digital forensics is the response and is typically triggered by an incident. There are some people who say, “Oh no, we do things proactively.” For example, someone might be traveling to a foreign country, and they want to know if something is going to land on their mobile device. You may proactively scan or carry out forensics on that device before and then see what changed after. That would be a rare situation, but usually, it’s when an incident occurs and you need someone to come in and clean it up.

Natalia: How does mobile forensics differ from other forms of forensics?

Heather: Mobile forensics is fast-moving. Mobile device companies update devices and operating systems all the time. The applications we rely upon are updating. When I did digital forensics as a whole—computers, PC, and macOS—the updates weren’t the same as on mobile. There are also security levels and encryption that keep us out, and they are different on every mobile device.

When I learned forensics in 2002, it was: “Here’s a hard drive. This is how the data is laid out. This is what you can expect every single time.” You can never expect the same thing every single time with mobile forensics. In every single case you work on, there will be a variance that requires you to learn something new. I love it because I can’t get bored, but it’s also frustrating. It’s so hard to say, “OK, I’m now a master.” You’re never a master of mobile forensics.

Natalia: What does the workflow for mobile forensics look like?

Heather: I always use the terminology cradle-to-grave forensics—you get it when it first starts, and you put it to rest with your report. If you are doing beginning to end, you’re starting with the mobile device in front of you. One thing to consider is remote access, which can be good and bad. Some of the third-party applications require that a device connects to a network to extract information, but that goes against everything you’ll read about forensics. Isolate from a network. Make sure it’s protected. No connections to the device.

The next phase is to acquire the data from the device, and there are many different tools and methods to do that. You need as much access to that file system as you can get because we need all the logs in the background to do a thorough analysis.

After that, I recommend triage. Consider how you’re going to solve the who, what, where, when, why, and how. Are there any clues that you can get immediately from that device? Then dive in deeper with your forensics and analytical tools.

Natalia: What’s the best way to approach an investigation?

Heather: There was a study where they had people work on the same case in different ways. One person was given the whole case scenario—“This is what we think happened”—and another person was just asked specific questions—“Please find these things.” In the middle is the best—“We are trying to solve for X. These are the questions that I think will help us get to X. Can you answer them?”

If other people start shooting holes in your report, you need additional evidence, and that’s usually what will force validation. If someone sees that report and they’re not fighting it, it’s because they know that it’s the truth.

Natalia: What common mistakes do forensics investigators make?

Heather: The biggest mistake I see is trusting what a forensics tool reports without validating the evidence. Think about your phone. Did the artifact sync from a computer that your roommate is using and now it’s on your phone? Is it a suggestion, like when you’re typing into a search browser and it makes recommendations? Is it a shared document that you didn’t edit? There are all these considerations of how the evidence got there. You should not go from extracting a phone to reporting. There is a big piece in between. Verify and validate with more than one method and tool before you put it in your report.

Natalia: Are forensics investigation teams typically in-house or consultants?

Heather: There could be both. It depends on how frequently you need someone. I’ve been a consultant to big companies that offer incident response services. They don’t typically see mobile incidents, so they wanted me there just in case. If you do hire one person, don’t expect them to be a master of mobile, macOS, PC, and network security.

If you’re doing incident response investigations, you want someone with incident response, memory forensics, and network forensics experience. In the environments I’ve been in, we need dead disk forensics experience, so we need people who are masters of PC, macOS, and mobile because it’s usually data at rest that’s collected. It’s more terrorism and crime versus ransomware and hacking. You must weigh what you’re investigating, and if it’s all those things—terrorism/crime and ransomware/hacking—you need a forensics team because it’s rare that people are on both sides of that spectrum and really good at both.

Natalia: What advice would you give a security leader looking to hire and manage a forensics team?

Heather: When hiring people, question what they know. I’ve worked at many places where I was on the hiring team, and someone would say, “If they have X certification, they can skip to the next level.” Just because I don’t have a certification doesn’t mean I don’t know it. You also don’t know how someone scored. Make sure it’s a good cultural fit as well because with what we do in forensics, you need to rely on your teammates to get you through some of the things you come across.

When it comes to skill-building, I recommend encouraging your team to play in any free Capture the Flag provided by vendors, like SANS Institute. An employer could even put people together and say, “I want you three to work together and see how you do.” Letting your employees take training that inspires them and makes them want to keep learning is important.

Natalia: I appreciate you mentioning the difficulties of the role. It’s important to openly discuss the mental health challenges of being an investigator. How do you handle what you find in your investigations? And how do tools, like DFIR review, help?

Heather: I lean on my coworkers a lot. Especially if it’s a big case—like a missing person, someone going to trial, or someone losing their job—it’s a lot of pressure on you. You need people who understand that pressure and help you leave it behind because if it’s constantly going through your mind, it’s not healthy.

Digital Forensics and Incident Response (DFIR) Review came out about two years ago. I have put many of my whitepapers and research through the DFIR Review process because it’s a group of other experts that validate your work. That makes a lot of organizations feel comfortable. “I know this device was wiped on X date and someone tried to cover their tracks because Heather wrote a paper, and it was peer-reviewed, and it got the gold seal.” That relieves a lot of pressure.

Learn more

Explore Microsoft’s technical guidance to help build and implement cybersecurity strategy and architecture.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

How to assess and improve the security culture of your business

November 11th, 2021

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Cygenta Co-founder and Co-Chief Executive Officer Dr. Jessica Barker, author of “Confident Cyber Security: How to Get Started in Cyber Security and Futureproof Your Career” and co-author of “Cybersecurity ABCs: Delivering awareness, behaviors and culture change.” In this blog post, Jessica talks about how to build a security culture.

Natalia: How are most organizations doing? What is the state of cybersecurity culture?

Jessica: It varies—a lot of it comes down to resourcing and the emphasis placed on security from the leadership level. It can also come down to the experiences of the security team, security leadership, and the organization in terms of security incidents or near-misses. We’ve seen a lot of improvement in recent years, and that’s largely because there’s more awareness among leaders that security culture is important. Just 5 years ago, but particularly 10 years ago, there was very little discussion around culture and security culture.

Every year, we ask ClubCISO, a private group for senior information security professionals and security leaders, about security. For the last three years, they said security culture is their number one hot topic for the year ahead. They even said it in March 2021, tying with cloud. When I think about the year that cloud has had and the forced digital transformation many organizations have been through, it speaks volumes that security culture is as important of a priority as securely moving to the cloud.

Natalia: What does a cybersecurity culture assessment entail?

Jessica: In a cybersecurity culture assessment, we listen to the organization and the people who work there and understand security assumptions. When I speak to people about security culture, there’s often this idea that it is about how people behave, and that if we collect metrics around phishing, for example, it will tell us about the security culture. However, that will tell us something superficial. It’ll tell us what people are doing, not why they’re doing it.

Understanding the “why” is absolutely crucial because that’s your point of influence to change behavior. The “why” helps in understanding underlying assumptions and determining what you can do if there are gaps between what the security team wants and what people are doing.

The first stage is to understand the organizational culture, mission, and values and review the cultural symbols in the organization, including the branding, training, and messaging. Then, we run surveys, focus groups, and one-on-one interviews to encourage conversation, facilitate discussion, and understand what’s happening on a day-to-day basis, and most importantly, why.

Natalia: What are the indicators that a company needs a cybersecurity culture assessment?

Jessica: One prompt for most of our clients is that they feel like they need to do more to manage human risk, but they don’t know what. There may be incidents or near-misses. There may be indications around phishing or how people are managing passwords. There may be behavioral indicators—what they want from the people in the organization doesn’t match reality. Another key prompt is not understanding why their current culture isn’t developing in the way that they would want. Often, the organizations will have tried to deal with this in one way or another through awareness-raising, and there’s frustration because they’re telling people what to do, and they’re still not doing it. It takes a level of maturity, and it often takes organizations that aspire to be people-centric, to help their workforce be more security-conscious.

We measure security culture by gathering a lot of qualitative data to understand why people are doing what they’re doing. It goes back to the classic “start with why,” and then crunching numbers from surveys. We use grounded theory to qualify the data we get back. We immerse ourselves in that data and identify patterns. We also use anonymous quotes, comments, and keywords from workshops, focus groups, and one-on-one interviews to bring that story to life.

Natalia: What are typical challenges to establishing a positive security culture?

Jessica: I’m working with a financial services client that has a very positive organizational culture and lives by their values. But there have been challenges around security culture in this organization for many reasons, including fast digital transformation and growth. It’s taken them until this year to understand what a security culture means for their organization.

Because the people who work there felt loyalty to the organization, they wanted to behave in a secure way. They understood the importance of it, but there were blockers, including a lack of communication on why certain security controls were in place. It’s an entrepreneurial organization that moves quickly, so there were underlying cultural influences encouraging people to behave in less secure ways while prioritizing productivity. We’ve been undertaking a program to help the security team better communicate the “why,” and the organization has been receptive to it.

It’s also very hard to change behavior if the security leadership or organizational leadership team is not on board. Another consideration is the perception of a just culture. If somebody clicks a malicious link or makes a mistake, do they feel that they can put their hand up and report it without being unduly blamed? If people have a perception that the culture is about retribution and “pointing the finger,” that’s damaging to security culture.

Natalia: What’s the biggest mistake organizations make when trying to build and foster a security culture?

Jessica: To try to build a security culture that is not aligned with the business culture. One organization I worked with a few years ago was a very positive and people-centric healthcare organization. They were always seeking to say, “Yes,” to people in their wider organizational culture, but the security team was pushing a security culture that said, “No,” and was perceived as the “Department of No,” like many security teams. That’s a really common problem because the organizational culture will always win out, and if you try and bolt on a security culture that runs against the wider organization, it won’t work.

Often, the organizational culture of a company is not prepared to build a positive cybersecurity culture, and change requires patience. It’s a slow journey. That kind of client isn’t ready for a security culture assessment, so the work focuses on influencing the senior leadership to show them the importance of security culture. When organizations want a security culture assessment, that’s when they’re ready for it.

Natalia: How does the psychological well-being of the security team impact the security culture?

Jessica: At one organization, there was a lack of communication around security. The security team was so stressed, burnt-out, busy, and overworked that they didn’t have time to engage with their colleagues in the rest of the business. It led to the impression that the security team was not friendly or approachable, and it created a barrier to a positive security culture. Taking care of the well-being of the security function is fundamental.

To immediately improve the well-being of their team, managers can talk about the issues. If you’re comfortable doing so, this can include talking about your own mental well-being or acknowledging burnout stress and impostor syndrome. These are real issues in the industry, and it can be a relief for people to hear that they’re not alone and to have this safe space. It makes everyone feel more comfortable saying, “Hey, I need a day off for my mental health.” Mental health days are crucial in organizations, but leadership must show that they’re a priority.

Natalia: Besides an assessment, how can security teams improve their understanding of human risk?

Jessica: Behavioral economics, neuroscience, and psychology are all disciplines that can teach us about the human side of security and security culture. I’d recommend books like “Nudge” and “Thinking, Fast and Slow,” and the work of Tali Sharot, a neuroscientist whose work on the optimism bias is very relevant to security. There’s also a lot of great work being done in academia on security culture—papers and research that are advancing the field. It was interesting as well to see this year that Verizon did a shout-out to security culture for the first time in their Data Breach Investigations Report. Security culture is going more mainstream and is now higher up on the agenda in the security profession.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Practical tips on how to use application security testing and testing standards

October 5th, 2021

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Daniel Cuthbert, Global Head of Security Research at Banco Santander. Daniel discusses how to use application security testing and testing standards to improve security.

Natalia: What is an application security test and what does it entail?

Daniel: Let’s say I have a traditional legacy banking application. Users can sign in using their web browser to gain access to financial details or funds, move money around, and receive money. Normally, when you want an application assessment done for that type of application, you’re looking at the authentication and authorization processes, how the application architecture works, how it handles data, and how the user interacts with it. As applications have grown from a single application that interacts with a back-end database to microservices, all the ways that data is moved around and stored—and the processes—become more important.

Generally, an application test makes sure that at no point can somebody gain unauthorized access to data or somebody else’s money. And we want to make sure that an authorized user can’t impersonate another user, gain access to somebody else’s funds, or cause a system in the architecture to do something that the developers or engineers never expected to happen.

Natalia: What is the Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS), and how should organizations be using the standard?

Daniel: ASVS stands for Application Security Verification Standard1. The idea was to normalize how people conduct and receive application security tests. Prior to it, there was no methodology. There was a lot of ambiguity in the industry. You’d say, “I need an app test done,” and you’d hope that the company you chose had a methodology in place and the people doing the assessment were capable of following a methodology.

In reality, that wasn’t the case. It varied across various penetration test houses. Those receiving consultancy for penetration tests and application tests didn’t have a structured idea of what should be tested or what constituted a secure robust application. That’s where the ASVS comes in. Now you can say, “I need an application test done. I want a Level 2 assessment of this application.” The person receiving the test knows exactly what they’re expecting, and the person doing the test knows exactly what the client is expecting. It gets everybody on the same page, and that’s what we were missing before.

Natalia: How should companies prioritize and navigate the ASVS levels and controls?

Daniel: When they first look at the ASVS, many people get intimidated and overwhelmed. First, stay calm. The three levels are there as a guideline. Level 1 should be the absolute bare minimum. That’s the entrance to play if you’re putting an application on the Internet, and we try to design Level 1 to be capable of being automated. As far as tools to automate Level 1, OWASP Zed Attack Proxy (ZAP) is getting there. In 2021, an application should be at Level 2, especially if we take into consideration privacy. Level 3 is unique. Most people never need Level 3, which was designed for applications that are critical and have a strong need for security—where if it goes down, there’s a loss of life or massive impact. Level 3 is expensive and time-consuming, but you expect that if it’s, say, a power plant. You don’t want it to be quickly thrown together in a couple of hours.

With all the levels, you don’t have to go through every single control; this is where threat modeling comes in. If your application makes use of a back-end database, and you have microservices, you take the parts that you need from Level 2 and build your testing program. Many people think that you have to test every single control, but you don’t. You should customize it as much as you need.

Natalia: What’s the right cadence for conducting application security tests?

Daniel: The way we build applications has changed drastically. Ten years ago, a lot of people were doing the waterfall approach using functional specifications like, “I want to build a widget that sells shoes.” Great. Somebody gives them money and time. Developers go develop, and toward the end, they start going through functional user acceptance testing (UAT) and get somebody to do a penetration test. Worst mistake ever. In my experience, we’d go live on Monday, and the penetration test would happen the week before.

What we’ve seen with the adoption of agile is the shifting left of the software development lifecycle (SDLC). We’re starting to see people think about security not only as an add-on at the end but as part of the function. We expect the app to be secure, usable, and robust. We’re adopting security standards. We’re adopting the guardrails for our continuous integration and continuous delivery pipeline. That means developers write a function, check the code into Git, or whatever repository, and the code is checked that it’s robust, formatted correctly, and secure. In the industry, we’re moving away from relying on that final application test to constantly looking during the entire lifecycle for bugs, misconfigurations, or incorrectly used encryption or encoding.

Natalia: What common mistakes do companies make that impact the results of an application security assessment?

Daniel: The first one is companies not embracing the lovely world of threat modeling. A threat model can save you time and give you direction. When people bypass the fundamental stage of threat modeling, they’re burning cycles. If you adopt the threat model and say, “This is every single way some bad person is going to break our favorite widget tool,” then you can build upon that.

The second mistake is not understanding what all the components do. We no longer build applications that are a single web server, Internet Information Services (IIS) or NGINX, with the data stored in a database. It’s rare to see that today. Today’s applications are complex. Because multiple teams are responsible for individual parts of that process, they don’t all work together to understand simple things like the data flow. Where’s the data held? How does this application process that data? Often, everyone assumes the other team is doing it. This is a problem. Either the scrum master or product owner should own full visibility of the application, especially if it’s a large project. But it varies depending on the organization. We’re not in a mature enough stage yet for it to be a defined role.

Also, the gap between security and development is still too wide. Security didn’t make many friends. We were constantly belittling developers. I was part of that, and we were wrong. At the moment, we’re trying to bridge the two teams. We want developers to see that security is trying to help them.

We should be building a way for developers to be as creative and cool as we expect them to be while setting guardrails to stop common mistakes from appearing in the code pipeline. It’s very hard to write secure code, but we can embrace the fourth generation of continuous integration and continuous delivery (CI/CD). Check your code in; then do a series of tests. Make sure that at that point and at that commit, the code is as robust, secure, and proper as it should be.
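As one hedged sketch of such a guardrail, the script below fails a pipeline stage when a static security scan reports findings. It assumes bandit, an open-source Python security linter, is installed; the tool choice and pass/fail policy are illustrative assumptions, not something Daniel prescribes.

    import subprocess
    import sys

    def security_gate(src_dir: str = "src") -> int:
        # "bandit -r" recursively scans the source tree; it exits nonzero when
        # it reports findings, which blocks the commit or merge in CI.
        result = subprocess.run(["bandit", "-r", src_dir])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(security_gate())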

Natalia: How should the security team work with developers to protect against vulnerabilities?

Daniel: I don’t expect developers to understand all the latest vulnerabilities. That’s the role of the security or security engineering team. As teams mature, the security engineering or security team acts as the go-to bridge; they understand the vulnerabilities and how they’re exploited, and they translate that into how people are building code for their organization. They’re also looking at the various tools or processes that could be leveraged to stop those vulnerabilities from becoming an issue.

One of the really cool things that I’m starting to see with GitHub is GitHub insights. Let’s say there’s a large organization that has thousands of repositories. You’ll probably see a common pattern of vulnerabilities if you looked at all those repositories. We’re getting to the stage where we’re going to have a “Minority Report” style function for security.

On a monthly basis, I can say, “Show me the teams that are checking in bugs—let’s say deserialization.” I want to understand a problem before it becomes a major one and work with those teams to say, “Of the last 10 arguments, 4 of them have been flagged as vulnerable to deserialization bugs. Let’s sit down and understand how you’re building, what you’re building toward, and what frameworks you’re trying to adopt. Can we make better tools for you to protect against the vulnerability? Do you need to understand the vulnerability itself?” The tools, pipelines, and education are out there. We can start being that bridge.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

 


1OWASP Application Security Verification Standard, OWASP.

Cybersecurity’s next fight: How to protect employees from online harassment

August 25th, 2021

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Leigh Honeywell, CEO and Co-founder of Tall Poppy, which builds tools and services to help companies protect their employees from online harassment and abuse. In this blog, Leigh talks about company strategies for fighting online harassment.

Natalia: What are some examples of online harassment experienced in the workplace?

Leigh: Online harassment breaks down into two types. The first is harassment related to your job. One example of this would be that an ex-employee has a conflict with the company and is harassing former colleagues. In other cases, it has to do with a policy decision or a moderation decision that the company made, resulting in people within the organization experiencing harassment.

The other type of harassment has nothing to do with somebody’s day job. For instance, an employee had a bad breakup and their ex is bothering them at work. It’s not strictly related to the employee’s day-to-day work, but it’s going to impact their ability to be present at work and participate in work life. Many folks who are dealing with harassment—whether related to work or not—experience lost productivity, attrition, and burnout.

Discover how communication compliance in Microsoft 365 can help you detect harassing or threatening language and take action to protect your employees.

Natalia: How widespread of a problem is online harassment?

Leigh: Online harassment is a significant phenomenon. In 2020, 41 percent of Americans experienced it and 28 percent experienced the more severe kinds, like threats of violence, stalking, sexual harassment, and persistent harassment, according to the Pew Online Harassment Update1. That’s a huge number of people experiencing these issues. It has made us prioritize motivating people to improve their security hygiene around personal accounts.

Your employees’ personal accounts are part of the attack surface of the company. Social engineering attacks are when cybercriminals use psychological manipulation on their targets. If someone is being extorted based on their personal life, it has the potential to impact the company. In a classic CEO scam, somebody breaks into an executive’s personal email account, emails a person in accounting posing as the executive, and asks them to send a wire transfer to a bank account controlled by the scammer.

Natalia: What are recent trends in online harassment?

Leigh: According to the most recent Pew study, online harassment went up. Project Include just published a study2 on the internal company harassment landscape during COVID-19, and there has been a sharp uptick in workplace harassment.

Even though the numbers are stable in terms of how many people are experiencing online harassment, before COVID-19, if you were dealing with harassment from outside the company in the course of your work, you still got to go home and have that mental separation. When people work remotely, it’s a different experience, and it feels a lot more personal and vulnerable for those dealing with this kind of harassment.

Natalia: What should organizations understand about online harassment?

Leigh: It’s clear under US and Canadian law that organizations have a duty to ensure that employees don’t harass each other within the organization. When harassment in the workplace comes from outside the company, such as internet harassment, there isn’t a ton of clarity. I think it’s important to make sure that employees have clear policies and internal recourse.

In a typical harassment scenario, an employee says something controversial on Twitter, and people try to get them fired from their company. Sometimes, the things that people say that get them fired are racist or homophobic or biased in some way. When people talk about cancel culture, they are typically talking about consequences. You say something, and you get held to that word.

However, it’s hard to arbitrate. Is the controversial statement fireable, or is it controversial because they are members of an underrepresented group and are being targeted for standing up for themselves? That’s one of the lenses I use to unpack these situations.

Natalia: How can online harassment lead to hacking?

Leigh: After abuse on social channels and unwanted emails, online harassment sometimes gets more aggressive. You see password reset attempts that you have not requested. The next level is credential stuffing, where an attacker obtains a person’s email and password combo from old breaches and tries the credentials on different accounts. Another potential escalation is SIM swapping, which involves the attacker impersonating the victim to a phone company and porting their phone number away to a fresh SIM card. This attack usually targets folks who are high profile and is less common in stalking situations.

Natalia: What does the incident response process look like when an employee is under attack?

Leigh: When dealing with an urgent incident in a workplace, such as somebody hacking into a printer at a branch office, there are known playbooks for responding to different attacks. Likewise, we have different playbooks based on the type of harassment situation an employee is dealing with, for example, harassment by an ex-employee or an employee being targeted due to a company policy decision.

We also pay a lot of attention to the adversaries. We’ll typically make sure the person has safe devices and ensure the adversary does not have access to their personal accounts. We’ll walk them through changing relevant passwords and checking authorized applications. From there, it’s about making sure that the person is OK, and that includes making sure they know about internal resources like an employee assistance program for counseling services.

Natalia: What are the best practices a company can institute to mitigate online harassment or assist those impacted by it?

Leigh: First, have clear internal policies and escalation points around acceptable social media use. There are some industries where it’s understandable that you don’t want employees having a social media presence, but those are rare these days. In general, it’s not realistic to tell employees not to exist online in public, so what’s important is to make boundaries, expectations, and guardrails clear via a written social media policy. Employees want to have long-lived careers and build their personal brands—trying to shut that down wholesale will end up with unfair enforcement and isn’t realistic.

The second best practice is to make sure people have tools and resources available to secure their personal lives, whether it’s a hardware security key such as a Yubikey or a quality password manager. All those day-to-day tools are as important in the workplace as they are in people’s personal lives. Online harassment training teaches employees how to keep attackers out of their personal accounts such as email, bank accounts, and social media. It can be overwhelming trying to understand all the information available about staying safe online. And there’s an argument to be made that you shouldn’t have to become an expert on personal cybersecurity to be able to live your life with an internet presence in the modern world.

The third one would be to ensure there are available resources within the organization that are clear and accessible, so it’s understood where the escalation paths are—whether it’s providing training to management and having management communicate to frontline staff or using internal communications tools to inform employees of resources.

Helping employees improve their personal cybersecurity can help them feel confident that their personal digital infrastructure is secure and helps ensure that online harassment isn’t going to escalate to an incident like an account takeover.

Learn more

Learn how communication compliance in Microsoft 365 can help you detect harassing or threatening language and take action to foster a culture of safety.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

 


The post Cybersecurity’s next fight: How to protect employees from online harassment appeared first on Microsoft Security Blog.

How security can keep media and sources safe

August 10th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Runa Sandvik, an expert on journalistic security and the former Senior Director of Information Security at The New York Times. In this blog, Runa introduces the unique challenges and fundamentals of journalistic security.

Natalia: What is journalistic security? 

Runa: Being a reporter is not a 9-to-5 job. You’re not just a reporter when you step through the doors of The Washington Post or The Wall Street Journal or CNN. It becomes something that you do before work, at the office, at home, or after work at the bar. In some ways, you’re always on the job, so securing a journalist is about securing their life and identity. You’re not just securing the accounts and the systems that they’re using at work, which would fall under the enterprise; you’re securing the accounts and the systems that they use on a personal basis.

In addition, reporters travel. They cover protests and war zones. You will have to account for their physical and emotional safety. Journalistic security for me is effectively the umbrella term for digital security, physical security, and emotional safety.

Natalia: What is unique about securing a media organization?  

Runa: A media organization, whether it’s a smaller nonprofit newsroom or a larger enterprise, needs the same type of security tools and processes as any other organization. However, with a media organization, you must consider the impact. We’re not just talking about data belonging to the enterprise being encrypted or stolen and dumped online; we’re also talking about data from subscribers, readers, and sources. As a result, the potential ramifications of an attack against a media organization—whether it’s a targeted attack, like a nation-state actor looking for the sources of a story, or opportunistic ransomware—can be greater and involve far more people in a more sensitive context.

Privacy-preserving monitoring is also important for newsrooms. I believe in helping the journalist understand what’s happening on their devices. If we aren’t teaching them to threat model and think about the digital security risks of their stories and communications with sources, we’re going to have a gap.

The other major difference is the pace. Newsrooms are incredibly deadline-driven, and security’s job is to enable journalists to do their job safely, not block their work. If a journalist tells their security team that they’re going to North Korea and need a secure setup, the team needs to shift their to-do list around to accommodate that—whether it means providing training or new hardware.

Natalia: What’s the biggest challenge to securing a media organization? 

Runa: The one thing that continues to be a challenge for media organizations is the lack of trust and collaboration between the internal IT and security teams and the newsroom. The newsroom doesn’t necessarily trust or go to those departments for help or tools to secure reporters, their material, and their work. If you’re building a defensive posture, you can’t secure what you don’t understand. If you don’t have a good relationship with the newsroom or know what kind of work they do, you’re going to have gaps. I’ve found it helpful to involve the newsroom when making decisions around tools and processes that impact their work. Involving the newsroom in discussions that affect it, even technical ones, will do a lot to build a trusting relationship.

Natalia: How do you build a process to evaluate and mitigate risk?  

Runa: If you’re writing about the best chocolate chip cookies, you’re probably fine. You’re probably not going to run into any issues with sources or harassment. If you decide to report on politics though, chances are you’ll face the risk of online threats and harassment that could escalate to physical threats and harassment. The context for a specific project and story becomes a set of risks that need to be accounted for.

Typically, the physical risk assessment process has already been established. Newsrooms have been sending reporters on risky assignments, such as to war zones, for a long time. In most newsrooms, a reporter will talk to the editor and assess the risk of any work-related travel. They get input from their physical security adviser, legal, and HR.

Building a similar process for the digital space becomes a challenge of education and awareness. In some cases, newsrooms have established and documented well-functioning processes, and security teams can become part of that decision tree. In other cases, you must start by introducing yourself to the newsroom and making sure people know you’re there to help. I’ve talked with news organizations in the United States, United Kingdom, and Norway that have cross-functional teams with representatives from the newsroom, IT, security, HR, communications, and legal to ensure no stories fall through the cracks.

Natalia: What processes, protocols, or technologies do you use to protect journalists and their investigations?

Runa: In a newsroom, you typically have “desks.” You have the investigations desk. You have style. You have sports. Different desks will have different needs from a technology and education perspective. Whenever I’m talking to a newsroom, I try to first cover security basics. We’re talking passwords, multifactor authentication, updates, and phishing. I cover the baseline, then look at the kind of work each desk is doing to drill in more. For investigations, this could involve setting up a tool to receive tips from the public, or air-gapped (offline) machines to securely review information.

For international travel, it could involve establishing an internal process with the IT team so a journalist can quickly request a new laptop or a new phone. In many cases, the tools that end up being used are popular and well-known. The journalist usually must use the same tools as the source.

Making the security team available to the newsroom also goes a long way. Reporters know how to ask questions—whether they’re doing an interview or trying to understand how a password manager works, or how to use a YubiKey. Give them an opportunity to ask questions through an internal chat channel or weekly meetings. It all goes back to relationship building and awareness.

Natalia: How has working in journalistic security shaped your perspective on security? 

Runa: When I first started working for The Tor Project, which develops free and open-source software for online anonymity, I was curious about how it’s possible to use lines of code to achieve that. I didn’t think much about the people who use it or what they use it for. But through that work, I learned a lot about the global impact The Tor Project has: from activists and journalists to security researchers and law enforcement. In interacting with reporters, I had to accept that there’s a difference between the ideal setup from a security standpoint and what’s going to get the job done. It would be great to give everyone a laptop with Tails or Qubes OS configured, but are they going to be able to use it for their work? At what point do we say that we’ve found a happy middle between securing the data or systems, enabling the reporter, and accepting risk?

Natalia: How can we continue to enhance security in the newsroom?  

Runa: We need more of a focus on security attacks that target and impact media organizations and reporters. When you read about security attacks, the coverage typically highlights the industries affected. You’ll see references to government, education, and healthcare, but what about media?

If you’re working at a media organization trying to understand what kind of digital threats you’re facing, where do you go to find information? I would love to see an organization or individual build a resource with a timeline of the kind of digital attacks we’ve seen against media organizations in the United States from 2015 to 2021. This would be a way to get a pulse on what’s happening to educate journalists of the risks, identify impact and risk to operations, and inform leadership.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How security can keep media and sources safe appeared first on Microsoft Security Blog.

A guide to balancing external threats and insider risk

July 22nd, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Rockwell Automation Vice President and Chief Information Security Officer Dawn Cappelli. In this blog post, Dawn talks about the importance of including insider risk in your cybersecurity plan. 

Natalia: What is the biggest barrier that organizations face in addressing insider risk?

Dawn: The biggest barrier is drawing attention to insider risk. We heard about the ransomware group bringing down the Colonial Pipeline. We hear about ransomware attacks exposing organizations’ intellectual property (IP). We’re not hearing a lot about insider threats. We haven’t had a big insider threat case since Edward Snowden, which sometimes makes it hard to get buy-in for an insider risk program. But I guarantee insider threats are happening. Intellectual property is being stolen and systems are being sabotaged. The question is whether they are being detected—are companies looking?

Natalia: How do you assess the success of an insider risk program?

Dawn: First, we measure our success by significant cases. For instance, someone is leaving the company to go to a competitor; we catch them copying confidential information that they clearly want to take with them, and we get it back.

Second, we measure success by looking at the team’s productivity. Everyone in the company has a risk score based on suspicious or anomalous activity as well as contextual data, such as the fact that they are leaving the company. Every day we start at the top of the dashboard with the highest risk and work our way down. We look at how many cases have no findings because that means we’re wasting time, and we need to adjust our risk models to eliminate false positives.
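As a rough illustration of how such a dashboard can be driven (a hypothetical Python sketch, not Rockwell’s actual model; the signal names and weights are invented), each person’s score combines activity signals with contextual ones, and analysts triage from the top down:

```python
from dataclasses import dataclass

# Hypothetical signal weights; a real program would tune these against
# confirmed cases to drive down false positives.
WEIGHTS = {
    "resignation_filed": 40,       # contextual: employee is leaving
    "usb_mass_copy": 25,           # activity: bulk copy to removable media
    "bulk_download": 25,           # activity: unusual download volume
    "personal_cloud_upload": 15,   # activity: upload to personal storage
}

@dataclass
class RiskProfile:
    user: str
    signals: list

    @property
    def score(self) -> int:
        return sum(WEIGHTS.get(s, 0) for s in self.signals)

queue = [
    RiskProfile("user_a", ["resignation_filed", "usb_mass_copy"]),
    RiskProfile("user_b", ["bulk_download"]),
]

# Analysts work the dashboard from the highest score down.
for profile in sorted(queue, key=lambda p: p.score, reverse=True):
    print(profile.user, profile.score)
```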

We also look at the reduction in cases because we focus a lot on deterrence, communication, and awareness, as well as cases by business unit and by region. We run targeted campaigns and training for specific business units or types of employees, regions, or countries, and then look at whether those were effective in reducing the number of cases.

Natalia: How does measuring internal threats differ from measuring external threats?

Dawn: From an external risk perspective, you need to do the same thing—see if your external controls are working and if they’re blocking significant threats. Our Computer Security Incident Response Team (CSIRT) also looks at the time to contain and the time to remediate. We should also measure how long it takes to respond to and recover IP taken by insiders.

By the way, I like using the term “insider risk” instead of “insider threat” because we find that most suspicious insider activity we detect and respond to is not intentionally malicious. Especially during COVID-19, we see more employees who are concerned about backing up their computer, so they pull out their personal hard drive and use it to make a backup. They don’t have malicious intent, but we still must remediate the risk. Next week they could be recruited by a competitor, and we can’t take the chance that they happen to have a copy of our confidential information on a removable media device or in personal cloud storage.

Natalia: How do you balance protecting against external threats and managing insider risks?

Dawn: You need to consider both. You should be doing threat modeling for external threats and insider risks and prioritizing your security controls accordingly. An insider can do anything an external attacker can do. There was a case in the media recently where someone tried to bribe an insider to plug in an infected USB drive to get malware onto the company’s network or open an infected attachment in an email to spread the malware. An external attacker can get in and do what they want much more easily through an insider.

We use the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) for our security program, and we use it to design a holistic security program that encompasses both external and insider security risks. For example, we identify our critical assets and who should have access to them, including insiders and third parties. We protect those assets from unauthorized access—including insiders and outsiders. We detect anomalous or suspicious behavior from insiders and outsiders. We respond to all incidents and recover when necessary. We have different processes, teams, and technologies for our insider risk program, but we also use many of the same tools as the CSIRT, like our Security Information and Event Management (SIEM) and Microsoft Office 365 tools.
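One illustrative way to operationalize that holistic framing is a simple coverage gap report. In the hypothetical Python sketch below, the control names and coverage tags are invented; the point is that every CSF function should have at least one control covering each threat origin, insider and external alike:

```python
CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Hypothetical control catalog; "covers" tags the threat origins addressed.
controls = [
    {"name": "asset inventory",        "function": "Identify", "covers": {"insider", "external"}},
    {"name": "least-privilege access", "function": "Protect",  "covers": {"insider", "external"}},
    {"name": "SIEM anomaly rules",     "function": "Detect",   "covers": {"insider", "external"}},
    {"name": "CSIRT runbooks",         "function": "Respond",  "covers": {"external"}},
]

# Flag any CSF function with no control for a given threat origin.
for fn in CSF_FUNCTIONS:
    for origin in ("insider", "external"):
        covered = any(c["function"] == fn and origin in c["covers"] for c in controls)
        if not covered:
            print(f"GAP: no {origin} coverage for CSF function '{fn}'")
```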

Natalia: What best practices would you recommend for data governance and information protection?

Dawn: Don’t think about insider threats only from an IP perspective. There’s also the threat of insider cyber sabotage, which means you need to detect and respond to activities like insiders downloading hacking tools or sabotaging your product source code.

Think about it: an external attacker has to get into the network, figure out where the development environment is, get the access they need to compromise the source code or development environment, plant the malicious code or backdoor into the product—all without being detected. It would be a lot easier for an insider to do that because they know where the development environment is, they have access to it, and they know how the development processes work.

When considering threat types, I wouldn’t say that you need to focus more on cyber sabotage than IP; you need to focus on them equally. The mitigations and detections are different for IP theft versus sabotage. For theft of IP, we’re not looking for people trying to download malware, but for sabotage, we are. The response processes are also different depending on the threat vector.

Natalia: Who needs to be involved in managing and reducing insider risk, and how?

Dawn: You need an owner for your insider risk program, and in my opinion, that should be the Chief Information Security Officer (CISO). HR is a key member of the virtual insider risk team because happy people don’t typically commit sabotage; it’s employees who are angry and upset, and they tend to come to the attention of HR. Every person in Rockwell HR takes mandatory insider risk training every year, so they know the behaviors to look for.

Legal is another critical member of the team. We can’t randomly take people’s computers and do forensics for no good reason, especially in light of all the privacy regulations around the world. The insider risk investigations team is in our legal department and works with legal, HR, and managers. For any case involving personal information and any case in Europe, we go to our Chief Privacy Officer and make sure that we’re adhering to all the privacy laws. In some countries, we also have to go to the Works Council and let them know we’re investigating an employee. The security team is responsible for all the controls, both preventive and detective, as well as the technology and risk models.

Natalia: What’s next in the world of data regulation?

Dawn: Privacy is the biggest issue. The Works Councils in Europe are becoming stronger and more diligent. They are protecting the privacy of their fellow employees, and the privacy review processes make the deployment of monitoring technology more challenging.

In the current cyber threat environment, we must figure out how to get security and privacy to work together. My advice to companies operating in Europe is to go to the Works Councils as soon as you’re thinking about purchasing new technology. Make them part of the process and be totally transparent with them. Don’t wait until you’re ready to deploy.

Natalia: How will advancements like cloud computing and AI change the risk landscape?

Dawn: We have a cloud environment, and our employees are using it to develop products. From inception, the insider risk team worked to ensure that we’re always threat modeling the environment. We go through the entire NIST CSF for that cloud environment and look at it from both an external and insider risk perspective.

Companies use empirical, objective data to create and train AI models for their products. The question becomes, “Do you have controls to identify an insider who deliberately wants to bias your models or put something malicious into your AI models to make them go off course later?” With any type of threat, ask if an insider could facilitate this type of attack. An insider can do anything an outsider can do, and they can do it much more easily.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post A guide to balancing external threats and insider risk appeared first on Microsoft Security Blog.

How to build a privacy program the right way

July 7th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with attorney Whitney Merrill, an expert on privacy legal issues and Data Protection Officer and Privacy Counsel at Asana. The thoughts below reflect her views, not the views of her employer, and are not legal advice. In this blog, Whitney talks about building a privacy program and offers best practices for privacy training.

Natalia: How do security, privacy, and regulatory compliance intersect?

Whitney: Security and privacy are closely related but not the same. Privacy is not possible without security. In the last 5 to 10 years, regulations in privacy and security have taken very different paths. Most regulations across the world fall to a standard of reasonable security, whereas privacy is much more prescriptive about the types of behaviors or rights that individuals can exercise from a compliance perspective. Companies look to common security frameworks like ISO 27001 or SOC 2, but privacy doesn’t really have that. That’s born from the fact that security feels very black and white. You can secure something, or you can’t.

In privacy, however, there’s a spectrum of beliefs about how data can be used. It’s much more grey. There were attempts in the early 2010s with Do Not Track, the proposed HTTP header field that let internet users opt out of website tracking. That fell apart. Privacy and regulatory compliance have diverged, and much of it is because of fundamental disagreements between the ad industry and privacy professionals. You see this with cookie banners in the European Union (EU). They’re not a great user experience, and people don’t love interacting with them. They exist because there have been enough regulations like the Electronic Privacy Directive and General Data Protection Regulation (GDPR) that essentially require those types of banners.

Natalia: Who should be involved in privacy, and what role should they play?

Whitney: It’s very important to get privacy buy-in from the highest levels of the company. Not only do you have an obligation under GDPR to have a Data Protection Officer that reports to the highest levels of a company if you’re processing European data, but an open dialogue with leadership about privacy will help establish company cultural values around the processing of data. Are you a company that sells data? How much control will your users and customers have over their data? How granular should those controls be? Do you collect sensitive data (like health or financial data), or is that something that you want to ban on your platform?

The sooner you get buy-in from leadership and the sooner you build privacy into your tools, the easier it’s going to be in the long run. It doesn’t have to be perfect, but a good foundation will be easier to build upon in the future. I’d also love to see the venture capital community incentivizing startups and smaller companies to care about privacy and security as opposed to just focusing on growth. It’s apparent that startups aren’t implementing the privacy lessons learned by other companies that have already seen privacy enforcement from a privacy regulator. As a result, the same privacy issues pop up over and over. Obviously, regulators will play a role. In addition to enforcement, education and guidance from regulators are vital to helping companies build privacy by design into their platforms.

Natalia: What does a privacy attack look like, and which attacks should companies pay attention to?

Whitney: A privacy attack can look very similar to a security attack. A data breach, for instance, is a privacy attack: it leaks confidential information. A European regulator recently called a privacy bug a breach. In that particular case, a bug in the software made information public that the user had marked as private. Folks generally associate data breaches with an attacker, but accidental disclosures or privacy bugs can often cause data breaches. I’ve talked with folks who say, “Wow, I never thought of that as a security breach,” which is why it’s important to engage your legal team when major privacy or security issues pop up. You might have regulatory reporting obligations that aren’t immediately apparent.

Other privacy attacks aren’t necessarily data breaches. Privacy attacks can also include attempts to deanonymize data sets, or they might be privacy bugs that use or collect data in a way that is unanticipated by the user. You might design a feature to only collect a certain type of data when in reality, it’s collecting much more data than was intended or disclosed in a privacy notice.
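Deanonymization risk, in particular, can be measured. A common yardstick is k-anonymity: records that share the same quasi-identifiers form an equivalence class, and any class smaller than k is easy to re-identify. A minimal Python sketch, with invented field names and data:

```python
from collections import Counter

# Illustrative "de-identified" records; names were removed, but these
# quasi-identifiers may still single a person out in combination.
records = [
    {"zip": "98101", "birth_year": 1985, "gender": "F"},
    {"zip": "98101", "birth_year": 1985, "gender": "F"},
    {"zip": "98102", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def k_anonymity(rows) -> int:
    """Size of the smallest equivalence class over the quasi-identifiers."""
    classes = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return min(classes.values())

print(k_anonymity(records))  # 1 -> at least one record is uniquely identifiable
```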

On the more adversarial side of privacy attacks, an attacker could try to leverage weaknesses and processes around privacy rights to access personal information or erase somebody’s account. An attacker could use the information they find out about an individual online to try to get more information about that individual via a data subject rights process (like the right to get access to your data under global privacy laws). There were a few cases of this after the GDPR went into effect. An attacker used leaked credentials to a user’s account to download all of the data that the service had about that individual. As such, it’s important to properly verify the individual making the request, and if necessary, build in additional checks to prevent accidental disclosure.
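A hypothetical sketch of that extra verification step might look like the following: before a data subject access request is fulfilled, the service requires both a fresh re-authentication and a confirmation token delivered out of band, so leaked credentials alone cannot be used to export an account’s data. The function names here are illustrative, not any real product’s API.

```python
import secrets

pending_requests = {}  # request_id -> (user_id, expected_token)

def start_dsar(user_id: str, send_email) -> str:
    """Open a data subject access request and send a confirmation token out of band."""
    token = secrets.token_urlsafe(16)
    request_id = secrets.token_hex(8)
    pending_requests[request_id] = (user_id, token)
    send_email(user_id, f"Confirm your data export with token: {token}")
    return request_id

def fulfill_dsar(request_id: str, presented_token: str, recently_reauthenticated: bool) -> str:
    user_id, expected = pending_requests.get(request_id, (None, None))
    if not recently_reauthenticated:
        raise PermissionError("Fresh re-authentication required")
    # Constant-time comparison avoids leaking the token via timing.
    if expected is None or not secrets.compare_digest(presented_token, expected):
        raise PermissionError("Invalid confirmation token")
    del pending_requests[request_id]
    return f"export for {user_id}"  # hand off to the actual export job
```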

Natalia: How should a company track accidental misuse of someone’s information or preferences?

Whitney: It’s very hard. This is where training, culture, and communication are really important and valuable. Misuse of data is unfortunately common. If a company is collecting personal data for a security feature like multifactor authentication, they should not also use that phone number for marketing and advertising purposes. That goes beyond the original scope and is a misuse of that phone number. To prevent this, you need to think about security controls. Who has access to the data? When do they have access to the data? How do you document and track access to the data? How do you audit those behaviors? That’s where security and privacy deeply overlap because if you get alignment there, it’s going to be a lot easier to manage the misuse of data.
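Those overlapping controls can be expressed directly in code. The sketch below (illustrative Python; the field and purpose names are invented) tags personal data with the purposes it was collected for, logs every access attempt for audit, and refuses any access for an undeclared purpose, such as the MFA phone number example above:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("data-access-audit")

# Purposes each personal-data field was collected for (hypothetical).
ALLOWED_PURPOSES = {"mfa_phone_number": {"authentication"}}

def requires_purpose(field: str):
    """Decorator: audit every access and enforce purpose limitation."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, purpose: str, **kwargs):
            # Log the attempt first so denied accesses are audited too.
            audit.info("access field=%s purpose=%s fn=%s", field, purpose, fn.__name__)
            if purpose not in ALLOWED_PURPOSES.get(field, set()):
                raise PermissionError(f"{field} was not collected for {purpose!r}")
            return fn(*args, purpose=purpose, **kwargs)
        return wrapper
    return decorator

@requires_purpose("mfa_phone_number")
def get_phone_number(user_id: str, *, purpose: str) -> str:
    return "+1-555-0100"  # stand-in for a database lookup

get_phone_number("u1", purpose="authentication")   # allowed, audited
# get_phone_number("u1", purpose="marketing")      # raises PermissionError
```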

It’s also a good idea to be transparent about incidents when they occur because it builds trust. Of course, companies should work closely with their legal and PR teams when deciding to publicly discuss incidents, but when I see a news article about a company disclosing that they had an incident and then see a detailed breakdown of that incident from the company (how they investigated and fixed the issue), I usually think, “Thanks for telling me. I know you were not necessarily legally required to disclose that. But I trust you more now because I now know that you’re going to let me know the next time something happens, especially something that could be perceived as worse.” Privacy isn’t just about complying with the law. It’s about building trust with your users so they understand what’s happening with their data.

Natalia: What are best practices for implementing a privacy program?

Whitney: When you build a privacy program, look at the culture of the company. What are its values, and how do you link privacy to those values? It’s going to vary from company to company. The values of a company with a business model based on the use or sale of data are going to be different than a company that sells hardware and doesn’t need to collect data as its main source of revenue.

It’s easy for companies to look at new privacy laws–like GDPR and the California Consumer Privacy Act (CCPA)–and say, “Let’s just do that,” without thinking through the broader implications. That’s the wrong approach. Yes, you want to comply with privacy laws, but compliance does not equal security or privacy. If you’re constantly reactive to only what privacy law requires, you’ll tire out quickly because it’s changing and growing rapidly. Privacy is the future. Instead, think more holistically and proactively when it comes to privacy. Instead of rolling out a process to comply with only one region and one law, consider rolling it out for all users in all regions, so when a new region implements a similar law or regulation, you’ll be most of the way there. Just because you’re compliant with GDPR doesn’t mean you’re a privacy-focused company or that you process information in the most privacy-centric way. But you’re moving in that direction, and you can build on that foundation.

Another best practice is to find champions across the company who support privacy efforts. If you don’t have a dedicated privacy resource, that doesn’t mean you can’t build a culture of privacy within your company. Work with privacy-minded employees to seek out the easy privacy wins, such as making sure your privacy policy is up to date and reflective of your practices. Focus on those to build support around privacy within the company.

Putting my former regulator hat on, privacy culture is important. When the Federal Trade Commission (FTC) comes knocking at your door, they’re looking to see if you have the right intentions and are trying to do your best, not just whether you prescriptively failed to do one thing that you should have done. They look at the size of the company and its maturity, resources, and business model when determining how they’ll enforce against that company. Showing that you care isn’t necessarily going to fix your problems, but it will definitely help.

Natalia: How should companies train employees on privacy issues?

Whitney: Training should happen regularly. However, not all training needs to be really detailed or cover the same material—shake it up. The aim of training employees on privacy issues is to cultivate a culture of privacy. For example, when employees onboard, they’re new and excited about joining a new company. They’re not going to remember everything so keep privacy training high-level. Focus on the cultural side of privacy so they get an idea of how to think about privacy in their role. From there, give them the resources to empower themselves to learn more about privacy (like articles and additional training). Annual training is a good way to remind people of the basics, but there are many people who are going to tune those out, so make them funny and engaging if you can. I love using memes, funny themes, or recent events to help draw the audience in.

As the privacy program matures, I recommend creating a training program that fits each team and their level of data access or most commonly used tools. For example, some customer service teams have access to user data and the ability to help users in a way that other teams may not, so training should be tailored to address their specific personal data access and tooling abilities. They may also be more likely to record calls for quality and training purposes, so training around global call recording laws and requirements may be relevant. The more you target training toward specific tools and use cases, the better it’s going to be because the employee can better understand how that training relates to their everyday work.

Natalia: What encryption strategies can companies implement to strengthen privacy?

Whitney: Encrypt your databases at rest. Encrypt data in transit. It is no longer acceptable to have an S3 bucket or a database that is not encrypted at rest, especially if that system stores personal data. At the moment, enterprise key management (EKM) is a popular data protection feature involving encryption. EKM gives a company the ability to manage the encryption key for the service that they are using. For instance, a company using Microsoft services may want to control that key so that they have ownership over who can access the data, rotate the key, or delete the key so no one can access the data ever again.
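As a toy stand-in for what an EKM service does, the following sketch uses the Python cryptography package’s Fernet and MultiFernet primitives to show key rotation and the crypto-shredding effect of deleting a key. A real EKM offering would hold the keys in a hardware-backed service rather than in process memory; this only illustrates the mechanics.

```python
from cryptography.fernet import Fernet, MultiFernet  # pip install cryptography

old_key = Fernet(Fernet.generate_key())
new_key = Fernet(Fernet.generate_key())

token = old_key.encrypt(b"subscriber personal data")

# Rotation: the first key in the ring is used for new encryptions; older
# keys remain only to decrypt and re-encrypt existing ciphertexts.
ring = MultiFernet([new_key, old_key])
rotated = ring.rotate(token)

# After rotation, the old key can be destroyed; deleting the new key too
# would render the data unreadable forever (crypto-shredding).
print(MultiFernet([new_key]).decrypt(rotated))
```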

The popularity of EKM is driven by trends in security and Schrems II, which was a major decision from the Court of Justice of the European Union last summer. This decision ruled Privacy Shield, the safe harbor for data transfers from the EU to the United States, invalid for not adequately protecting personal data. Subsequently, the European Data Protection Board (EDPB) issued guidance advising data be encrypted before being transferred to help secure personal data when transferred to a region that might present risks. Encryption is vital when talking about and implementing data protection and will continue to be in the future.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How to build a privacy program the right way appeared first on Microsoft Security Blog.

Strategies, tools, and frameworks for building an effective threat intelligence team

June 22nd, 2021 No comments

How to think about building a threat intelligence program

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Red Canary Director of Intelligence Katie Nickels, a certified instructor with the SANS Institute. In this blog, Katie shares strategies, tools, and frameworks for building an effective threat intelligence team.

Natalia: Where should cyber threat intelligence (CTI) teams start?

Katie: Threat intelligence is all about helping organizations make decisions and understand what matters and what doesn’t. Many intelligence teams start with tools or an indicator feed that they don’t really need. My recommendation is to listen to potential consumers of the intel team, understand the problems they are facing, and convert their challenges into requirements. If you have security operations center (SOC) analysts, talk to them about their pain points. They may have a flood of alerts and don’t know which ones are the most important. Talk to systems administrators who don’t know what to do when something big happens. It could be as simple as helping an administrator understand important vulnerabilities.

The intel team can then determine how to achieve those requirements. They may need a way to track tactics, techniques, and procedures (TTPs) and threat indicators, so they decide to get a threat intelligence platform. Or maybe they need endpoint collection to understand what adversaries are doing in their networks. They may decide they need a framework or a model to help organize those adversary behaviors. Starting with the requirements and asking what problems the team needs to solve is key to figuring out how to make a big impact.
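Even before buying a platform, that tracking requirement can start as structured records. A minimal Python sketch (the indicator values and sources are invented; the technique IDs follow MITRE ATT&CK’s numbering):

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    value: str                                    # e.g., a domain or file hash
    ioc_type: str                                 # "domain", "sha256", ...
    techniques: set = field(default_factory=set)  # ATT&CK technique IDs
    source: str = ""

intel = [
    Indicator("badcdn.example", "domain", {"T1071.001"}, source="sandbox run"),
    Indicator("deadbeef" * 8, "sha256", {"T1059.001"}, source="researcher thread"),
]

# Answer a SOC question: which indicators map to a technique we just observed?
seen = "T1059.001"
print([i.value for i in intel if seen in i.techniques])
```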

Also, threat intel analysts must be selfless people. We produce intelligence for others, so setting requirements is more about listening than telling.

Natalia: What should security teams consider when selecting threat intelligence tools?

Katie: I always joke that one of the best CTI tools of all time is a spreadsheet. Of course, spreadsheets have limitations. Many organizations will use a threat intelligence platform, either free, open-source software, like MISP, or a commercial option.

For tooling, CTI analysts need a way to pull on all these threads. I recommend that organizations start with free tools. Twitter is an amazing source of threat intelligence. There are researchers who track malware families like Qbot and get amazing intelligence just by following hashtags on Twitter. There are great free resources, like online sandboxes. VirusTotal has a free version and a paid version.

As teams grow, they may get to a level where they have tried the free tools and are hitting a wall. There are commercial tools that provide a lot of value because they can collect domain information for many years. There are commercial services that let you look at passive Domain Name System (DNS) information or WHOIS information so you can pivot. This can help teams correlate and build out what they know about threats. Maltego has a free version of a graphing and link analysis tool that can be useful.
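As one example of such an enrichment pivot, the sketch below queries VirusTotal’s v3 REST API for a domain report. The endpoint and x-apikey header follow VT’s published documentation, but response fields should be verified against the current docs; an API key is assumed in the VT_API_KEY environment variable.

```python
import os

import requests  # third-party; pip install requests

API_KEY = os.environ["VT_API_KEY"]  # free-tier VirusTotal key

def domain_report(domain: str) -> dict:
    """Fetch VirusTotal's report for a domain (rate limiting omitted)."""
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/domains/{domain}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

report = domain_report("example.com")
# Summary of engine verdicts; .get() in case the field name changes.
print(report["data"]["attributes"].get("last_analysis_stats"))
```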

Natalia: How should threat intelligence teams select a framework? Which ones should they consider?

Katie: The big three frameworks are the Lockheed Martin Cyber Kill Chain®, the Diamond Model, and MITRE ATT&CK. If there’s a fourth, I would add VERIS, which is the framework that Verizon uses for their annual Data Breach Investigations Report. I often get asked which framework is the best, and my favorite answer as an analyst is always, “It depends on what you’re trying to accomplish.”

The Diamond Model offers an amazing way for analysts to cluster activity together. It’s very simple and covers the four parts of an intrusion event. For example, if we see an adversary today using a specific malware family plus a specific domain pattern, and then we see that combination next week, the Diamond Model can help us realize those look similar. The Kill Chain framework is great for communicating how far an incident has gotten. We just saw reconnaissance or an initial phish, but did the adversary take any actions on objectives? MITRE ATT&CK is really useful if you’re trying to track down to the TTP level. What are the behaviors an adversary is using? You can also incorporate these different frameworks.
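The Diamond Model clustering Katie describes can be approximated with a very small heuristic: treat two events as related when they share at least two diamond features. A hypothetical Python sketch with invented event data:

```python
# Each event records whichever diamond features were observed:
# adversary, capability, infrastructure, victim.
events = [
    {"capability": "QbotLoader", "infrastructure": "cdn-*.example", "victim": "finance"},
    {"capability": "QbotLoader", "infrastructure": "cdn-*.example", "victim": "retail"},
    {"capability": "OtherRAT",   "infrastructure": "198.51.100.7",  "victim": "finance"},
]

def related(a: dict, b: dict, threshold: int = 2) -> bool:
    """Cluster events that share at least `threshold` diamond features."""
    shared = sum(
        1
        for k in ("adversary", "capability", "infrastructure", "victim")
        if k in a and k in b and a[k] == b[k]
    )
    return shared >= threshold

print(related(events[0], events[1]))  # True: same capability + infrastructure
print(related(events[0], events[2]))  # False: only the victim sector matches
```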

Natalia: How do you design a threat model?

Katie: There are very formal software engineering approaches to threat modeling, in which you think of possible threats to software and how to design it securely. My approach is, let’s simplify it. Threat modeling is the intersection of what an organization has that an adversary might target. A customer might say to us, “We’re really worried about the Lazarus Group and North Korean threats.” We’d say, “You’re a small coffee shop in the middle of the country, and that threat might not be the most important to you based on what we’ve seen this group do in the past. I think a more relevant threat for you is probably ransomware.” Ransomware is far worse than anyone expected; it can affect almost every organization, and big and small organizations are hit alike.

If teams focus on all threats, they’re going to get burnt out. Instead, ask, “What does our organization have that adversaries might want?” When prioritizing threats, talking to your peers is a great place to start. There’s a wealth of information out there. If you’re a financial company, go talk to other financial companies. One thing I love about this community is that most people, even if they’re competitors, are willing to share. Also, realize that people in security operations, who aren’t necessarily named threat intel analysts, still do intelligence. You don’t have to have a threat intel team to do threat intel.

Natalia: What is the future of threat intelligence?

Katie: Cyber threat intelligence has been around for maybe a few decades, but in the scope of history, that’s a very short time. With frameworks like ATT&CK or the Diamond Model, we’re starting to see a little more formalization. I hope that builds, and there’s more professionalization of the industry with standards for what practices we do and don’t do. For example, if you’re putting out an analysis, here are the things that you should consider. There’s no standard way we communicate except for those few frameworks like ATT&CK. When there are standards, it’s much easier for people to trust what’s coming out of an industry.

My other hope is that we improve the tooling and automation to help support human analysts. I’m often asked, “How can threat intel be automated?” Threat intelligence is fundamentally a human discipline. It requires humans to make sense of complex and disparate information. There’s always going to be a human element of threat intelligence, but I hope we can do better as an industry in figuring out what tools can make analysts powerful and support the decisions that security teams have to make.

Learn more

To learn more about Katie, follow her on @likethecoins, and for more details on Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Strategies, tools, and frameworks for building an effective threat intelligence team appeared first on Microsoft Security Blog.

How purple teams can embrace hacker culture to improve security

June 10th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Matthew Hickey, co-founder, CEO, and writer for Hacker House. In this blog post, Matthew talks about the benefits of a purple team and offers best practices for building a successful one.

Natalia: What is a purple team, and how does it bridge red and blue teams?

Matthew: The traditional roles involve a blue team that acts as your defenders and a red team that acts as your attackers. The blue team wants to protect the network. The red team works to breach the network. They want to highlight the security shortcomings of the blue team’s defenses. The two teams aren’t always working on the same objective to secure information assets and eliminate information risk, as each is focused on the objective of their respective team—one to prevent breaches, the other to succeed in a breach.

Purple teaming is an amalgamation of the blue and red teams into a single team to provide value to the business. With a successful purple team, two groups of people normally working on opposite ends of the table are collaborating on a unified goal—improving cybersecurity together. It can remove a lot of competitiveness from security testing processes. Purple teams can replace red and blue teams, and they’re more cost-effective for smaller organizations. If you’re a big conglomerate, you might want to consider having a blue team, a red team, and a purple team. Purple teams work on both improving knowledge of the attacks an organization faces and building better defenses to defeat them.

Natalia: Why do companies need purple teams?

Matthew: Computer hacking has become much more accessible. If one clever person on the internet writes and shares an exploit or tool, everyone else can download the program and use it. It doesn’t have a high barrier of entry. There are high school kids pulling off SQL injection attacks and wiping millions from a company valuation. Because hacking information is more widely disseminated, it’s also more accessible to the people defending systems. There have also been significant improvements in how we understand attacker behavior and model those behaviors. The MITRE ATT&CK framework, for instance, is leveraged by most red teams to simulate attackers’ behavior and how they operate.

When red and blue teams work together as a purple team, they can perform assessments in a fashion similar to unit tests against frameworks, like MITRE ATT&CK, and use those insights on attacker behavior to identify gaps in the network and build better defenses around critical assets. By adopting the attackers’ techniques and working with the system to build more comprehensive assessments, you gain advantages your attacker does not have. Those advantages come from your business intelligence and people.
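Taken literally, such an assessment can be written as an actual unit test. The hypothetical sketch below emulates a benign stand-in for ATT&CK technique T1059 (Command and Scripting Interpreter) and asserts that an alert was raised; fetch_alerts is a placeholder for whatever SIEM query an environment actually exposes:

```python
import subprocess
import sys
import unittest

def fetch_alerts(technique_id: str) -> list:
    # Placeholder: a real harness would query the SIEM for alerts tagged
    # with this ATT&CK technique ID after the emulation ran.
    return [{"technique": technique_id, "host": "lab-vm-01"}]

class TestT1059Detection(unittest.TestCase):
    def test_command_execution_is_detected(self):
        # Benign emulation: spawn an interpreter the way an attacker might.
        subprocess.run(
            [sys.executable, "-c", "print('purple-team test marker')"],
            check=True,
        )
        self.assertTrue(fetch_alerts("T1059"), "no alert fired for T1059 emulation")

if __name__ == "__main__":
    unittest.main()
```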

Natalia: What are the benefits of bringing everything under one team?

Matthew: The benefits of a purple team include speed and cost reduction. Purple teams are typically constructed as an internal resource, which can reduce the need to reach out to external experts for advice. If they get alerts in their email, purple teams can wade through them and say, “Oh, this is a priority because attackers are going to exploit this quickly since there’s public exploit code available. We need to fix this.” Unit testing specific attacker behaviors and capabilities against frameworks on an ongoing basis, as opposed to performing periodic, full-blown simulated engagements that last several weeks to several months, is also a huge time reduction for many companies.
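That triage logic is easy to sketch: findings with public exploit code outrank raw severity alone. The CVE identifiers and scores below are invented placeholders, not real advisories:

```python
findings = [
    {"cve": "CVE-0000-1111", "cvss": 9.1, "public_exploit": False},
    {"cve": "CVE-0000-2222", "cvss": 7.5, "public_exploit": True},
    {"cve": "CVE-0000-3333", "cvss": 5.0, "public_exploit": False},
]

# Sort so that exploited-in-the-wild findings come first, then by severity.
for f in sorted(findings, key=lambda f: (f["public_exploit"], f["cvss"]), reverse=True):
    urgency = "FIX NOW" if f["public_exploit"] else "schedule"
    print(f"{f['cve']}: cvss={f['cvss']} -> {urgency}")
```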

Red teams can often be sidetracked by wanting to build the best phishing attack. Blue teams want to make sure their controls are working correctly. They’ll achieve much more in a shorter timeframe as a purple team because they are more transparent with one another, sharing their expertise and understanding of the threats. You’ll still need to occasionally delve into the world of a simulated, scenario-driven exercise where one team is kept in the dark to ensure processes and practices are effective.

Natalia: How do purple teams provide security assurance?

Matthew: Cybersecurity assurance is the process of understanding what the information risk is to a business—its servers, applications, or any supporting IT infrastructure. Assurance work is essentially demonstrating whether a system has a level of security or risk management that is comfortable to an organization. No system in the world is 100 percent infallible. There’s always going to be an attack you weren’t expecting. The assurance process is meant to make attacks more complex and costly for an attacker to pull off. Many attackers are opportunistic and will frequently move onto an easier target when they encounter resistance, and strong resistance comes from purple teams. Purple teams are used to provide a level of assurance that what you’ve built is resilient enough to withstand modern network threats by increasing the visibility and insights shared among typically siloed teams.

Natalia: What are best practices for building a successful purple team?

Matthew: You don’t need to be an expert on zero-day exploitation or the world’s best programmer, but you should have a core competency of cybersecurity and an understanding of foundational basics like how an attacker behaves, how a penetration test is structured, which tools are used for what, and how to review a firewall or event log. Purple teams should be able to review malware and understand its objectives, review exploits to understand their impact, and make use of tools like nmap and mitmproxy to scan for vulnerabilities. They also should understand how to interpret event logs and translate the attack side of hacking into defenses like firewall rules and policy enforcement. People come to me and say, “I didn’t know why we were building firewalls around these critical information assets until I saw somebody exploit a PostgreSQL server and get a root shell on it, and suddenly, it all made sense why I might need to block outgoing internet control message protocol (ICMP).”
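That translation from attack-side observation to defense is worth showing end to end. In the illustrative Python sketch below (the log format and threshold are invented), failed-login events are counted per source address and brute-force sources are turned into firewall block rules:

```python
import re
from collections import Counter

log_lines = [
    "Jan 10 10:00:01 host sshd[1]: Failed password for root from 203.0.113.9",
    "Jan 10 10:00:02 host sshd[1]: Failed password for root from 203.0.113.9",
    "Jan 10 10:00:03 host sshd[1]: Failed password for admin from 203.0.113.9",
    "Jan 10 10:05:00 host sshd[2]: Accepted password for alice from 192.0.2.10",
]

# Count failed-login attempts per source IP address.
failures = Counter(
    m.group(1)
    for line in log_lines
    if (m := re.search(r"Failed password .* from (\S+)", line))
)

# Translate the attack pattern into a defensive firewall rule.
for ip, count in failures.items():
    if count >= 3:  # naive brute-force threshold
        print(f"iptables -A INPUT -s {ip} -j DROP  # {count} failed logins")
```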

Hiring hackers to join your purple team used to be taboo, yet hackers often make excellent defenders. Embrace hacking because it’s a problem-solving mentality. The information is out there, and your attackers already know it. You might as well know it too, so hire hackers. I’ve heard people say hackers are the immune system for the internet when describing how their behavior can be beneficial. Hackers are following what’s going on out there and are going to be the people who see an attack and say, “We use Jenkins for our production build. We better get that patched because this new 9.8 CVSS scoring vulnerability came out two hours ago. Attackers are going to be on this really quickly.” Breaking into computers is done step by step; it’s a logical process. Attackers find a weakness in the armor. They find another weakness in the armor. They combine those two. They get access to some source code. They get some credentials from a system. They hop onto the next system. Once you understand the workflow of what your attacker is doing, you get better at knowing which systems will need host intrusion detection and enhanced monitoring, and the reasons why. Hackers are the ones who have a handle on your risks as an organization and can provide insight as to what threats your teams should be focused on addressing.

Natalia: How should managers support the training and education needs of their purple team?

Matthew: Making sure people have the right training and the right tooling for their job can be hard. You walk through any expo floor, and there are hundreds of boxes with fancy lights and a million product portfolios. You could buy every single box off that expo floor, and none of it’s going to do you any good unless you’ve got the right person operating how that box works and interpreting that data. Your people are more important in some respects than the technology because they’re your eyes and ears on what’s happening on the network. If you’ve got a system that sends 50 high-risk alerts, and no one is picking up and reacting to those alerts, you’ve just got an expensive box with flashing lights.

If you’re hiring someone onto a purple team, make sure they are supported to attend conferences or network with industry peers and invest in their training and education. That is always going to give you better results as they learn and are exposed to more insights, and your people will feel more valued as well. If you want to learn about adversarial behavior and how you can use computer hacking to provide assurance outputs to businesses, read Hands-on Hacking: What can you expect? by Hacker House.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How purple teams can embrace hacker culture to improve security appeared first on Microsoft Security Blog.

Understanding the threat landscape and risks of OT environments

June 1st, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Chris Sistrunk, Technical Manager in Mandiant’s ICS/OT Consulting practice and former engineer at Entergy, where he was a subject matter expert on transmission and distribution of supervisory control and data acquisition (SCADA) systems. In this blog, Chris introduces operational technology (OT) security and shares the unique challenges and security risks to OT.

Natalia: What’s the difference between OT, industrial control systems (ICS), and supervisory control and data acquisition (SCADA)?

Chris: OT, ICS, and SCADA are terms that describe non-IT digital systems. The main encompassing term is operational technology, or OT, which describes digital systems that interact with physical processes in the real world—such as turbines, mixing tanks, industrial robots, and automated warehouses. If you think about manufacturing, power grids, or oil and gas, OT encompasses the cyber-physical systems (CPS) that monitor and control production—how companies make their money producing things like food, water, pharmaceuticals, chemicals, or tractors.

Industrial control systems, or ICS, is under the umbrella of OT. A control system uses automation to take the human out of the equation. For instance, a car plant might have replaced an assembly line with robots, or a food processing plant replaced manual adjustments of ingredients with specific logic code. Industrial control systems are everywhere—manufacturing, retail distribution centers, water treatment, oil and gas, transportation and mining, as well as building automation (like HVAC, elevators, access control, and CCTV) in hospitals, smart buildings, and datacenters.

Supervisory control and data acquisition, or SCADA, is a specific type of industrial control system that enables organizations to monitor and control OT equipment across a wide geographic area. Power companies, oil and gas pipelines, and water facilities have SCADA systems because they cover a large area.

Natalia: What makes securing OT uniquely challenging?

Chris: Security for IT systems has been around for a long time. In the 1980s, control systems didn’t look like normal computers. They were designed for a specific purpose—to last long and to withstand heat and very cold temperatures in wet or caustic environments. These control systems were not connected to any other networks. IT had security, but it didn’t exist in control systems.

Over the years, control systems have become more connected to IT networks—and sometimes to the internet as well—because upper management wants a real-time view of the next day's production, or of the projections for next week or next month based on historical output. The only way to get that information in real time is to connect the two systems—IT and OT. When you connect control systems to something that's eventually connected to the internet, there might be firewalls in between or there might not. That's a problem.

If you take an IT security network sensor and put it in a control system, it will only understand what it knows—standard IT protocols like HTTP and FTP. It won’t understand the Siemens S7 protocol or the GE SRTP protocol that are not used in IT systems. You also can’t put antivirus or endpoint detection and response (EDR) agents on most of these systems because they’re not Windows or Linux. They’re often real-time embedded operating systems that may be completely custom, plus they also require fast response times that could be affected by antivirus and EDR operations.

Natalia: What threats are prevalent in OT environments?

Chris: We have seen five publicly known cyberattacks against control systems, including Stuxnet, the power grid cyberattacks on Ukraine in 2015 and 2016, and the 2017 Triton attack on safety control systems in a petrochemical facility.

Insider threats are also something to pay attention to. The first publicly known attack on a control system was in the late 1990s in Australia, where a fired employee still had access to the equipment and caused a sewage spill. Several years ago, someone was fired at a paper mill in Louisiana, but no one removed his remote access. He logged in and shut down the plant. They knew exactly who it was, so the FBI got him, but the plant was down for about three days, which likely cost millions of dollars.

Besides security threats, there’s the risk of an honest mistake. Someone is making a change at 5 PM on a Friday that they didn’t test out, and it causes a network outage, and people have to work over the weekend to fix it. Not having a good change management procedure, standard operating procedures, or rollback plan can cost millions of dollars.

Natalia: What do you think about the incident on February 5, 2021, when a hacker gained access to the water treatment system of Oldsmar, Florida?

Chris: Many water and wastewater companies are just beginning their security journey. They don’t have a large budget and may have only one or two IT folks—notice I didn’t say IT security folks—and they have to wear multiple hats. In the case of the Florida attack, I’m not surprised because most don’t have security standards like active monitoring and ensuring secure access via VPN and multifactor authentication for employees and contractors. They’re not regulated to have strong cybersecurity controls and don’t experience many attacks.

Just because someone can change a value on a screen to 100 times the original doesn't mean the physical process can actually change that way. When you change a chemical in a water system, it is not going to change instantaneously, and it may not even be physically possible to reach that amount. Water and wastewater facilities also manually take multiple samples every day, so they would have caught any changes before they affected water utility customers.

Natalia: Are contractors a potential attack vector for OT?

Chris: In this case too, it's usually a byproduct of shadow IT, where OT personnel provide remote access to contractors without going through IT to do it in a secure way using VPN, multifactor authentication, and rotating passwords. You need to provide contractors with visibility and access to the OT network for ongoing maintenance and monitoring, and there are only a few people on your side to manage that access. Your contractors are also probably not required to have security training.

In the early 2000s, we had remote access to substations. If you knew something was wrong, you could dial in and look, and then go back to what you were doing. But if something is on the internet, opportunistic threat groups and malicious cyber criminals are going to poke around and be able to do stuff. Organizations should be concerned and look at their security, including who has remote access.

Natalia: Are you seeing more ransomware attacks impacting OT?

Chris: We are. Ransomware is terrible, and it's affecting hospitals, which have control systems, their own power plants, and water facilities because they can't rely on city water if it goes out. They also have life support systems, imaging, and surgery support. Ransomware has also affected oil and gas companies and power companies on the IT side.

A lot of the attacks were more effective because the organizations didn't have any segmentation between control systems and the IT network. If you're using the open platform communications (OPC) protocol, the old version requires a huge dynamic range of TCP ports, on the order of 64,000, to be open, which includes the remote desktop protocol (RDP) port 3389 and the VNC port 5900. With that many ports open, you effectively don't have a firewall between IT and OT.
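
One way to check for the segmentation gap described here is to probe, from the IT side, whether high-risk OT and remote-access ports are reachable at all. A minimal Python sketch, assuming you are authorized to scan the hosts involved; the address and port list are placeholders.

```python
# Hypothetical segmentation check: from an IT-side host, probe OT
# addresses for ports that should never be reachable across the
# IT/OT boundary. Only run this where you are authorized to scan.
import socket

RISKY_PORTS = {
    102: "Siemens S7",
    502: "Modbus/TCP",
    3389: "RDP",
    5900: "VNC",
    20000: "DNP3",
}

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.168.10.20"]:  # hypothetical OT host
    for port, proto in RISKY_PORTS.items():
        if reachable(host, port):
            print(f"{host}:{port} ({proto}) is reachable from the IT network")
```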

There must be an intentionally engineered design to help protect control systems, because if there isn't, you leave yourself open to threats that don't care what kind of organization you are.

Learn more

To learn more about IoT and Microsoft Security read our IoT security blog series.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Understanding the threat landscape and risks of OT environments appeared first on Microsoft Security.

Mitigate OT security threats with these best practices

May 18th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Chris Sistrunk, Technical Manager in Mandiant’s ICS/OT Consulting practice and former engineer at Entergy, where he was a subject matter expert on transmission and distribution of supervisory control and data acquisition (SCADA) systems. In this blog, Chris shares best practices to help mitigate the security threats to operational technology (OT) environments.

Natalia: What tools do you use to monitor and govern your OT environment?

Chris: First, you can use the control system itself, which already offers some level of visibility into what's happening. It looks like NASA mission control: operators sit and watch the process all day, so you can see what looks normal and what doesn't.

What’s new is not just looking at the system itself but at OT network security. Especially in the last five or six years, the focus has been on getting network visibility sensors into the control network. There are several vendors, like MODBUS, Siemens S7, and DNP3, out there that understand the protocols and have developed sensors that are purpose-built to analyze OT network traffic rather than IT traffic.

With a newer control system, it’s much easier. Many times, they’ll use virtual machines to manage OT, so you can put agents in those areas. If it’s a Windows 10 or Windows 7 environment, you can even use Microsoft Defender Antivirus and collect the Windows event logs and switch logs. If you don’t look at the logs, you’re not going to know what’s there, so you need to monitor behavior at the network layer using technologies like deep packet inspection (DPI) to identify compromised devices.
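
As a rough illustration of what that network-layer monitoring involves, here is a small Python sketch that reads a packet capture and flags connection attempts to ports outside an allowlist of expected OT protocols. It assumes the scapy library is installed; the capture file name and the allowlist are illustrative.

```python
# Rough sketch of allowlist-based OT traffic review: flag TCP
# connection attempts (SYNs) to ports outside the expected OT
# protocol set. Requires scapy (pip install scapy).
from scapy.all import rdpcap, IP, TCP

EXPECTED_PORTS = {102, 502, 20000}  # S7comm, Modbus/TCP, DNP3

for pkt in rdpcap("ot_segment.pcap"):  # hypothetical capture file
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        tcp = pkt[TCP]
        # A SYN without ACK marks a new outbound connection attempt.
        if tcp.flags.S and not tcp.flags.A and tcp.dport not in EXPECTED_PORTS:
            print(f"Unexpected connection attempt: "
                  f"{pkt[IP].src} -> {pkt[IP].dst}:{tcp.dport}")
```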

Natalia: What are some best practices for securing remote access to the OT network?

Chris: Number one, if you don’t need it at all, don’t have it. That’s the most secure option.

Number two, if you have to have it, make sure it’s engineered for why it’s needed and tightly control who can use it. It’s also important to make sure it’s monitored and protected with multifactor authentication (MFA) unless it’s just for read-only access to the control network, in which case it’s less of a risk. A lot of times, these OT equipment vendors require in their warranty contracts that they have remote access with full control and the ability to change configurations, which means you’ve given someone a high level of privileged access to your control systems.

Number three, have a process and procedure for when that remote access is used and when it’s turned off. You should at least know who was there and for how long, and who did what, using audit logs, for example.
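
Taken together, those three rules can be expressed as a single gate on each remote-access request. A minimal Python sketch, with hypothetical request fields and an assumed eight-hour session cap, neither of which comes from a real product:

```python
# A minimal gate encoding the three rules above for vendor remote
# access. Fields and the eight-hour cap are illustrative policy choices.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    vendor: str
    justification: str    # rule 1: engineered for a documented need
    read_only: bool
    mfa_passed: bool      # rule 2: MFA unless read-only
    session_logged: bool  # rule 3: audited, time-limited sessions
    duration_hours: int

def allow(req: AccessRequest) -> bool:
    if not req.justification:
        return False
    if not req.read_only and not req.mfa_passed:
        return False
    if not req.session_logged or req.duration_hours > 8:
        return False
    return True

# Example: a four-hour firmware update session with MFA and logging.
print(allow(AccessRequest("turbine-vendor", "firmware update",
                          read_only=False, mfa_passed=True,
                          session_logged=True, duration_hours=4)))  # True
```

In practice this logic would live in your access-management tooling; the point is that each of the three rules is independently checkable and auditable.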

I want to highlight that the Water ISAC, the international security network created for the water and wastewater sector, published a free document called 15 Cybersecurity Fundamentals for Water and Wastewater Utilities. It’s a reminder to consider where remote access is coming from.

Natalia: What percentage of organizations are continuously monitoring their OT networks?

Chris: Today, it’s the exception, not the rule. The only ones monitoring are the ones that have to do it, such as nuclear companies, and the 3,000 or so largest electric utilities that are under North American Electric Reliability Corporation Critical Infrastructure Protection Standards (NERC CIP) regulation, as well as any companies that might have been attacked in the past. But even NERC CIP doesn’t require continuous network security monitoring, just monitoring event logs in a SIEM, for example, which means you can still miss stuff.

So percentage-wise, it’s not very many, especially in non-regulated sectors like manufacturing, pharmaceuticals, chemicals, oil and gas, mining, and warehousing and logistics.

Companies don’t like to spend money on security if they don’t have to. Unfortunately, it’s going to take an attack. We didn’t have electric reliability standards until we had two Northeast blackouts that affected millions of people in 1965 and in August 2003. After that, they said, “Oh, we should probably have some electric reliability standards.” When I started at the power company, one of the lineman safety instructors said, “Safety rules are written in blood.” The only reason why we have reliability rules is because we’ve had darkness.

Natalia: How can teams break down IT and OT silos?

Chris: Communication. It’s the only thing you can do. If you’re in IT, go take a box of doughnuts down to the operators and ask, “What are the pain points here? How can I learn more about what you do so I can understand and so you won’t slap my hand every time I say, ‘Please patch.’” They will be overjoyed that someone came and visited them to learn about what they do.

Generally, if an IT guy with a white hard hat that has never had a scratch on it comes in, operators think, “Don’t touch anything.” But if you build that trust and communication, that strengthens an organization, and you can start training and knowledge sharing.

Natalia: What should roles and responsibilities look like?

Chris: Now, anything that's on a network, even in the control system environment, can report up through the chief information officer (CIO) or chief information security officer (CISO). Even in power companies, they're putting everyone, even the folks who do SCADA for the power grid, under the CIO or CISO instead of under operations. At smaller companies, like water and wastewater utilities, it's still the old situation, where you have an IT guy and an OT engineer or operator. At larger companies, either OT reports up through the IT organization under the CIO, or IT is under the CIO while operations stays under operations, with security as the link under the CISO. You might have security people in IT and security people in OT.

If you’re wondering whether the CISO should be responsible for both IT and OT security, it’s a simple answer. You can’t have enterprise-wide security unless you include OT. Security needs to be applied to it all, but go to a provider that says they provide enterprise-wide security and ask, “Do you know anything about OT networks in power plants?” “Nope.” OK, then, you don’t do enterprise-wide security. You’re not protecting what makes money.

Natalia: Should companies unify IT and OT security in the security operations center (SOC)?

Chris: I’ve seen it implemented as one unified SOC, but I’ve also seen two separate ones because if they have physically separate systems, they have to have physically separate SIEMs. For instance, a nuclear plant will have its own SOC, and corporate will have its own SOC. If a power company has a nuclear power plant, that plant will have its own SOC because it’s air-gapped and not connected to the outside world or the IT network. But if you have an oil and gas environment, it may have both combined into one.

There are pros and cons. If you have the money, the budget, and the people, you can do it either way. Just put your people in a room, buy them pizza for lunch, and let them come up with the best solution. There are advantages to having a unified SOC. You don't even need an OT-specific SOC analyst; just have a good IT security person learn from the control engineers or operators, and then create those alerts and do hunting, tool tuning, and rule tuning.

Natalia: What would you say to a board of directors to get them to prioritize OT security?

Chris: I’d keep it short and sweet: “What would happen if you couldn’t make hammers anymore?” If the CISO can’t answer that question, you know the person needs to gain that awareness. Do we have visibility of the network? Do we have offsite backups for our control systems? Do we have security awareness training?

Board members are not concerned with the latest and greatest advanced persistent threat (APT), but they do care about risk to the business. They'll say, "We don't have security implemented because we don't have enough people, so we have some risk of downtime." If you talk to any manager, they'll know exactly how much money they lose per day if production goes down. We look at business risk in terms of the equation: risk equals impact times probability. Since we don't have enough data about cyberattacks in OT to establish a probability, we tie cybersecurity to the risk register and substitute probability with exploitability. How easy is it to exploit? Can a script kiddie do it? Could my 13-year-old son do it?

If you’ve got an operating system exposed to the Internet, discoverable via Shodan, it is exploitable within minutes. What is the impact of that? If it’s in a chemical, pharmaceutical, food factory, or refinery, that’s a problem not just for downtime but more importantly because it could cause a safety or environmental incident. If it’s a temperature gauge, that’s much less risk. Companies will have a risk register for everything else, including natural disasters. They should have one for OT cybersecurity risk too.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Mitigate OT security threats with these best practices appeared first on Microsoft Security.

Evolving beyond password complexity as an identity strategy

April 22nd, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Troy Hunt, founder of Have I Been Pwned, information security author, and instructor at Pluralsight. In this blog, Troy talks about the future of identity and shares strategies for protecting identities.

Natalia: What threats will be the most important to focus on in the next year?

Troy: We're seeing more one-time password phishing. Resisting that is the value proposition of something like U2F, but how do we make phishing-resilient authentication mechanisms mainstream? The other thing that's particularly concerning is the rate of SIM card hijacking. It concerns me greatly that it seems to be so prevalent and that it's so easy, almost by design, to port a SIM from one location to another. As an industry, we need to ask, "What is the level of identity assurance for a phone number?" Is it very weak, or is it very strong, in which case telecommunications companies need legislation to change the ease with which numbers get ported? Unless we can get people on the same page, we're going to keep having these problems.

Natalia: What should IT professionals prioritize?

Troy: I would really like IT professionals to better understand the way humans interact with systems. Everyone says, “Just force people to use two-factor authentication.” Do you still want customers? I think every IT professional should have to go through two-factor authentication enrollment with my parents. Everyone should have to learn what it’s like to take non-technical people and try and get some of these things working for them. We can’t just look at these things in a vacuum.

I think U2F is a brilliant technical solution, but it is such an inherently human-flawed mechanism for many reasons. I have enough trouble trying to get my parents to use SMS-based two-factor authentication. Imagine if I had to tell my parents, “You’ve now got this little USB-looking thing, and you need to always have it with you in case you need to log into your device.” We have so many good technical solutions that come at the cost of being usable for most humans, myself included on many occasions.

I’d like us to have a much better understanding of that, which also speaks to solutions like passwordless authentication. We need to give more credit to what passwords in the traditional sense do extremely well. The thing that passwords do better than just about everything else is that everyone knows how to use them. It’s like using your date of birth for knowledge-based authentication. It sucks, but every single person knows how to use it, and that makes a really big difference.

Natalia: What’s the use case for password managers?

Troy: Password managers are a way of storing one-time passcodes (OTPs), but it’s important to recognize that password managers are not just for passwords. I have my credit card details in there, and every time I go to pay at a store, I do the control backslash and automatically fill in the credit card details. I have other secrets in there, like my driver’s license and other data. In many ways, passwords are just one part of the password manager solution, but certainly, for the foreseeable future, we’re going to have passwords so there’s a strong use case for password managers.

Another use case is a family account. If my partner wants to log into our Netflix account, she has her own identity, but there’s one set of credentials. She asks, “Hey Troy, what’s the password for the Netflix account?” It’s a string of gobbledygook. How am I going to get her the password? Do I message it to her, because then it’s in the thread in my unencrypted SMS? But if you have a password manager where you have shared vaults, you can just drop it in the shared vault. That’s another good example of where a password manager is more than just me trying to remember my secrets.

Natalia: Since we’re likely to continue to use passwords, what controls should we put in place to protect them?

Troy: Ultimately, this password is the key to your identity. We’ve had passwords on computer systems for about 60 years and the era in which they were born was so simple. It was before the internet and before social media and before all these other ways we can lose or disclose them. Over time, we started saying, “Let’s have password complexity rules. More entropy. More entropy equals stronger.”

When I used to be able to travel and speak to an audience, I'd talk about passwords and password complexity. I'd say, "Imagine you want to have a password that is the word 'password', and a website says you have to have at least one uppercase character. What do you do? You capitalize the first letter." Everyone in the audience is laughing nervously and looking at me like, "Oh, you figured it out?" I'd tell them, "You have to have a number. What do you do? You put a one at the end." And there's the same nervous laughter. There is this human side that works in complete parallel to the whole mathematics of entropy and having more character types and longer passwords.

As we've progressed, we've started to recognize that arbitrary password composition criteria are not a very good idea, and we're looking at whether we can have lists of banned passwords, like passwords from previous data breach corpuses. Are you using a password that is already out there floating around in data breaches? Maybe we will get to a time when this won't be necessary because we will be truly passwordless. In the interim, I think that having a better understanding of what makes a bad password is important, and educating users on this comes first and foremost.
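
Checking passwords against breach corpuses is something you can script today against the Pwned Passwords range API from Troy's own Have I Been Pwned service, which uses k-anonymity so the password itself never leaves your machine. A short Python sketch, with error handling and rate limiting omitted for brevity:

```python
# Sketch: check a candidate password against the Have I Been Pwned
# Pwned Passwords range API. Only the first five characters of the
# SHA-1 hash are sent; the API returns matching suffixes and counts.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

# The classic "complexity-compliant" password from the anecdote above.
print(pwned_count("Password1"))  # a large breach count, not zero
```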

Learn more

To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Evolving beyond password complexity as an identity strategy appeared first on Microsoft Security.

How far have we come? The evolution of securing identities

April 13th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Troy Hunt, founder of Have I Been Pwned, information security author, and instructor at Pluralsight. In this blog, Troy shares his insights on the evolution of identity, from the biggest gaps in identity to modern technology solutions. 

Natalia: How has identity evolved over the past 10 years?

Troy: There is so much identity-related data about other people accessible to everyone that the whole premise of having confidence in identity has fundamentally changed. A few years ago, I was invited to testify in Congress about how knowledge-based authentication has been impacted by data breaches. The example I gave was that my father called a telecommunications company to shift his broadband plan to another tier. He told them his name and they asked for his date of birth, as if that’s a secret.

The biggest shift is with this premise that identity can somehow be assured based on knowledge-based authentication like date of birth, mother’s maiden name, where you went to school, or, in the United States, Social Security number. That idea is fundamentally flawed and is a big area of identity that needs to change. There are so many services that have absolutely no reason to have your date of birth but do.

Natalia: What are the current gaps in identity solutions?

Troy: The traditional approach in the United States, where someone says, “Just give us your Social Security number and then we’ll know it’s you and it will be fine,” has always been inherently flawed, but it’s even more flawed now.

The bigger concern is what if other people try to prove my identity? I’m concerned about SIM swapping because there’s so much identity assurance that is done via SMS. The telecommunications companies will say, “You shouldn’t be doing that. You can’t be confident the person who owns the number is the right person.” And the banks will say, “This is kind of all we have.” I asked my telco, “Can we put a lock on my SIM so the only way someone can migrate my SIM is if they come into your office and prove identity with a passport or driver’s license?” That would eliminate a lot of the problems. They said, “We can’t do that because it would be anticompetitive. Government legislation says we need to make it easy for people to transfer their number to another provider so that people have freedom of choice. Otherwise, they’re locked into the provider.” And I responded with, “The outcome of that—depending on the platform I’m using—could be that someone gets into a really important account of mine.” Their response: I shouldn’t have been using SIMs as a means of identity verification.

Natalia: How can organizations mitigate identity risk?

Troy: For many organizations, there hasn’t been a lot of forethought around what happens when incidents impact identity. One example is breach preparedness. For many years, many organizations would do disaster recovery planning—the annual entire-site-has-gone-down exercise. I rarely see them drill into the impact of a data breach. Organizations rarely dry run what happens when information is leaked that may enable others to take on identities.

One organization that had a data breach and did exceptionally well with disclosure was Imgur. Within 24 hours, they had all the right messaging sent to everyone and cycled passwords. I asked the Chief Technical Officer, “How did you do this so quickly?” And he said, “We plan for it. We had literally written the procedures for how we would deal with an incident like this.” That preparedness is often what’s lacking in organizations today.

Natalia: What’s the biggest difference between enterprise and consumer identity technologies?

Troy: With internal, enterprise-facing identity, these individuals work for your organization and are probably on the payroll. You can make them do things that you can’t ask customers to do. Universal 2nd Factor (U2F) is a great example. You can ship U2F to everyone in your organization because you’re paying them. Plus, you can train internal staff and bring them up to speed with how to use these technologies. We have a lot more control in the internal organization.

Consumers are much harder. They are more likely to just jump ship if they don’t like something. Adoption rates of technologies, like multifactor authentication, are extremely low in consumer land because people don’t know what it is or the value proposition. We also see organizations reticent to push it. A few years ago, a client had a 1 percent adoption rate of two-factor authentication. I asked, “Why don’t you push it harder?” They said that every time they have more people using two-factor authentication, there are more people who get a new phone and don’t migrate their soft token or save the recovery codes. Then, they call them up and say, “I have my username or password but not my one-time password. Can you please let me in?” And they have to go through this big spiral—how do we do identity verification without the thing that we set up to do identity verification in the first place?

Natalia: What should you consider when building systems and policies for consumers to balance user experience and security?

Troy: One big question is: What is the impact of account takeover? For something like Dropbox, the impact of account takeover is massive because you put a lot of important stuff in your Dropbox. If it’s a forum community like catforum.com, the impact of account takeover is minimal.

I’d also think about demographics. Dropbox has enormously wide adoption. My parents use Dropbox and they’re not particularly tech-savvy. If we’re talking about Stack Overflow, we’ve got a very tech-savvy incumbent audience. We can push harder on asking people to do things differently from what they might be used to, which is usually just a username and a password.

Another question is: Is it worth spending money on a per individual basis? My partner, who’s Norwegian, can log on to her Norwegian bank using a physical token. The physical token is not just an upfront cost for every customer but there’s also a maintenance cost. You’re going to have to cycle them every now and then, and people lose them. And you need to support that. But it’s a bank so they can afford to make that investment.

Natalia: What’s your advice on securing identities across your employees, partners, and customers?

Troy: I recommend some form of strong authentication in which you have confidence that a username and a password alone are not treated as identity. That worries me, particularly given there’s so much credential stuffing, and there are billions of credential pairs in lists. There’s also the big question: How did we establish identity in the first place? Whether it be identity theft or impersonation or even sock puppet accounts, how confident do we need to be in the identity at the point of registration, and then subsequently at the point of reauthentication? That will drive discussions around what level of identity documentation we need. But again, we come back to the fact that we don’t have a consistent mechanism in the industry, or in even in one single geography, to offer high assurance of identity at the time of registration.

Natalia: Passwordless is a huge buzzword. A lot of people think of it as a solution to many of our identity problems. What’s your perspective?

Troy: I first started doing interviews a decade ago and people would ask, “When are we going to get rid of passwords? Are we still going to have passwords 10 years from now?” Well, we’ve got more passwords than ever, and I think in 10 years, we will have more passwords. Even as we get passwordless solutions, the other passwords don’t go away.

I have a modern iPhone, and it has Face ID. The value proposition of Face ID is that you don’t need a password. You are passwordless to authenticate your device. When the phone came, I took it out of the box and had to get on the network. What’s the network password? I’ve got no idea, so I go to 1Password and pull it out. So, there’s one password. Then, the phone asks: Would you like to restore from iCloud? What’s your iCloud password? We’re two passwords in now. Would you like to use Face ID? Yes, because I want to go passwordless. That’s cool but you’ve got to have a password as a fallback position. Now, we’re three passwords in to go passwordless. Passwordless doesn’t necessarily mean we kill passwords altogether but that we change the prevalence with which we use them.

Keep an eye out for the second part of the interview where Troy Hunt shares best practices on how to secure identities in today’s world.

Learn more

To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How far have we come? The evolution of securing identities appeared first on Microsoft Security.

How to build a successful application security program

March 29th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Tanya Janca, Founder of We Hack Purple Academy and author of the best-selling book “Alice and Bob Learn Application Security.” Previously, Tanya shared her perspectives on the role of application security (AppSec) and the challenges facing AppSec professionals. In this blog, Tanya shares how to build an AppSec program, find security champions, and measure its success.

Natalia: When you’re building an AppSec program, what are the objectives and requirements?

Tanya: This is sort of a trick question because the way I do it is based on what's already there and what they want to achieve. For the Canadian government, I did antiterrorism work, and you better believe that was the strictest security program that any human has ever seen. If I'm working with a company that sells scented soap on the internet, the level of security they require is very different, their budget is different, and the importance of what they're protecting is different. I try to figure out what the company's risks are and what their tolerance is for change. For instance, I've been called into a lot of banks, and they want the security to be tight, but they're change-averse. I find out what matters to them and try to bring their eyes to what should matter to them.

I also usually ask for all scan results. Even if they have almost no AppSec program, usually people have been doing scanning or they've had a penetration test. I look at all of it, pick out the top three issues, and say, "OK, let's just obliterate those top three things," because quite often the top two or three account for 40 to 60 percent of their vulnerabilities. First, I stop all the bleeding, and then I create processes and security awareness for developers. We're going to have a secure coding day and deep dive into each one of those things. I'm going to spend quality time with the people who review all the pull requests so they can look for the top three, and we start setting specific, measurable goals.
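
That "top three" triage amounts to counting findings by category across whatever scan results are available. A minimal Python sketch, with an illustrative findings list standing in for real scanner output:

```python
# Sketch of "find the top three": aggregate findings by category
# across collected scan results. The data below is illustrative.
from collections import Counter

findings = [
    {"category": "XSS"}, {"category": "SQL injection"},
    {"category": "XSS"}, {"category": "Outdated dependency"},
    {"category": "XSS"}, {"category": "SQL injection"},
]

counts = Counter(f["category"] for f in findings)
total = sum(counts.values())
for category, n in counts.most_common(3):
    print(f"{category}: {n} findings ({n / total:.0%} of total)")
```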

It’s really important to get the developers to help you. When you have a secure coding training, a bunch of developers will self-identify as the security developer. There will be one person who asks multiple questions. We’re going to get that person’s email. They’re our new friend. We’re going to buy that person some books and encourage open communication because that person is going to be our security champion. Eventually, many of my clients start security champion programs and that’s even better because then you have a team of developers—hopefully one per team—that are helping you bring things to their team’s attention.

Natalia: What are some of the key performance indicators (KPIs) for measuring security posture?

Tanya: As application security professionals, we want to minimize the risk of scary apps and then try to bring everything across the board up to a higher security posture. Each organization sets that bar differently. To measure an application security program, I would check that every app receives security attention in every phase of the software development life cycle. To build one, I take inventory of all their apps and APIs. Inventories are a difficult problem in application security; it's the toughest problem our field has not solved.

Once you have an inventory, you want to figure out if you can do a quick dynamic application security testing (DAST) scan on everything. It will light up like a Christmas tree on some apps, and on others it will find only a couple of lows. It's not perfect, but it's what you can do in 30 days. You can scan a whole bunch of things quickly and see, OK, these things are terrifying, and these things look OK. Now, let's concentrate on the terrifying things and make them a little less scary.

Natalia: Do you have any best practices for threat modeling cloud security?

Tanya: For threat modeling generally, I introduce it as a hangout session with a security person and try not to be too formal the first time, because developers usually think, “What is she doing here? Danger, Will Robinson, danger. The security person wants to spend time with us. What have we done wrong?” I say, “I wanted to talk about your app and see if there’s any helpful advice I can offer.” Then, I start asking questions like, “If you were going to hack your app, how would you do it?”

I like the STRIDE methodology, where each of the letters represents a different thing that you need to worry about happening to your apps: spoofing, tampering, repudiation, information disclosure, denial of service (DoS), and elevation of privilege. Could someone pretend to be someone else? Could someone pretend to be you? I go through it slowly in a conversational manner because that app is their baby, and I don't want them to feel like I'm attacking their baby. Eventually, I teach them STRIDE so they can think about these things themselves. Then, we come up with a plan, and I say, "OK, I'm going to write up these notes and email them to you." Writing the notes down means you can assign tasks to people.
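
For teams that want that conversational checklist in reusable form, STRIDE can be captured as six prompts. The wording below is an illustrative paraphrase, not a formal methodology:

```python
# STRIDE as a conversational checklist, one prompt per category.
STRIDE = {
    "Spoofing": "Could someone pretend to be another user or service?",
    "Tampering": "Could data be modified in transit or at rest?",
    "Repudiation": "Could someone deny an action because we lack audit logs?",
    "Information disclosure": "Could data leak to someone who shouldn't see it?",
    "Denial of service": "Could the app be made unavailable to its users?",
    "Elevation of privilege": "Could a user gain rights they shouldn't have?",
}

for category, prompt in STRIDE.items():
    print(f"{category}: {prompt}")
```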

With threat modeling in the cloud, you must ask more questions, especially if your organization has had previous problems. You want to ask about those because there will be patterns. The biggest issue with the cloud is that we didn’t give them enough education. When we’re bringing them to the cloud, we need to teach them what we expect from them, and then we’ll get it. If we don’t, there’s a high likelihood we won’t get it.

Natalia: How can security professionals convince decision-makers to invest in AppSec?

Tanya: I have a bunch of tricks. The first one is to give presentations on AppSec. I would do lunch and learns. For instance, I sent out an email once to developers: “I’m going to break into a bank at lunch. Who wants to come watch?” and then I showed them this demo of a fake bank. I explained what SQL injection was and I explained how I’d found that vulnerability in one of our apps and what could happen if we didn’t fix it. And they said, “Woah!” Or I’d ask, “Who wants to learn how to hack apps?” and then I showed them a DAST tool. I kept showing them stuff and they started becoming more interested.
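
The idea behind that fake-bank demo fits in a few lines. The sketch below uses an in-memory SQLite table as a stand-in and contrasts a string-built query with a parameterized one; the payload is the classic ' OR '1'='1.

```python
# Sketch of the SQL injection demo: string-built versus parameterized
# queries, with an in-memory SQLite table standing in for the fake bank.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")

user_input = "' OR '1'='1"  # attacker-supplied "username"

# Vulnerable: the input is concatenated into the SQL, so the WHERE
# clause becomes always-true and every row comes back.
rows = conn.execute(
    f"SELECT * FROM accounts WHERE user = '{user_input}'").fetchall()
print("string-built query returned:", rows)   # both accounts leak

# Safe: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM accounts WHERE user = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)  # []
```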

Then, I had to interest the developer managers and upper management. Some were still not on board because this was their first AppSec program, and it was my first AppSec program too. No one would do what I said, even though I had penetration test results from a third party. We had hired four different security assessors, and they'd reported big issues that needed to be addressed.

So, I came up with a document called the risk sign-off sheet, which listed all the security risks and exactly what could happen to the business. I was extremely specific about what worried me. I printed it and I had a sign-off for the Director of Security for the whole building and the Chief Information Officer of the entire organization. I went to them and said, “I need your signature that you accept this risk on behalf of your organization.” I put a little note on the risk sign-off sheet that read: Please sign.

The Director of Security called and said, "What is this, Tanya?" and I told him, "No one will fix these things, and I don't have the authority to accept this risk on behalf of the organization. Only you do. I don't have the authority to make these people fix these things. Only you do. I need you to sign to prove that you were aware of the risks. When we're in the news, I need to know who's at fault." Both the CIO and the Director of Security refused to sign, and I said, "Then you have to give me the authority. I can't have the responsibility and not have the authority." It worked, and I've used that approach twice in my career.

It's also important to explain things to them using words they understand. The Head of Security, who was in charge of physical security and IT security, was a brilliant man, but he didn't know AppSec. When I explained that because of this vulnerability you can do this with the app, and this is what could result for our customers, he said, "Oh, let's do something." I had to learn how to communicate a lot better to do well at AppSec because as a developer, I would just speak developer to other developers.

Learn more

To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How to build a successful application security program appeared first on Microsoft Security.

The biggest challenges—and important role—of application security

March 11th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Tanya Janca, Founder of We Hack Purple Academy and author of the best-selling book “Alice and Bob Learn Application Security.” In this conversation, Tanya shares her insights on application security (AppSec), its role in the security organization, and challenges for AppSec professionals.

Natalia: How do you define application security?

Tanya: Application security, or AppSec, is every activity you do to make sure your software is secure. Let’s say there’s a Java developer that uses Spring Boot, and there’s a vulnerability. They hear a podcast about it and say, “I think we should probably update it because it sounded really scary on the podcast.” That contributes to application security.

However, quite often when people talk about application security, they are talking about a formalized program at a workplace to make sure that the applications being released are reliably secure. We want to make sure every single application gets security attention, and that each gets the same security attention and support. We want to do the best we can to verify that it is at the posture that we have decided is our goal. Each organization sets that differently, which I talk about a lot in the book I released last year, but basically, application security professionals want to minimize the risk of the scary apps and then bring everything across the board up to a better security posture. That requires talking to almost everyone in IT on a regular basis. I like to think of application security folks as techie social butterflies.

Natalia: How does the security skills gap impact AppSec?

Tanya: I'm obviously biased because I run a training company, but I started it because people kept asking me to train them on how to do this work, and because there is a gap. There is a gap, in general, in IT security in finding people who have experience and understand best practices rather than just guessing, and in knowing how to train people.

In application security, there tends to be an even wider gap. In August 2020, I started a podcast called Cyber Mentoring Monday because I run #CyberMentoringMonday on Twitter. For the entire first year, every single person said, "I want to be a penetration tester," but then I would ask them more questions, because I was trying to find each of them a skilled professional mentor, and lots of them didn't know what AppSec was. They didn't know what threat hunting was. They didn't know what risk analysis was. They didn't know that forensics or incident response existed. We would talk more, and it would turn out that there was a different security focus they were really interested in, but they had only ever heard of penetration testing.

That was the same for me. I thought you had to be a penetration tester or a risk analyst, but there are a plethora of jobs. I started this podcast so people could figure out what types of jobs they wanted and because I really want to attract more people to our field. A big problem is there is no perfect way to enter AppSec.

Natalia: What are the biggest challenges for those in AppSec?

Tanya: The first AppSec challenge is education, with some developers not understanding how to create secure code. It’s not that they don’t want to. It’s that they don’t understand the risk. They don’t understand what they are supposed to do and a lot of them feel frustrated because they think, “I want my app to be perfect and the best ever,” and they know security is part of that, but they do not have the means to do it.

The second challenge that I see at almost every single workplace is trying to get buy-in. When I did AppSec full time, at certain places I would spend 50 percent of every day just trying to be allowed to do my job. For instance, I want this new tool, and here are the reasons why, and people would respond by saying, “That’s expensive. Developer tools are cheaper.” I would say, “I’m not a developer.” I had to learn how to communicate with management in a way I never had to do as a developer. When I was a developer, I would just say, “It’s going to be two weeks.” If they asked if I could do it faster, I would ask, “Do you want to pay overtime?” and then they would say either yes, and we would do overtime, or they would say no. There is no persuasion.

With AppSec, I had to say, "We have 20 apps. I know you want to spend a zillion dollars on hiring four penetration testers to test our one mission-critical, super fancy app. But can we hire one for that and take the rest of the money to look at these legacy things that are literally on fire?" There is a lot of negotiation and persuasion that I had to learn to work in AppSec, which surprised me.

Natalia: What is the role of AppSec when it comes to cloud security?

Tanya: I find that everything that’s not taken becomes the AppSec person’s role because no one’s doing it and you’re freaking out about it. If you do AppSec in a company where everything is on-prem, quite often there’s an operations team and they will handle all the infrastructure, so you don’t have to. When you move to the cloud, and especially if you’re working in an org that does DevOps, you must suddenly learn cloud technology, at least the basics.

I’ve talked to many AppSec people and I’ve said, “If you’re moving to the cloud, I know that you think that you’re only in charge of the security of the software, but that’s not true anymore because of the shared responsibility model.” The shared responsibility model means that even if the cloud provider handles patches and the physical security of the data center, if you choose bad configurations, you are responsible for those. So, the first thing you need to do is check out the shared responsibility model to know what your side must do so you don’t miss super important stuff.

When we move to the cloud, understanding shared responsibility is really important, and then setting out a process so you get reliable results. Ideally, every phase of the software development lifecycle has one or more security-supporting activities. If you're using the cloud, there is a decent chance that you're doing DevOps, in which case the developers become DevOps people, and you want to talk to them about securing both development and operations. If they're just doing development and a separate team does operations, there may be a security team helping the operations team, but you want to make sure both teams receive security assistance. It's important for developers to understand the basics of cloud security so they don't accidentally do something terrifying.

With the cloud, one of my favorite things is automation. I used to work for Microsoft and am an Azure fan. Azure has Security Center, which is the best; it can automate a bunch of policies and check up on a lot of things for you. Learning how to use it to your advantage is important: learning which parts you want to turn on, which parts you need to budget for in the future, and which parts you'd rather cover with a third-party tool. Making those decisions, and then figuring out how to deploy safely and reliably into the cloud, is important for both the cloud security team and the AppSec person.
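
To make that concrete, here is a minimal sketch of one way to review Security Center findings centrally. It assumes you have enabled continuous export of recommendations into a Log Analytics workspace; the SecurityRecommendation table and its column names follow that connector's schema, so adjust them to match your environment.

```kusto
// Summarize unhealthy Security Center recommendations by severity
// (assumes continuous export to a Log Analytics workspace is enabled).
SecurityRecommendation
| where RecommendationState == "Unhealthy"
| summarize AffectedResources = dcount(AssessedResourceId)
    by RecommendationDisplayName, RecommendationSeverity
| order by AffectedResources desc
```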

Keep an eye out for the second part of the interview, as Tanya Janca shares best practices on how to build an application security program and measure its success.

Elevate your security posture with Microsoft Cloud App Security, Microsoft's cloud access security broker (CASB).

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


A playbook for modernizing security operations

February 11th, 2021 No comments

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest post from our new Voice of the Community blog series, Microsoft Product Marketing Manager Natalia Godyla talks with Dave Kennedy, Founder and Chief Technology Officer at Binary Defense. Dave shares his insights on security operations—what these teams need to work effectively, best practices for maturing the security operations center (SOC), as well as the biggest security challenges in the years to come.

Natalia: What are the standard tools, roles, frameworks, and services for a security operations team? What are the basic elements a SecOps team needs to succeed?

Dave: Your security operations team must have visibility into your infrastructure, both on-premises and off. Visibility is key because many of these attacks start with one compromised asset or one compromised credential. From there, attackers spread across the network and, in many cases, wreak a lot of damage. Your endpoints, network infrastructure, and cloud environments are where a lot of these issues happen. I recommend starting with high-risk areas like your endpoints.

Then, you need somewhere to ingest that data, such as a security information and event management (SIEM) system like Microsoft Azure Sentinel, and to go through log analysis and determine whether anything has been compromised.
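
As a hedged illustration of what that log analysis can look like once data is flowing, here is a minimal Kusto query sketch against the standard SigninLogs table in Azure Sentinel; the thresholds are arbitrary placeholders you would tune for your environment.

```kusto
// Surface accounts with a burst of failed sign-ins over the last day
// (assumes the Azure AD connector is populating the SigninLogs table).
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"                       // non-zero result codes are failures
| summarize FailedAttempts = count(), DistinctIPs = dcount(IPAddress)
    by UserPrincipalName
| where FailedAttempts > 25 or DistinctIPs > 5  // placeholder thresholds
| order by FailedAttempts desc
```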

Also, frameworks like the MITRE ATT&CK framework are a great baseline: here are specific attacks that have been seen in the wild, mapped to the specific adversaries active in your industry vertical. That can help you prioritize those attacks, get better at detection, and make sure you have the right logs coming into your environment to build detections.

Natalia: How can a team operationalize the MITRE ATT&CK framework?

Dave: When people first look at the MITRE ATT&CK framework, they freak out because it's so big, but it's a treasure trove of information. Everybody used to focus on a castle mentality: protect everything at the perimeter. But what happens when an attacker is inside your environment? Protection is still very important, and you want to have protective mechanisms in place, but protection takes time and requires cultural changes in many cases. If you're doing something like multifactor authentication, you have to communicate that to users.

The MITRE ATT&CK framework tells you what happens when attackers have gotten around your preventive controls. What happens when they execute code onto a system and take other actions that allow them to either extract additional information or move to different systems through lateral movement or post-exploitation scenarios and get access to the data? The MITRE ATT&CK framework is a way to conceptualize exactly what’s happening from an attacker’s standpoint and to build detections around those attack patterns.

In the damaging breaches we see, the attacker has usually had access to the environment for hours, days, or even months. If we can shave that time down, detect them in the first few minutes or hours of an attack, and shut them down, we've saved our company a substantial amount of damage. It's a framework to help you understand what's happening in your environment and when unusual activities are occurring so you can respond much more effectively.

Natalia: How much of the MITRE ATT&CK framework should a security team build into their detections? How much should they rely on existing tools to map the framework?

Dave: Many tools today have already done a lot of mapping to things like the MITRE ATT&CK framework, but it's not comprehensive. If you have an endpoint detection and response (EDR) product, it may cover only 20 percent of the MITRE ATT&CK framework. Mapping your existing tools and technology to the MITRE ATT&CK framework is a very common practice. For instance, you may have an email gateway that uses sandboxing virtualization techniques to detonate potential malware and see whether it's malicious. That's one component of your technology stack that can help cover certain components of the MITRE ATT&CK framework. You might have web content filtering that covers a different component of the framework, and then you have EDR covering a percentage of the endpoint detection pieces.

Technology products can help reduce the amount of effort that goes into the MITRE ATT&CK framework. It's really important, though, that organizations map those out to understand where they have gaps and weaknesses. Maybe they need additional technology for better visibility into their environment. I'm a huge fan of the Windows system service System Monitor (Sysmon). If you talk to any incident responder, they'll tell you that Sysmon data logs are a treasure trove of information from a threat hunting and incident response perspective.

It’s also important to look at it from an adversary perspective. Not every single adversary in the world wants to target your organization or business. If you’re in manufacturing, for instance, you’re not going to be a target of all adversaries. Look at what the adversaries do and what type of industry vertical they’re targeting so you don’t have to do everything in the MITRE ATT&CK framework. You can whittle the framework down to what’s important for you and build your detections based on which adversaries are most likely to target your organization.

Natalia: If a team has all the basics down and wants to mature their SecOps practices, what do you suggest?

Dave: Most security operations centers are very reactive. Mature organizations are moving toward more proactive hunting, or threat hunting. A good example: if you're sending all of your logs through Azure Sentinel, you can use Kusto Query Language (KQL) queries across your analytics and data sets to look for unusual activity. These organizations go through command line arguments, service creations, and parent-child process relationships, or apply techniques like Markov chaining to look for unusual deviations in parent-child process relationships or unusual network activity.
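
As one sketch of that kind of hunt, the query below looks for rare parent-child process pairs in Windows process-creation events. It assumes Event ID 4688 auditing with parent process names recorded is flowing into the SecurityEvent table; the rarity thresholds are illustrative only.

```kusto
// Hunt for rare parent-child process pairs over the last 7 days
// (assumes Event ID 4688 process-creation auditing feeds SecurityEvent).
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688
| summarize Occurrences = count(), Hosts = dcount(Computer)
    by ParentProcessName, NewProcessName
| where Occurrences < 5 and Hosts <= 2  // pairs seen rarely, on few machines
| order by Occurrences asc
```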

It's a continual progression: starting off with the basics and becoming more advanced over time as you run new emulation or simulation exercises through either red teaming or automation tools. These can help you get good baselines of your environment and look for unusual traffic that may indicate a potential compromise. Adversary emulation is where you imitate a specific attacker using known techniques discovered through data breaches. For example, we look at what happened with the SolarWinds supply chain attack (and kudos to Microsoft for all the research out there) and we say: here are the techniques these specific actors were using; let's build detections off of those so they can't use them again.

More mature organizations already have that in place, and they're moving toward what we call adversary simulation, where you look at an organization's threat models and build your attacks and techniques off of how those adversaries would operate, rather than reusing the same techniques that have previously been discovered. You're trying to simulate what an attacker would do in your environment and see whether the blue team can identify those actions.

Natalia: What are best practices for threat hunting?

Dave: Threat hunting varies based on timing and resources, and it doesn't require dedicated staff. Threat hunting can be an exercise you conduct once a week, once a month, or once a quarter. It involves going through your data and looking for unusual activity. Look at all service creations. Look at all the command line arguments being passed. A large percentage of the MITRE ATT&CK framework can be covered just by parent-child process relationships and command line auditing in the environment. Look at east-west traffic, not just north-south. Look at all your audit logs. Go through Domain Name System (DNS) traffic.
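
To pick one item from that checklist, here is a hedged sketch of a service-creation hunt. It assumes Event ID 4697 ("a service was installed") auditing is enabled and collected into the SecurityEvent table; the System32 filter is a rough heuristic, not a rule.

```kusto
// Review new service installations (Event ID 4697), a common persistence
// spot, flagging services whose binaries live outside System32.
SecurityEvent
| where TimeGenerated > ago(30d)
| where EventID == 4697
| where ServiceFileName !contains @"\Windows\System32\"
| project TimeGenerated, Computer, SubjectUserName, ServiceName, ServiceFileName
| order by TimeGenerated desc
```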

For instance, say a user in Outlook clicked an email that opened an Excel document, which triggered a macro that called PowerShell or cmd.exe. That's unusual activity you wouldn't expect from a normal user, so let's home in on it and figure out what occurred.
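
A detection for exactly that chain might look like the sketch below, again assuming 4688 events with parent process names recorded; the process lists are illustrative and worth extending for your environment.

```kusto
// Flag Office applications spawning a shell, the chain described above.
SecurityEvent
| where EventID == 4688
| where ParentProcessName has_any ("outlook.exe", "excel.exe", "winword.exe")
| where NewProcessName has_any ("powershell.exe", "cmd.exe")
| project TimeGenerated, Computer, Account, ParentProcessName,
    NewProcessName, CommandLine
```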

You can also conduct more purple teaming engagements, where you have a red team launch attacks and detection teams look through the logs at the same time to build better detections or see where you might have gaps in visibility. Companies that have threat hunting teams make it very difficult for red teamers to get around the different landmines that they’ve laid across the network.

Natalia: What should an incident response workflow look like?

Dave: An alert, or unusual activity found during a threat hunting exercise, is usually raised to somebody for analysis. A SOC analyst typically has between 30 seconds and four minutes per alarm to determine whether the alarm is a false positive or something they need to analyze. What stands out, obviously, are things like obfuscation techniques: PowerShell carrying a bunch of code that looks very unusual and has been obfuscated to try to evade endpoint protection products. Some of the more confusing ones are things like living off the land, attacks that leverage legitimate applications that are code-signed by the operating system vendor to download files and execute code.
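
For the obfuscation case, a triage query might start as simply as the sketch below, assuming 4688 command-line auditing is in place; the token list is illustrative and will miss plenty.

```kusto
// Surface PowerShell launched with an encoded command, a common
// obfuscation pattern (assumes command-line capture is enabled).
SecurityEvent
| where EventID == 4688
| where NewProcessName endswith "powershell.exe"
| where CommandLine has_any ("-enc", "-encodedcommand", "frombase64string")
| project TimeGenerated, Computer, Account, CommandLine
```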

A research phase kicks off to see what's actually going on. If it's determined that there is malicious activity, that's usually when incident response kicks in. How bad is it? Have they moved to other systems? Let's get this machine off the network and figure out everything that's happening. Let's do memory analysis. Let's figure out who the actual attacker was. Can we combine this with threat intelligence and determine the specific adversary? What are their capabilities? You start to build the timeline to ensure that you have all the right data and to determine whether it's a major breach or contained to one individual system.

We ran several incident response engagements for customers impacted by the SolarWinds supply chain attack, and the biggest challenge for those customers was that their logs didn't go back far enough, so it was very difficult for them to say definitively, with evidence, what had happened.

Natalia: What does an incident responder need to succeed?

Dave: I'd strongly recommend doing an incident response readiness assessment for your organization. I also recommend centralized logging, whether that's a SIEM, a data analytics tool, or a data lake that you can comb through. I'm a huge advocate of Sysmon. You can capture PowerShell execution, command line auditing, DNS traffic, process injection, and parent-child process relationships. I'd also suggest network logs. If you can do full packet captures, which not a lot of organizations can do, that's great. If you can pull data packets out of Secure Sockets Layer (SSL) or Transport Layer Security (TLS) sessions and do remote memory acquisition, that's also really important. Can we retrieve artifacts from systems in a very consistent way?
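
If Sysmon data is landing in a Log Analytics workspace, a starting point for pulling its process-creation events might look like this sketch. It assumes the agent is collecting the Microsoft-Windows-Sysmon/Operational channel into the generic Event table, which depends on how your collection is configured.

```kusto
// Pull recent Sysmon process-creation events (Sysmon Event ID 1)
// from the generic Event table for review.
Event
| where EventLog == "Microsoft-Windows-Sysmon/Operational"
| where EventID == 1
| project TimeGenerated, Computer, RenderedDescription
| take 100
```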

Tabletop exercises can also get executives and IT on the same page about how to handle incidents and work together. Running through very specific types of scenarios can help you figure out where you have gaps or weaknesses. When I was the Chief Security Officer at Diebold, we would run through three to four tabletop exercises a year and include our senior leadership, like our CEO and CFO, twice a year. It was eye-opening for them because they never really understood what goes into incident response and what can happen from a cyber perspective. We’d run through actual simulations and scenarios of very specific attacks and see how they would respond. Those types of scenarios really help build your team’s understanding and determine where you may need better communication, better tooling, or better ways to respond.

Natalia: What other strategies can security operators implement to try to avoid attacks?

Dave: When you look at layered defense, always improving protection is key. You don’t want to just focus on detection because you’re going to be in firefighting mode all the time. The basics really are a big deal: things like multifactor authentication, patch management, and security architecture.

Reducing the attack surface is important, such as with application control and allowed application lists. Application control is probably one of the most effective ways of shutting down most attacks out there today because it gives you a good baseline of what's allowed to run in your organization. That applies very consistently to things like the Zero Trust model. Become more of a service provider for your organization versus providing everything for your organization. Reducing your attack surface will eliminate the noise that incident responders or SOC analysts must deal with and allow them to focus on the high-fidelity signals we want to see.

One thing I see continuously when going into a lot of organizations is that they're always in firefighting mode: 90 percent of their alarms are false positives, they're suffering alarm fatigue, and their security operations center isn't improving its detections. You really need somebody on the strategy side to come in and ask: can we lock our users down in a way that doesn't hinder the business but also lowers the attack surface?

Natalia: How does vulnerability assessment strategy fit into a SOC strategy?

Dave: Software vulnerabilities and exposures are key opportunities that attackers will use. When we look at historic data breaches, those that used direct exploitation rather than phishing typically relied on common vulnerabilities and exposures (CVEs) six months old or older to gain access to a specific system. That makes it really important to reduce attack surfaces and understand where vulnerabilities are so we can make it a lot more difficult for attackers to get in.

It's not zero-day attacks that are hitting companies today. It's out-of-date systems. It's not patching appropriately. A lot of companies do well on the operating system side: they'll patch their Windows machines, their Linux machines, and their Apple machines. But they fail really hard with third-party applications, and especially the web application tier of the house: middleware and microservices. In almost every case, it comes down to ownership of the application. A lot of times, IT will own the operating system platforms and the infrastructure they run on, but business owners typically sponsor the applications, so ownership becomes a very murky area. Is it the business owners who own the updates of the applications, or is it IT? Make sure you have clear owners in charge of making sure patches go out regularly.
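
Once ownership is clear, you still need visibility into what's missing. As a hedged sketch, if the Update Management solution is writing assessment data into your workspace, a query like the one below can rank machines by outstanding security updates; table and column names follow that solution's schema.

```kusto
// Rank machines by the number of security updates still needed
// (assumes Update Management data in the Update table).
Update
| where TimeGenerated > ago(1d)
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize MissingUpdates = dcount(Title) by Computer
| order by MissingUpdates desc
```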

If you’re not going through regular vulnerability assessments and looking for the vulnerabilities in your environment, you’re very predisposed to a data breach that attackers would leverage based on missing patches or missing specific security fixes. The first few stages of an attack are the most critical because that’s where most organizations have built their defenses. In the latter phases of post-exploitation, especially as you get to the exfiltration components, most organizations don’t have good detection capabilities. It’s really important to have those detection mechanisms in place ahead of time and ensure those systems are patched.

Natalia: We often discuss the challenges facing security today. Let’s take a different approach. What gives you hope?

Dave: What gives me hope is the shift in security. Ten years ago, we would go into organizations from a penetration testing perspective and just destroy these companies. And then the next year, we’d go in and we’d destroy these companies again. Their focus was always on the technical vulnerabilities and not on what happens after attackers are in your castle. The industry has really shifted toward the mindset of we have to get better at looking for deviations of patterns of behavior to be able to respond much more effectively. The industry is definitely tracking in the right direction, and that really gives me hope.

Learn how Microsoft Security solutions can help modernize Security Operations.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
