Recognizing Security Researchers in 2020

February 3rd, 2020

Is it too early to talk about the 2020 MSRC Most Valuable Security Researchers? Five months from now, at the end of June, the program period closes for researchers to be considered for inclusion in the Most Valuable Researchers list. The top researcher list will be revealed at Black Hat USA in August. For …


Guarding against supply chain attacks—Part 2: Hardware risks

February 3rd, 2020

The challenge and benefit of technology today is that it’s entirely global in nature. This reality comes into focus when companies assess their supply chains and look for ways to identify, assess, and manage risk across the enterprise. Part 2 of the “Guarding against supply chain attacks” blog series examines the hardware supply chain, its vulnerabilities, how you can protect yourself, and Microsoft’s role in reducing hardware-based attacks.

Unpacking the hardware supply chain

A labyrinth of companies produces mobile phones, Internet of Things (IoT) devices, servers, and other technology products that improve our lives. Product designers outsource manufacturing to one or more vendors. The manufacturer buys components from known suppliers. Each supplier buys parts from its preferred vendors. Other organizations integrate firmware. During peak production cycles, a vendor may subcontract to another company or substitute its known parts supplier with a less familiar one. The result is a complex web of interdependent companies that aren’t always aware they are connected.

Tampering with hardware using interdiction and seeding

Tampering with hardware is not an easy path for attackers, but because a successful compromise can cause significant damage, it’s an important risk to track. Bad actors compromise hardware by inserting physical implants into a product component or by modifying firmware. Often these manipulations create a “back door” connection between the device and external computers that the attacker controls. Once the device reaches its final destination, adversaries use the back door to gain further access or exfiltrate data.

But first they must get their hands on the hardware. Unlike software attacks, tampering with hardware requires physical contact with the component or device.

So how do they do it? There are two known methods: interdiction and seeding. In interdiction, saboteurs intercept the hardware while it’s en route to the next factory in the production line. They unpackage and modify the hardware in a secure location, then repackage it and return it to transit toward its final destination. They need to move quickly, as shipping delays may trigger red flags.

As hard as interdiction is, it’s not nearly as challenging as seeding. Seeding attacks involve manipulating the hardware on the factory floor. To infiltrate a target factory, attackers may pose as government officials or resort to old-fashioned bribery or threats to convince an insider to act, or to give the attacker direct access to the hardware.

Why attack hardware?

Given how difficult hardware manipulation is, you may wonder why an attacker would take this approach. The short answer is that the payoff is huge. Once the hardware is successfully modified, it is extremely difficult to detect and fix, giving the perpetrator long-term access.

  • Hardware makes a good hiding place. Implants are tiny and can be attached to chips, slipped between layers of fiberglass, and designed to look like legitimate components, among other surreptitious approaches. Firmware exists outside the operating system code. Both methods are extremely difficult to detect because they bypass traditional software-based security detection tools.
  • Hardware attacks are more complex to investigate. Attackers who target hardware typically manipulate a handful of components or devices, not an entire batch. This means that unusual device activity may resemble an anomaly rather than a malicious act. The complexity of the supply chain itself also resists easy investigation. With multiple players, some of whom are subcontracted by vendors, discovering what happened and how can be elusive.
  • Hardware issues are expensive to resolve. Fixing compromised hardware often requires complete replacement of the infected servers and devices. Firmware vulnerabilities often persist even after an OS reinstall or a hard drive replacement. If tampering is widespread, physical replacement cycles and budgets typically can’t absorb the accelerated spending.

For more insight into why supply chains are vulnerable, how some attacks have been executed, and why they are so hard to detect, we recommend watching Andrew “bunnie” Huang’s presentation, Supply Chain Security: If I were a Nation State…, at BlueHat IL 2019.

Know your hardware supply chain

What can you do to limit the risk to your hardware supply chain? First, identify all the players and ask important questions:

  • Where do your vendors buy parts?
  • Who integrates the components that your vendor buys and who manufactures the parts?
  • Who do your vendors hire when they are overloaded?

Once you know who all the vendors are in your supply chain, ensure they have security built into their manufacturing and shipping processes. The National Institute of Standards and Technology (NIST) recommends that organizations “identify those systems/components that are most vulnerable and will cause the greatest organizational impact if compromised.” Prioritize resources to address your highest risks. As you vet new vendors, evaluate their security capabilities and practices as well as the security of their suppliers. You may also want to formalize random, in-depth product inspections.
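
One lightweight way to act on that NIST guidance is to score each component or vendor by how exposed it is and how much a compromise would hurt, then work the list from the top. The sketch below is illustrative only; the component names, vendors, and scores are invented.

```python
# Illustrative sketch: rank supply chain components by (vulnerability x impact)
# so limited security resources go to the highest risks first. All values here
# are invented placeholders, not real guidance.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    vendor: str
    vulnerability: int  # 1 = hard to tamper with ... 5 = easy to tamper with
    impact: int         # 1 = minor disruption ... 5 = business critical

    @property
    def risk_score(self) -> int:
        return self.vulnerability * self.impact

inventory = [
    Component("baseboard management controller", "Vendor A", 4, 5),
    Component("power supply unit", "Vendor B", 2, 2),
    Component("network card firmware", "Vendor C", 5, 4),
]

for c in sorted(inventory, key=lambda c: c.risk_score, reverse=True):
    print(f"{c.risk_score:>2}  {c.name} ({c.vendor})")
```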

Microsoft’s role in securing the hardware supply chain

As a big player in the technology sector, Microsoft engages with its hardware partners to limit the opportunities for malicious actors to compromise hardware.

Here are just a few examples of contributions Microsoft and its partners have made:

  • Microsoft researchers defined seven properties of secure connected devices. These properties are a useful tool for evaluating IoT device security.
  • The seven properties of secure connected devices informed the development of Azure Sphere, an IoT solution that includes a chip with robust hardware security, a defense-in-depth Linux-based OS, and a cloud security service that monitors devices and responds to emerging threats.
  • Secured-core PCs apply the security best practices of isolation and minimal trust to the firmware layer, or the device core, that underpins the Windows operating system.
  • Project Cerberus is a collaboration that helps protect platform firmware and detect and recover from attacks against it.

Learn more

The “Guarding against supply chain attacks” blog series untangles some of the complexity surrounding supply chain threats and provides concrete actions you can take to better safeguard your organization. Read Part 1: The big picture for an overview of supply chain risks.

Also, download the Seven properties of secure connected devices and read NIST’s Cybersecurity Supply Chain Risk Management.

Stay tuned for these upcoming posts:

  • Part 3—Examines ways in which software can become compromised.
  • Part 4—Looks at how people and processes can expose companies to risk.
  • Part 5—Summarizes our advice with a look to the future.

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


New capabilities for eDiscovery now available

February 3rd, 2020

With the exponential growth of data, organizations need broader visibility into ever-increasing case activities, which requires extending eDiscovery to chat-based communication and collaboration tools.

New capabilities help you manage eDiscovery in Microsoft Teams, including the ability to apply legal hold to files and messages in private Teams channels. In addition, eDiscovery for Yammer is generally available today, while Advanced eDiscovery for Yammer is now available in public preview.

To learn more about all the new updates for eDiscovery in Microsoft 365, read Managing eDiscovery for modern collaboration.



Announcing the Xbox Bounty program

January 30th, 2020

The new Xbox Bounty program invites gamers, security researchers, and technologists around the world to help identify security vulnerabilities in the Xbox network and services, and to share them with the Microsoft Xbox team through Coordinated Vulnerability Disclosure (CVD).


Changing the Monolith—Part 3: What’s your process?

January 30th, 2020

In my 25-year journey, I have led security and privacy programs for corporations and provided professional advisory services for organizations of all types. Often, I encounter teams frantically running around in their own silos, trying to connect the dots and yet unsure if those are the right dots. Connecting the dots becomes exponentially difficult in an environment where everyone is trying to achieve a different goal.

Here are a few tips to create teams unified around a common mission:

1. Define the mission and implement it like any other business plan

First, you must know what you are trying to achieve. Are you protecting trade secrets? Limiting reputation damage? Reducing the chance of unauthorized access to sensitive data? Complying with all local, regional, and national data protection laws? Trying to keep employees safe? Keeping patients, passengers, customers, and business partners safe? Is the answer “all of the above”? Define an order of risk magnitude.

Focus on what success looks like, identify quick wins, and get the opinions of executive leadership. What do they view as success? Don’t settle for unrealistic answers such as “We want 100 percent security.” Explain what is realistic and offer your approach as a business plan.

2. Define success—be able to articulate what it is and how it can be measured

When you start any endeavor, how do you determine when it is finished? While information security has a lifecycle that never ends, certain foundations must be established to foster a culture of security and privacy. Success could look like reducing risk to trade secrets, reducing the impact of third-party risk, or protecting an organization’s reputation.

However success is defined for your mission, it needs to be measurable. If you can’t summarize success in an elevator pitch, a monthly CEO report, or a board presentation, you haven’t defined it appropriately.

3. Leverage a methodology and make it part of the game plan

Think of the methodology as a game plan. There are never enough people, never enough time, and only a finite amount of money. Attempting to do everything all at once is a fool’s errand. Once you know what you’re trying to achieve, you can create a plan of attack. The plan should follow a proven set of steps that move you in the right direction.

A popular methodology right now is the Zero Trust model, which has been waiting in the wings for its big debut for over a decade. Zero Trust has made it to the spotlight largely because the conventional perimeter has been deemed a myth. So, what is your approach to achieving security, compliance, and privacy once you have chosen a methodology?


4. Market the plan

One of the main hurdles I constantly witness is that the larger the organization, the more isolated the business units—especially in IT. In many cases, cybersecurity leadership does not engage in regular communication with the various factions of IT: application development, user support, database teams, infrastructure, and cloud teams, to name a few. And almost always outside their purview reside the HR, Legal, Finance, Procurement, Corporate Communications, and Physical Security departments.

In a previous role, I found success by borrowing employees from some of these other departments. This helped build political capital for the cybersecurity team, land the security awareness message with the wider employee population, and connect with the aforementioned units within IT and with business leadership. To do the same, start by building a plan and defining your message. Repeat the message often enough that it’s recognized and people are energized to help drive the mission forward.

5. Teamwork in the form of governance

Once “inter-IT” and business relationships are established, governance can commence—that ultimately means creating process and policy. Involve as many stakeholders as possible and document everything you can. Make everyone aware of their role in the mission and hold them accountable.

Take for example a mobile device policy. Whose input should be solicited? At a minimum, you should involve HR, Legal, Finance, the CIO, and the user community. What do they want and need? When everyone agrees and all requirements are negotiated, it’s amazing how quickly a policy is ratified and becomes official.

Cybersecurity, privacy, compliance, and risk management should be managed like any other business, and any business values process. Without process, products don’t get manufactured or shipped, patients don’t heal, and the supply chain grinds to a halt. Without process, there can be no consensus on how to protect the organization.

Stay tuned

Stay tuned for the next installment of my series, Changing the Monolith: People, Process, and Technology. In the meantime, check out the first two posts in the series, on people: Part 1: Building alliances for a secure culture and Part 2: Whose support do you need?

Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Cyber-risk assessments—the solution for companies in the Fourth Industrial Revolution

January 29th, 2020

Technology continues to play a critical role in shaping the global risks landscape for individuals, governments, and businesses. According to the World Economic Forum’s Global Risks Report 2020, cyberattacks are ranked as the second risk of greatest concern for business globally over the next 10 years. Cyberattacks on critical infrastructure—rated the fifth top risk in 2020 by the expert network—have become the new normal across sectors such as energy, healthcare, and transportation. This confirms a pattern recorded in previous years, with cyber risks consolidating their position alongside environmental risks in the high-impact, high-likelihood quadrant of the report’s Global Risks Landscape.

The cyberattack surface (the totality of all information system and internet exposure) is growing at a rapid pace. In parallel, inherently borderless cybercrime is impacting victims around the globe, with the authority of law enforcement often constrained by jurisdiction and by the limitations of the legal processes used to request information beyond national borders. Moreover, cybercrime-as-a-service is a growing business model, as the increasing sophistication of tools on the Darknet makes malicious services more affordable and easily accessible to anyone.

In this context, a cyber-risk assessment is crucial to any organization’s risk management strategy. A cyber-risk assessment provides an informed overview of an organization’s cybersecurity posture and supplies data for cybersecurity-related decisions. A well-managed assessment process prevents costly waste of time, effort, and resources and enables informed decision-making.

Many regulatory instruments, including the European Union General Data Protection Regulation (GDPR) and the Data Protection Act (DPA) 2018 in the United Kingdom, require risk assessments to be conducted. Any organization with a digital footprint should understand its cyber preparedness to ensure that leadership does not underestimate or overlook risks that could cause significant damage.

A cybersecurity-focused culture

Yet today, cybersecurity awareness is largely insufficient, and there is no standard approach among investors and corporate leadership for evaluating the cybersecurity preparedness of their own companies or their portfolio companies. A cybersecurity-focused culture, based on cyber expertise and awareness, is vital to prioritizing cybersecurity in the investment process.

Including cybersecurity risk assessment in the investment and decision-making process is a rather new approach. The World Economic Forum along with leaders and cybersecurity experts in the investment industry have developed a due care standard to guide investor responsibility in terms of cybersecurity. Tailored to investors’ needs and principle-based, it aims to influence behavioral change rather than merely prescribe specific action to be taken.

According to a World Economic Forum report, adequate cybersecurity expertise is foundational and vital to exercising the cyber due care principles. Investors should ensure requisite cybersecurity expertise is available to them and their investment portfolio companies either internally or through external experts. An investor’s attention to cybersecurity should extend well beyond regulatory compliance and legal obligations and include regular briefings on evolving cyber risks.

Expertise should evolve to guarantee optimal efforts to stay abreast of cybersecurity developments. Overall, investors are urged to foster a cybersecurity awareness culture as most businesses, investment targets, and their key assets are either becoming digital or are already in the digital domain.

Principles to follow

Incorporate a cyber-risk tolerance—The investor incorporates cyber-risk tolerance into their portfolio risk methodology similar to other types of risks monitored, such as financial and management risks. This cyber-risk tolerance threshold indicates the investor’s risk appetite and serves as a reference when making investment decisions.

Conduct cyber due diligence—The investor conducts a business-relevant cybersecurity assessment of the target company in terms of people, processes and technology, as part of the due diligence evaluation and weighs the potential cyber risks against the valuation and strategic benefits of investment.

Determine appropriate incentive structure—In the early stage of investment negotiations, the investor clearly defines ongoing cybersecurity expectations, benchmarks, and incentives for portfolio companies within investment mandates and term sheets.

Secure integration and development—The investor develops and follows systematic action plans to securely integrate the investment target according to the nature of the investment. These action plans span the secure integration of people, processes, and technology, as well as define the support that the investor will offer to develop the target’s cybersecurity capabilities. The extent of integration may vary according to the type of investor (financial vs. strategic) and the motivation for the investment.

Regularly review and encourage collaboration—The investor reviews the cybersecurity capabilities of its portfolio companies on a regular basis. These reviews assess adherence to the cybersecurity requirements set out by the investor and serve as a basis for sharing cybersecurity challenges, best practices, and lessons learned across the investor’s portfolio.

Investing in innovation is one way to reduce the likelihood of unexpected disruption, identify “blue oceans” (markets associated with high potential profits), and contribute to achieving desired returns. Whereas entrepreneurs drive innovation and experimentation, investors play an important role in helping them to grow, optimize, and mature their businesses. Helping entrepreneurs to prioritize cybersecurity is one significant way in which investors can increase the likelihood of long-term success and a product’s resilience in the market, thereby strengthening the brand name and consumer trust.

When investing in a technology company, investors need to consider the degree of cyber-risk exposure to understand how to manage and mitigate it. Investors play a critical role in leading their investment portfolio companies towards better security consideration and implementation.

Cyber expertise comprises not only technical know-how but also cybersecurity awareness in governance and investment. The principles and the cybersecurity due diligence assessment framework are designed for investors who want to include cybersecurity among the criteria for their investment considerations and decisions. One of the main barriers to prioritizing cybersecurity is the lack of cyber expertise in the market. Yet every investor who understands the importance of cybersecurity in our technological age can ask the right questions to assess and understand a target’s cybersecurity preparedness, and thus play a significant role in securing our shared digital future.


Afternoon Cyber Tea—The State of Cybersecurity: How did we get here? What does it mean?

January 29th, 2020

Every year the number and scale of cyberattacks grows. Marc Goodman, a global security strategist, futurist, and author of the book, Future Crimes: Everything is Connected, Everyone is Vulnerable, and What We Can Do About It, thinks a lot about how we got here and what it means, which is why he was invited to be the first guest on my podcast series, Afternoon Cyber Tea with Ann Johnson.

Marc has a long history in law enforcement, starting as a police officer in Los Angeles and more recently as a consultant to Interpol, The United Nations, NATO, the U.S. Federal Government, and local U.S. law enforcement. His background and experience give him a shrewd perspective on the threats that governments and businesses face now and in the future.

In our conversation, Marc and I discussed what drives cyberattack numbers to climb every year and why data is a risk factor. We also peered into the future and examined some of the threats that businesses and governments should begin preparing for now. Did you know that today the average household has at least 15 connected devices? That number is expected to rapidly grow to 50. We talked about what a truly connected world means for defenders. I really appreciate the way Marc was able to humanize the risks associated with the Internet of Things (IoT).

Most importantly we identified steps, such as Multi-Factor Authentication (MFA) and passwordless technology, that organizations can take to mitigate these threats. I hope you will take a moment to listen in on our conversation. Listen to the first episode of Afternoon Cyber Tea with Ann Johnson on Apple Podcasts or Podcast One.

What’s next

In this important cyber series, I’ll talk with cybersecurity influencers about trends shaping the threat landscape and explore the risk and promise of systems powered by artificial intelligence (AI), IoT, and other emerging tech.

You can listen to Afternoon Cyber Tea with Ann Johnson on:

  • Apple Podcasts—You can also download the episode by clicking the Episode Website link.
  • Podcast One—Includes option to subscribe, so you’re notified as soon as new episodes are available.
  • CISO Spotlight page—Listen alongside our CISO Spotlight episodes where customers and security experts discuss similar topics such as Zero Trust, compliance, going passwordless, and more.

Also bookmark the Security blog to keep up with our expert coverage on security matters, follow us at @MSFTSecurity for the latest news and updates on cybersecurity, and reach out to me on LinkedIn or Twitter if you have guest or topic suggestions.


5 identity priorities for 2020

January 28th, 2020

Today, Joy Chik, Corporate Vice President of Identity, shared five priorities central to security that organizations should focus on in 2020 as they digitally transform. These priorities, based on many conversations with our customers, are:

  1. Connect all applications and cloud resources to improve access controls and the user experience.
  2. Empower developers to integrate identity into their apps and improve security.
  3. Go passwordless to make security effortless for users.
  4. Enable boundary-less collaboration and automated access lifecycle for all users.
  5. Start your Zero Trust journey to protect your organization as you digitally transform.

To learn more about these priorities, and how decentralized identity is poised to offer greater verifiability and privacy, read Joy’s post, 5 identity priorities for 2020—preparing for what’s next.

Also bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Data privacy is about more than compliance—it’s about being a good world citizen

January 28th, 2020

Happy Data Privacy Day! Begun in 2007 in the European Union (E.U.) and adopted by the U.S. in 2008, Data Privacy Day is an international effort to encourage better protection of data and respect for privacy. It’s a timely topic given the recent enactment of the California Consumer Privacy Act (CCPA). Citizens and governments have grown concerned about the amount of information that organizations collect, what they are doing with the data, and ever-increasing security breaches. And frankly, they’re right. It’s time to improve how organizations manage data and protect privacy.

Let’s look at some concrete steps you can take to begin that process in your organization. But first, a little context.

The data privacy landscape

Since Data Privacy Day commenced in 2007, the amount of data we collect has increased exponentially. In fact we generate “2.5 quintillion bytes of data per day!” Unfortunately, we’ve also seen a comparable increase in security incidents. There were 5,183 breaches reported in the first nine months of 2019, exposing a total of 7.9 billion records. According to the RiskBased Data Breach QuickView Report 2019 Q3, “Compared to the 2018 Q3 report, the total number of breaches was up 33.3 percent and the total number of records exposed more than doubled, up 112 percent.”

In response to these numbers, governments across the globe have passed or are debating privacy regulations. A few of the key milestones:

  • Between 1998 and 2000, the E.U. and the U.S. negotiated Safe Harbor, a set of privacy principles governing how to protect data transferred across the Atlantic.
  • In 2015, the European Court of Justice overturned Safe Harbor.
  • In 2016, Privacy Shield replaced Safe Harbor and was approved by the European Commission.
  • In 2018, the General Data Protection Regulation (GDPR) took effect in the E.U.
  • On January 1, 2020, CCPA took effect for businesses that operate in California.

Last year, European regulators levied 27 GDPR fines totaling €428,545,407 (over $472 million USD). California will also levy fines for violations of CCPA. Compliance is clearly important if your business resides in a region, or employs people in a region, protected by privacy regulation. But protecting privacy is also the right thing to do. Companies that stand on the side of protecting the consumer’s data can differentiate themselves and earn customer loyalty.

Don’t build a data privacy program, build a data privacy culture

Before you get started, recognize that improving how your organization manages personal data means building a culture that respects privacy. Break down silos and engage people across the company. Legal, Marketing, SecOps, IT, Senior Managers, Human Resources, and others all play a part in protecting data.

Embrace the concept that privacy is a fundamental human right—Privacy is recognized as a human right in the U.N. Declaration of Human Rights and the International Covenant on Civil and Political Rights, among other treaties. It’s also built into the constitutions and governing documents of many countries. As you prepare your organization to comply with new privacy regulations, let this truth guide your program.

Understand the data you collect, where it is stored, how it is used, and how it is protected—This is vital if you’re affected by CCPA or GDPR, which require that you disclose to users what data you are collecting and how you are using it. You’re also required to provide data or remove it upon customer request. And I’m not just talking about the data that customers submit through a form. If you’re using a tool to track and collect online user behavior, that also counts.

This process may uncover unused data. If so, revise your data collection policies to improve the quality of your data.

Determine which regulations apply to your business—Companies within the E.U., as well as companies that do business with customers in the E.U. or employ E.U. citizens, are subject to GDPR. CCPA applies to companies that do business in California and meet one of the following requirements (see the sketch after this list):

  • Have gross annual revenue of more than $25 million;
  • Derive more than 50 percent of their annual income from the sale of California consumers’ personal information; or
  • Buy, sell, or share the personal information of more than 50,000 California consumers annually.
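
As a purely illustrative sketch (not legal advice), the three CCPA applicability tests above can be expressed as a simple check:

```python
# Purely illustrative, not legal advice: encode the three CCPA applicability
# tests listed above for a company that does business in California.
def ccpa_applies(does_business_in_california: bool,
                 gross_annual_revenue_usd: float,
                 pct_income_from_selling_ca_personal_info: float,
                 ca_consumers_bought_sold_or_shared: int) -> bool:
    if not does_business_in_california:
        return False
    return (
        gross_annual_revenue_usd > 25_000_000
        or pct_income_from_selling_ca_personal_info > 50
        or ca_consumers_bought_sold_or_shared > 50_000
    )

# Example: a $30M company operating in California meets the revenue test.
print(ccpa_applies(True, 30_000_000, 0, 10_000))  # True
```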

Beyond California and the E.U., India is debating a privacy law, and Brazil’s regulations, Lei Geral de Proteção de Dados (LGPD), will go into effect in August 2020. There are also several privacy laws in Asia that may be relevant.

Hire, train, and connect people across your organization—To comply with privacy regulations, you’ll need processes and people in place to address these two requirements:

  1. Californians and E.U. citizens are guaranteed the right to know what personal information is being collected about them; to know whether their personal information is sold or disclosed and to whom; and to access their personal information.
  2. Under both regulations, organizations are accountable for responding to consumers’ personal information access requests within a finite timeframe (see the sketch below).
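
A minimal sketch of the kind of tracking that requirement implies; the 45-day window shown is just an example parameter, not legal guidance:

```python
# Minimal sketch: track a personal information access request against a
# response deadline. The day count is an example parameter you would set
# from the regulation that applies to you, not an authoritative value.
from datetime import date, timedelta

def response_deadline(received: date, days_allowed: int) -> date:
    return received + timedelta(days=days_allowed)

print(response_deadline(date(2020, 1, 28), days_allowed=45))  # 2020-03-13
```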

The GDPR requires many organizations to appoint a Data Protection Officer to help ensure compliance with the law. But to create an organization that respects privacy, go beyond compliance. New projects and initiatives should be designed with privacy in mind from the ground up. Marketing will need to include privacy in campaigns, and SecOps and IT will need to ensure proper security is in place to protect the data that is collected. Build a cross-discipline team with privacy responsibilities, and institute regular training so that your employees understand how important it is.

Be transparent about your data collection policies—Data regulations require that you make clear your data collection policies and provide users a way to opt out (CCPA) or opt in (GDPR). Your privacy page should let users know why the data collection benefits them, how you will use their data, and to whom you sell it. If they sell personal information, California businesses will need to include a “Do not sell my personal information” call to action on the homepage.

A transparent privacy policy creates an opportunity for you to build trust with your customers. Prove that you support privacy as a human right and communicate your objectives in a clear and understandable way. Done well, this approach can differentiate you from your competitors.

Extend security risk management practices to your supply chain—Both the CCPA and the GDPR require that organizations put practices in place to protect customer data from malicious actors. You must also report breaches in a timely manner. If you’re found to be noncompliant, large fines can be levied.

As you implement tools and processes to protect your data, recognize that your supply chain also poses a risk. Hackers attack software updates, software frameworks, libraries, and firmware as a means of infiltrating otherwise vigilant organizations. As you strengthen your security posture to better protect customer data, be sure to understand your entire hardware and software supply chain. Refer to the National Institute of Standards and Technology for best practices. Microsoft guidelines for reducing your risk from open source may also be helpful.

Microsoft can help

Microsoft offers several tools and services to help you comply with regional and country level data privacy regulations, including CCPA and GDPR. Bookmark the Security blog and the Compliance and security series to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity and connect with me on LinkedIn.


New privacy assessments now included in Microsoft Compliance Score

January 27th, 2020

Keeping up with rapidly changing regulatory requirements has become one of the biggest challenges organizations face today. Just as companies finished preparing for the General Data Protection Regulation (GDPR), California’s privacy regulation—the California Consumer Privacy Act (CCPA)—went into effect on January 1, 2020. And in August 2020, Brazil’s own GDPR-like regulation, Lei Geral de Proteção de Dados (LGPD), will start to be enforced.

To help you take a proactive role in getting ahead of privacy compliance, we’re announcing new privacy-focused assessments available in the public preview of Microsoft Compliance Score. These new assessments help you assess your compliance posture and provide guidance to implement more effective controls for CCPA, LGPD, ISO/IEC 27701:2019, and SOC 1 Type 2 and SOC 2 Type 2.

To learn more, read Microsoft Compliance Score helps address the ever-changing data privacy landscape.


Azure Security Benchmark—90 security and compliance best practices for your workloads in Azure

January 23rd, 2020

The Azure security team is pleased to announce that the Azure Security Benchmark v1 (ASB) is now available. ASB is a collection of over 90 security best-practice recommendations you can employ to increase the overall security and compliance of all your workloads in Azure.

The ASB controls are based on industry standards and best practices, such as those from the Center for Internet Security (CIS). In addition, ASB preserves the value provided by industry-standard control frameworks that have an on-premises focus and makes them more cloud-centric. This enables you to apply standard security control frameworks to your Azure deployments and extend security governance practices to the cloud.

ASB v1 includes 11 security controls inspired by, and mapped to, the CIS 7.1 control framework. Over time we’ll add mappings to other frameworks, such as NIST.

ASB also improves the consistency of security documentation across Azure services by establishing a common framework in which all security recommendations are represented in the same format.

Documentation for each of the 11 controls contains mappings to industry standard benchmarks (such as CIS), details and rationale for the recommendations, and links to configuration information that will enable the recommendation.

Image showing protection of critical web applications. Azure ID, CIS IDs, and Responsibility.
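
As a rough illustration of that documentation format, one ASB recommendation could be represented as a record like the following. The Azure ID, CIS IDs, and Responsibility fields mirror the columns shown above, but the values are invented for this example and are not the official content or schema.

```python
# Hypothetical record for one ASB recommendation; values are invented for
# illustration and are not Microsoft's published content.
asb_recommendation = {
    "azure_id": "x.y",                      # ASB control identifier
    "control": "Logging and Monitoring",    # one of the 11 ASB controls
    "cis_ids": ["6.2", "6.3"],              # mapped CIS 7.1 sub-controls
    "recommendation": "Configure central security log management",
    "rationale": "Aggregate logs so threats can be detected and investigated",
    "responsibility": "Customer",           # customer vs. Microsoft
    "guidance_url": "https://docs.microsoft.com/...",  # link to configuration docs
}

print(asb_recommendation["cis_ids"])
```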

You can find the full set of controls and the recommendations at the Azure Security Benchmark website. To learn more, see Microsoft intelligent security solutions.

Image of Azure security benchmarks documentation in the Azure security center.

ASB is integrated with Azure Security Center, allowing you to track, report on, and assess your compliance against the benchmark by using the Security Center compliance dashboard, which includes a tab like the one shown below. In addition, ASB impacts the Secure Score in Azure Security Center for your subscriptions.

Image showing regulatory compliance standards in the Azure security center.

ASB is the foundation for future Azure service security baselines, which will provide a view of benchmark recommendations that are contextualized for each Azure service. This will make it easier for you to implement ASB for the Azure services that you’re actually using. Also, keep an eye out for our release of mappings to NIST and other security frameworks.

Send us your feedback

We welcome your feedback on ASB! Please complete the Azure Security Benchmark feedback form. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Microsoft and Zscaler help organizations implement the Zero Trust model

January 23rd, 2020

While digital transformation is critical to business innovation, delivering security to cloud-first, mobile-first architectures requires rethinking traditional network security solutions. Some businesses have been successful in doing so, while others still remain at risk of very costly breaches.

MAN Energy Solutions, a leader in the marine, energy, and industrial sectors, has been driving cloud transformation across their business. As with any transformation, there were challenges—as they began to adopt cloud services, they quickly realized that the benefits of the cloud would be offset by poor user experience, increasing appliance and networking costs, and an expanded attack surface.

In 2017, MAN Energy Solutions implemented “Blackcloud”—an initiative that establishes secure, one-to-one connectivity between each user and the specific private apps that the user is authorized to access, without ever placing the user on the larger corporate network. A virtual private network (VPN) is no longer necessary to connect to these apps. This mitigates lateral movement of bad actors or malware.

This approach is based on the Zero Trust security model.

Understanding the Zero Trust model

In 2019, Gartner released a Market Guide describing its Zero Trust Network Access (ZTNA) model and making a strong case for its efficacy in connecting employees and partners to private applications, simplifying mergers, and scaling access. Sometimes referred to as software-defined perimeter, the ZTNA model includes a “broker” that mediates connections between authorized users and specific applications.

The Zero Trust model grants application access based on identity and context of the user, such as date/time, geolocation, and device posture, evaluated in real-time. It empowers the enterprise to limit access to private apps only to the specific users who need access to them and do not pose any risk. Any changes in context of the user would affect the trust posture and hence the user’s ability to access the application.

Access governance is done via policy and enabled by two end-to-end encrypted, outbound micro-tunnels that are spun up on demand (not static IP tunnels, as with a VPN) and stitched together by the broker. This ensures apps are never exposed to the internet, thus helping to reduce the attack surface.
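
A minimal sketch of that kind of identity-plus-context decision, with invented attribute names and rules (real brokers evaluate far richer signals):

```python
# Minimal sketch with invented attributes and rules: a ZTNA-style broker grants
# access to a specific private app based on identity and real-time context,
# never on network location.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    app: str
    mfa_passed: bool
    device_compliant: bool   # posture reported by endpoint management
    geo: str                 # country code observed for the connection

AUTHORIZED = {("alice@contoso.com", "payroll-app"), ("bob@contoso.com", "erp-app")}
ALLOWED_GEOS = {"DE", "DK", "US"}

def broker_decision(req: AccessRequest) -> bool:
    """Allow only if identity and every context signal check out."""
    return (
        (req.user, req.app) in AUTHORIZED
        and req.mfa_passed
        and req.device_compliant
        and req.geo in ALLOWED_GEOS
    )

req = AccessRequest("alice@contoso.com", "payroll-app", True, True, "US")
print(broker_decision(req))  # True; flip any signal and access is denied
```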

As enterprises witness and respond to the impact of increasingly lethal malware, they’re beginning to transition to the Zero Trust model with pilot initiatives, such as securing third-party access, simplifying M&As and divestitures, and replacing aging VPN clients. Based on the 2019 Zero Trust Adoption Report by Cybersecurity Insiders, 59 percent of enterprises plan to embrace the Zero Trust model within the next 12 months.

Implement the Zero Trust model with Microsoft and Zscaler

Different organizational requirements, existing technology implementations, and security stages affect how the Zero Trust model implementation takes place. Integration between multiple technologies, like endpoint management and SIEM, helps make implementations simple, operationally efficient, and adaptive.

Microsoft has built deep integrations with Zscaler—a cloud-native, multitenant security platform—to help organizations with their Zero Trust journey. These technology integrations empower IT teams to deliver a seamless user experience and scalable operations as needed, and include:

Azure Active Directory (Azure AD)—Enterprises can leverage powerful authentication tools—such as Multi-Factor Authentication (MFA), conditional access policies, risk-based controls, and passwordless sign-in—offered by Microsoft, natively with Zscaler. Additionally, SCIM integrations keep user access up to date. When a user is terminated, privileges are modified, and this information flows automatically to the Zscaler cloud, where immediate action can be taken based on the update.

Microsoft Endpoint Manager—With Microsoft Endpoint Manager, client posture can be evaluated at the time of sign-in, allowing Zscaler to allow or deny access based on the security posture. Microsoft Endpoint Manager can also be used to install and configure the Zscaler app on managed devices.

Azure Sentinel—Zscaler’s Nanolog Streaming Service (NSS) can seamlessly integrate with Azure to forward detailed transactional logs to the Azure Sentinel service, where they can be used for visualization and analytics, as well as threat hunting and security response.

Implementation of the Zscaler solution involves deploying lightweight gateway software on endpoints and in front of the applications in AWS and/or Azure. Per policies defined in Microsoft Endpoint Manager, Zscaler creates secure segments between the user devices and apps through the Zscaler security cloud, where brokered micro-tunnels are stitched together in the location closest to the user.

Infographic showing Zscaler Security and Policy Enforcement. Internet Destinations and Private Apps appear in clouds. Azure Sentinel, Microsoft Endpoint Manager, and Azure Active Directory appear to the right and left. In the center is a PC.

If you’d like to learn more about secure access to hybrid apps, view the webinar on Powering Fast and Secure Access to All Apps with experts from Microsoft and Zscaler.

Rethink security for the cloud-first, mobile-first world

The advent of cloud-based apps and increasing mobility are key drivers forcing enterprises to rethink their security model. According to Gartner’s Market Guide for Zero Trust Network Access (ZTNA) “by 2023, 60 percent of enterprises will phase out most of their remote access VPNs in favor of ZTNA.” Successful implementation depends on using the correct approach. I hope the Microsoft-Zscaler partnership and platform integrations help you accomplish the Zero Trust approach as you look to transform your business to the cloud.

For more information on the Zero Trust model, visit the Microsoft Zero Trust page. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


Access Misconfiguration for Customer Support Database

January 22nd, 2020

Today, we concluded an investigation into a misconfiguration of an internal customer support database used for Microsoft support case analytics. While the investigation found no malicious use, and although most customers did not have personally identifiable information exposed, we want to be transparent about this incident with all customers and reassure them that we are taking …


sLoad launches version 2.0, Starslord

January 21st, 2020

sLoad, the PowerShell-based Trojan downloader notable for its almost exclusive use of the Windows BITS service for malicious activities, has launched version 2.0. The new version comes on the heels of a comprehensive blog we published detailing the malware’s multi-stage nature and use of BITS as an alternative protocol for data exfiltration and other behaviors.

With the new version, sLoad has added the ability to track the stage of infection on every affected machine. Version 2.0 also packs an anti-analysis trick that could identify and isolate analyst machines vis-à-vis actual infected machines.

We’re calling the new version “Starslord” based on strings in the malware code, which has clues indicating that the name “sLoad” may have been derived from a popular comic book superhero.

We discovered the new sLoad version over the holidays, in our continuous monitoring of the malware. New sLoad campaigns that use version 2.0 follow an attack chain similar to the previous version, with some updates, including dropping the dynamic list of command-and-control (C2) servers and upload of screenshots.

Tracking the stage of infection

With the ability to track the stage of infection, malware operators with access to the Starslord backend could build a detailed view of infections across affected machines and segregate these machines into different groups.

The tracking mechanism exists in the final stage, which, as with the old version, loops infinitely (with a sleep interval of 2,400 seconds, higher than the 1,200 seconds in version 1.0). In line with the previous version, at every iteration of the final stage, the malware uses a download BITS job to exfiltrate stolen system information and receive additional payloads from the active C2 server.

As we noted in our previous blog, creating a BITS job with an extremely large RemoteURL parameter that includes non-encrypted system information, as the old sLoad version did, stands out and is relatively easy to detect. However, with Starslord, the system information is encoded into Base64 data before being exfiltrated.
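
As a rough illustration (a toy heuristic, not a production detection rule or a Microsoft Defender query), a defender might flag BITS job remote URLs that are unusually long or that carry Base64-looking query values like those described here:

```python
# Toy heuristic only, not a production detection rule: flag BITS job RemoteURL
# values that are unusually long or carry Base64-looking payloads, the pattern
# described above for sLoad/Starslord exfiltration jobs.
import base64
import binascii
from urllib.parse import urlparse, parse_qs

def looks_like_base64(value: str, min_len: int = 64) -> bool:
    if len(value) < min_len:
        return False
    try:
        base64.b64decode(value, validate=True)
        return True
    except (binascii.Error, ValueError):
        return False

def suspicious_remote_url(url: str, max_len: int = 512) -> bool:
    if len(url) > max_len:
        return True
    params = parse_qs(urlparse(url).query)
    return any(looks_like_base64(v) for values in params.values() for v in values)

print(suspicious_remote_url("https://example.com/upd?d=" + "QUJD" * 40))  # True
```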

The file received by Starslord in response to the exfiltration BITS job contains a tuple of three values separated by an asterisk (*):

  • Value #1 is a URL to download additional payload using a download BITS job
  • Value #2 specifies the action, which can be any of the following, to be taken on the payload downloaded from the URL in value#1:
    • “eval” – Run (possibly very large) PowerShell scripts
    • “iex” – Load and invoke (possibly small) PowerShell code
    • “run” – Download an encoded PE file, decode it, and run the decoded executable
  • Value #3 is an integer that can signify the stage of infection for the machine

Supplying the payload URL as part of value #1 allows the malware infrastructure to house additional payloads on different servers from the active C2 servers responding to the exfiltration BITS jobs.

Value #3 is the most noteworthy component in this setup. If the final stage succeeds in downloading additional payload using the URL provided in value #1 and executing it as specified by the command in value #2, then a variable is used to form the string “td”:”<value #3>”,”tds”:”3”. However, if the final stage fails to download and execute the payload, then the string formed is “td”:”<value #3>”,”tds”:”4”.

The infinite loop ensures that the exfiltration BITS jobs are created at a fixed interval. The backend infrastructure can then pick up the pulse from each infected machine. However, unlike the previous version, Starslord includes the said string in succeeding iterations of data exfiltration. This means that the malware infrastructure is always aware of the exact stage of the infection for a specific affected machine. In addition, since the numeric value for value #3 in the tuple is always governed by the malware infrastructure, malware operators can compartmentalize infected hosts and could potentially set off individual groups on unique infection paths. For example, when responding to exfiltration BITS jobs, malware operators can specify a different URL (value #1) and action (value #2) for each numeric value for value #3 of the tuple, essentially deploying a different malware payload for different groups.
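
To make the format concrete, here is an analyst-side sketch (the function names are mine, not taken from the malware) that parses the three-value tuple and builds the status string described above:

```python
# Analyst-side sketch; function and variable names are mine, not from the malware.
def parse_tuple(response: str):
    """Split the '*'-separated response into (payload URL, action, stage)."""
    url, action, stage = response.split("*", 2)
    if action not in {"eval", "iex", "run"}:
        raise ValueError(f"unexpected action: {action}")
    return url, action, int(stage)

def status_string(stage: int, succeeded: bool) -> str:
    """Build the td/tds string reported back on the next iteration."""
    return f'"td":"{stage}","tds":"{3 if succeeded else 4}"'

url, action, stage = parse_tuple("hxxps://example[.]test/payload.jpg*run*7")
print(url, action, stage)           # hxxps://example[.]test/payload.jpg run 7
print(status_string(stage, True))   # "td":"7","tds":"3"
```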

Anti-analysis trap

Starslord comes built in with a function named checkUniverse, which is in fact an anti-analysis trap.

As mentioned in our previous blog post, the final stage of sLoad is a piece of PowerShell code obtained by decoding one of the dropped .ini files. The PowerShell code appears in memory as a value assigned to a variable that is then executed using the Invoke-Expression cmdlet. Because this is a huge piece of decrypted PowerShell code that never hits the disk, security researchers would typically dump it into a file on the disk for further analysis.

The sLoad dropper PowerShell script drops four files:

  • a randomly named .tmp file
  • a randomly named .ps1 file
  • a main.ini file
  • a domain.ini file

It then creates a scheduled task to run the .tmp file every 3 minutes, similar to the previous version. The .tmp file is a proxy that does nothing but run the .ps1 file, which decrypts the contents of main.ini into the final stage. The final stage then decrypts the contents of domain.ini to obtain the active C2 and perform other activities as documented.

As a unique anti-analysis trap, Starslord ensures that the .tmp and .ps1 files have the same random name. When an analyst dumps the decrypted code of the final stage into a file in the same folder as the .tmp and .ps1 files, the analyst could end up naming it something other than the original random name. When this dumped code is run from such a differently named file on disk, a function named checkUniverse returns the value 1, and the analyst gets trapped.
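
A simplified, hypothetical rendering of the idea behind that check (the real checkUniverse is PowerShell, and its exact logic isn’t reproduced here): the code compares the name it is running under against the random name the dropper assigned, so a renamed analyst dump fails the comparison.

```python
# Hypothetical, simplified rendering of the concept behind checkUniverse; the
# real implementation is PowerShell and is not reproduced here. A copy that an
# analyst saved under a different name fails the comparison.
import os
import sys

EXPECTED_NAME = "kx3f9q2a"  # placeholder for the random name shared by .tmp/.ps1

def check_universe() -> int:
    running_as = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    return 0 if running_as == EXPECTED_NAME else 1  # 1 marks a trapped analyst

print(check_universe())
```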

What comes next is not very desirable for a security researcher: being profiled by the malware operator.

If the host belongs to a trapped analyst, the file downloaded from the backend in response to the exfiltration BITS job, if any, is discarded and overwritten by the following new tuple:

hxxps://<active C2>/doc/updx2401.jpg*eval*-1

In this case, the value #1 of the tuple is a URL that’s known to the backend for being associated with trapped hosts. BITS jobs from trapped hosts don’t always get a response. If they do, it’s a copy of the dropper PowerShell script. This could be to create an illusion that the framework is being updated as the URL in value #1 of the tuple suggests (hxxps://<active C2>/doc/updx2401.jpg).

However, the string that is included in all successive exfiltration BITS jobs from such host is “td”:”-1”,”tds”:”3”, eventually leading to all such hosts getting grouped under value “td”:”-1”. This forms the group of all trapped machines that are never delivered a payload. For the rest, so far, evidence suggests that it has been delivering the file infector Ramnit intermittently.

Durable protection against evolving malware

sLoad’s multi-stage attack chain, its use of mutated intermediate scripts and of BITS as an alternative protocol, and its generally polymorphic nature make it a piece of malware that can be quite tricky to detect. Now it has evolved into a new and polished version, Starslord, which retains sLoad’s most basic capabilities but does away with spyware capabilities in favor of new and more powerful features, posing an even higher risk.

Starslord can track and group affected machines based on the stage of infection, which can allow for unique infection paths. Interestingly, given the distinct reference to a fictional superhero, these groups can be thought of as universes in a multiverse. In fact, the malware uses a function called checkUniverse to determine if a host is an analyst machine.

Microsoft Threat Protection defends customers from sophisticated and continuously evolving threats like sLoad using multiple industry-leading security technologies that protect various attack surfaces. Through signal-sharing across multiple Microsoft services, Microsoft Threat Protection delivers comprehensive protection for identities, endpoints, data, apps, and infrastructure.

On endpoints, behavioral blocking and containment capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) ensure durable protection against evolving threats. Through cloud-based machine learning and data science informed by threat research, Microsoft Defender ATP can spot and stop malicious behaviors from threats, both old and new, in real-time.

 

 

Sujit Magar

Microsoft Defender ATP Research Team


How companies can prepare for a heightened threat environment

January 20th, 2020

With high levels of political unrest in various parts of the world, it’s no surprise we’re also in a period of increased cyber threats. In the past, a company’s name, political affiliations, or religious affiliations might push the risk needle higher. However, in the current environment any company could be a potential target for a cyberattack. Companies of all shapes, sizes, and varying security maturity are asking what they could and should be doing to ensure their safeguards are primed and ready. To help answer these questions, I created a list of actions companies can take and controls they can validate in light of the current level of threats—and during any period of heightened risk—through the Microsoft lens:

  • Implement Multi-Factor Authentication (MFA)—It simply cannot be said enough—companies need MFA. The security posture at many companies is hanging by the thread of passwords that are weak, shared across social media, or already for sale. MFA is now the standard authentication baseline and is critical to basic cyber hygiene. If real estate is “location, location, location,” then cybersecurity is “MFA, MFA, MFA.” To learn more, read How to implement Multi-Factor Authentication (MFA).
  • Update patching—Check your current patch status across all environments. Make every attempt to patch all vulnerabilities and focus on those with medium or higher risk if you must prioritize. Patching is critically important as the window between discovery and exploit of vulnerabilities has shortened dramatically. Patching is perhaps your most important defense and one that, for the most part, you control. (Most attacks utilize known vulnerabilities.)
  • Manage your security posture—Check your Secure Score and Compliance Score for Office 365, Microsoft 365, and Azure. Also, take steps to resolve all open recommendations. These scores will help you to quickly assess and manage your configurations. See “Resources and information for detection and mitigation strategies” below for additional information. (Manage your scores over time and use them as a monitoring tool for unexpected consequences from changes in your environment.)
  • Evaluate threat detection and incident response—Increase your threat monitoring and anomaly detection activities. Evaluate your incident response from an attacker’s perspective. For example, attackers often target credentials. Is your team prepared for this type of attack? Are you able to engage left of impact? Consider conducting a tabletop exercise to consider how your organization might be targeted specifically.
  • Resolve testing issues—Review recent penetration test findings and validate that all issues were closed.
  • Validate distributed denial of service (DDoS) protection—Does your organization have the protection you need or stable access to your applications during a DDoS attack? These attacks have continued to grow in frequency, size, sophistication, and impact. They often are utilized as a “cyber smoke screen” to mask infiltration attacks. Your DDoS protection should be always on, automated for network layer mitigation, and capable of near real-time alerting and telemetry.
  • Test your resilience—Validate your backup strategies and plans, ensuring offline copies are available. Review your most recent test results and conduct additional testing if needed. If you’re attacked, your offline backups may be your strongest or only lifeline. (Our incident response teams often find companies are surprised to discover their backup copies were accessible online and were either encrypted or destroyed by the attacker.)
  • Prepare for incident response assistance—Validate you have completed any necessary due diligence and have appropriate plans to secure third-party assistance with responding to an incident/attack. (Do you have a contract ready to be signed? Do you know who to call? Is it clear who will decide help is necessary?)
  • Train your workforce—Provide a new, targeted round of training and awareness information for your employees. Make sure they are vigilant about not clicking unusual links in emails and messages or visiting unusual or risky URLs/websites, and that they use strong passwords. Emphasize that protecting the company also helps protect the broader financial economy and is a matter of national security.
  • Evaluate physical security—Step up validation of physical IDs at entry points. Ensure physical reviews of the external perimeter at key offices and datacenters are carried out, and stay alert to unusual indicators of access attempts or physical attacks. (The “see something/say something” rule is critically important.)
  • Coordinate with law enforcement—Verify you have the necessary contact information for your local law enforcement, as well as for your local FBI office/agent (federal law enforcement). (Knowing who to call and how to reach them is a huge help in a crisis.)
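
As referenced in the “Manage your security posture” item above, Secure Score can also be pulled programmatically. The sketch below is a minimal, illustrative example using the Microsoft Graph security API; it assumes an access token with the SecurityEvents.Read.All permission has already been acquired (token acquisition is not shown), and the placeholder token value is hypothetical.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

    # Secure Score snapshots are typically returned newest first; take the first entry.
    resp = requests.get(f"{GRAPH}/security/secureScores?$top=1", headers=headers)
    resp.raise_for_status()
    snapshot = resp.json()["value"][0]

    print(f"Secure Score: {snapshot['currentScore']} / {snapshot['maxScore']}")
    print("As of:", snapshot["createdDateTime"])

Tracking this value over time, for example from a scheduled job, gives you the monitoring signal described in that item.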

The hope, of course, is that no company will face an attack at all. Still, taking the actions noted above is good advice for any threat climate—and particularly in times of increased risk. Consider creating a checklist template you can edit as you learn new ways to lower your risk and tighten your security. Be sure to share your checklist with industry organizations such as FS-ISAC. Finally, if you have any questions, reach out to your account team at Microsoft.

Resources and information for detection and mitigation strategies

Bookmark the Security blog to keep up with our expert coverage on security matters, and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

About the author

Lisa Lee is a former U.S. banking regulator who helped financial institutions of all sizes prepare their defenses against cyberattacks and reduce their threat landscape. In her current role with Microsoft, she advises Chief Information Security Officers (CISOs) and other senior executives at large financial services companies on cybersecurity, compliance, and identity. She utilizes her unique background to share insights about preparing for the current cyber threat landscape.

The post How companies can prepare for a heightened threat environment appeared first on Microsoft Security.

Changing the monolith—Part 2: Whose support do you need?

January 16th, 2020 No comments

In Changing the monolith—Part 1: Building alliances for a secure culture, I explored how security leaders can build alliances and why a commitment to change must be signaled from the top. But whose support should you recruit in the first place? In Part 2, I address considerations for the cybersecurity team itself, the organization’s business leaders, and the employees whose buy-in is critical.

Build the right cybersecurity team

It could be debated that the concept of a “deep generalist” is an oxymoron. The analogy I frequently find myself making is you would never ask a dermatologist to perform a hip replacement. A hip replacement is best left to an orthopedic surgeon who has many hours of hands-on experience performing hip replacements. This does not lessen the importance of the dermatologist, who can quickly identify and treat potentially lethal diseases such as skin cancer.

Similarly, not every cybersecurity and privacy professional is deep in all subjects such as governance, technology, law, organizational dynamics, and emotional intelligence. No person is born a specialist.

If you are looking for someone who is excellent at threat prevention, detection, and incident response, hire someone who specializes in those specific tasks and has demonstrated experience and competency. Likewise, be cautious of promoting cybersecurity architects to the role of Chief Information Security Officer (CISO) if they have not demonstrated strategic leadership with the social aptitude to connect with other senior leaders in the organization. CISOs, after all, are not technology champions as much as they are business leaders.

Keep business leaders in the conversation

Leaders can enhance their organizations’ security stance by sending a top-down message across all business units that “security begins with me.” One way to send this message is to regularly brief the executive team and the board on cybersecurity and privacy risks.

Keep business leaders accountable for security.

These should not be product status reports, but briefings on key performance indicators (KPIs) of risk. Business leaders must help define what the organization considers to be its top risks.

Here are three ways to guide these conversations:

  1. Evaluate the existing cyber-incident response plan within the context of the overall organization’s business continuity plan. Elevate cyber-incident response plans to account for major outages, severe weather, civil unrest, and epidemics—which all place similar, if not identical, stresses on the business. Ask leadership what they believe the “crown jewels” to be, so you can prioritize your approach to data protection. The team responsible for identifying the “crown jewels” should include senior management from the lines of business and administrative functions.
  2. Review the cybersecurity budget with a business case and a strategy in mind. Many times, security budgets take a backseat to other IT or business priorities, resulting in companies being unprepared to deal with risks and attacks. An annual review of cybersecurity budgets tied to what looks like a “good fit” for the organization is recommended.
  3. Reevaluate cyber insurance on an annual basis and revisit its use and requirements for the organization. Ensure that it’s effective against attacks that could be considered “acts of war,” which might otherwise not be covered by the organization’s policy. Review your policy and ask: What happens if the threat actor was a nation state aiming for another nation state, placing your organization in the crossfire?

Gain buy-in through a frictionless user experience

“Shadow IT” is a persistent problem when there is no sanctioned way for users to collaborate with the outside world. Similarly, users save and hoard email when an overly zealous data retention policy deletes their messages after 30 days.

Digital transformation introduces a sea change in how cybersecurity is implemented. It’s paramount to provide users with the most frictionless experience available, adopting mobile-first, cloud-first philosophies.

Ignoring the user experience in your change implementation plan will only lead users to identify clever ways to circumvent frustrating security controls. Look for ways to prioritize the user experience even while meeting security and compliance goals.

Incremental change versus tearing off the band-aid

Imagine slowly replacing the interior and exterior components of your existing vehicle one by one until you have a “new” car. It doesn’t make sense: You still have to drive the car, even while the replacements are being performed!

Similarly, I’ve seen organizations take this approach in implementing change, attempting to create a modern workplace over a long period of time. However, this draws out complex, multi-platform headaches for months and years, leading to user confusion, loss of confidence in IT, and lost productivity. You wouldn’t “purchase” a new car this way; why take this approach for your organization?

Rather than mixing old parts with new parts, you would save money, shop time, and operational (and emotional) complexity by simply trading in your old car for a new one.

Fewer organizations take this alternative approach of “tearing off the band-aid.” If the user experience is frictionless, more efficient, and enhances the ease of data protection, an organization’s highly motivated employee base will adapt much more easily.

Stay tuned and stay updated

Stay tuned for more! In my next installments, I will cover the topics of process and technology, respectively, and their role in changing the security monolith. Technology on its own solves nothing. What good are building supplies and tools without a blueprint? Similarly, process is the orchestration of the effort, and is necessary to enhance an organization’s cybersecurity, privacy, compliance, and productivity.

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Changing the monolith—Part 2: Whose support do you need? appeared first on Microsoft Security.

Categories: Compliance, cybersecurity Tags:

Introducing Microsoft Application Inspector

January 16th, 2020 No comments

Modern software development practices often involve building applications from hundreds of existing components, whether they’re written by another team in your organization, an external vendor, or someone in the open source community. Reuse has great benefits, including time-to-market, quality, and interoperability, but sometimes brings the cost of hidden complexity and risk.

You trust your engineering team, but the code they write often accounts for only a tiny fraction of the entire application. How well do you understand what all those external software components actually do? You may find that you’re placing as much trust in each of the thousands of contributors to those components as you have in your in-house engineering team.

At Microsoft, our software engineers use open source software to provide our customers high-quality software and services. Recognizing the inherent risks in trusting open source software, we created a source code analyzer called Microsoft Application Inspector to identify “interesting” features and metadata, like the use of cryptography, connecting to a remote entity, and the platforms it runs on.

Application Inspector differs from more typical static analysis tools in that it isn’t limited to detecting poor programming practices; rather, it surfaces interesting characteristics in the code that would otherwise be time-consuming or difficult to identify through manual introspection. It then simply reports what’s there, without judgement.

For example, consider this snippet of Python source code:
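
A minimal sketch of the kind of snippet in question (the URL, file path, and shell command here are hypothetical stand-ins) could look like this:

    import subprocess
    import urllib.request

    # Download content from a remote URL.
    url = "https://example.com/tool.bin"
    data = urllib.request.urlopen(url).read()

    # Write the downloaded content to the local file system.
    path = "/tmp/tool.bin"
    with open(path, "wb") as handle:
        handle.write(data)

    # Execute a shell command to list details of the file that was written.
    subprocess.run(["ls", "-l", path], check=True)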

Here we can see a program that downloads content from a URL, writes it to the file system, and then executes a shell command to list details of that file. If we run this code through Application Inspector, we’ll see the following features identified, which tells us a lot about what the code can do:

  • FileOperation.Write
  • Network.Connection.Http
  • Process.DynamicExecution

In this small example, it would be trivial to examine the snippet manually to identify those same features, but many components contain tens of thousands of lines of code, and modern web applications often use hundreds of such components. Application Inspector is designed to be used individually or at scale and can analyze millions of lines of source code from components built using many different programming languages. It’s simply infeasible to attempt to do this manually.

Application Inspector is positioned to help in key scenarios

We use Application Inspector to identify key changes to a component’s feature set over time (version to version), which can indicate anything from an increased attack surface to a malicious backdoor. We also use the tool to identify high-risk components and those with unexpected features that require additional scrutiny, under the theory that a vulnerability in a component that is involved in cryptography, authentication, or deserialization would likely have higher impact than others.

Using Application Inspector

Application Inspector is a cross-platform, command-line tool that can produce output in multiple formats, including JSON and interactive HTML. Here is an example of an HTML report:

Each icon in the report above represents a feature that was identified in the source code. That feature is expanded on the right-hand side of the report, and by clicking any of the links, you can view the source code snippets that contributed to that identification.

Each feature is also broken down into more specific categories and an associated confidence, which can be accessed by expanding the row.

Application Inspector comes with hundreds of feature detection patterns covering many popular programming languages, with good support for the following types of characteristics:

  • Application frameworks (development, testing)
  • Cloud / Service APIs (Microsoft Azure, Amazon AWS, and Google Cloud Platform)
  • Cryptography (symmetric, asymmetric, hashing, and TLS)
  • Data types (sensitive, personally identifiable information)
  • Operating system functions (platform identification, file system, registry, and user accounts)
  • Security features (authentication and authorization)

Get started with Application Inspector

Application Inspector can identify interesting features in source code, enabling you to better understand the software components that your applications use. Application Inspector is open source, cross-platform (.NET Core), and can be downloaded at github.com/Microsoft/ApplicationInspector. We welcome all contributions and feedback.

The post Introducing Microsoft Application Inspector appeared first on Microsoft Security.

Categories: Automation, Security Intelligence Tags:

Security baseline (FINAL) for Chromium-based Microsoft Edge, version 79

January 15th, 2020 No comments

Microsoft is pleased to announce the enterprise-ready release of the recommended security configuration baseline settings for the next version of Microsoft Edge based on Chromium, version 79. The settings recommended in this baseline are identical to the ones we recommended in the version 79 draft, minus one setting that we have removed and that we discuss below. We continue to welcome feedback through the Baselines Discussion site.

The baseline package is now available as part of the Security Compliance Toolkit. Like all our baseline packages, the downloadable baseline package includes importable GPOs, a script to apply the GPOs to local policy, a script to import the GPOs into Active Directory Group Policy, and all the recommended settings in spreadsheet form, as Policy Analyzer rules, and as GP Reports.

Microsoft Edge is being rebuilt with the open-source Chromium project, and many of its security configuration options are inherited from that project. These Group Policy settings are entirely distinct from those for the original version of Microsoft Edge built into Windows 10: they are in different folders in the Group Policy editor and they reference different registry keys. The Group Policy settings that control the new version of Microsoft Edge are located under “Administrative Templates\Microsoft Edge,” while those that control the current version of Microsoft Edge remain located under “Administrative Templates\Windows Components\Microsoft Edge.” You can download the latest policy templates for the new version of Microsoft Edge from the Microsoft Edge Enterprise landing page. To learn more about managing the new version of Microsoft Edge, see Configure Microsoft Edge for Windows.

As with our current Windows and Office security baselines, our recommendations for Microsoft Edge configuration follow a streamlined and efficient approach to baseline definition when compared with the baselines we published before Windows 10. The foundation of that approach is essentially this:

  • The baselines are designed for well-managed, security-conscious organizations in which standard end users do not have administrative rights.
  • A baseline enforces a setting only if it mitigates a contemporary security threat and does not cause operational issues that are worse than the risk it mitigates.
  • A baseline enforces a default only if it is otherwise likely to be set to an insecure state by an authorized user:
    • If a non-administrator can set an insecure state, enforce the default.
    • If setting an insecure state requires administrative rights, enforce the default only if it is likely that a misinformed administrator will otherwise choose poorly.

(For further explanation, see the “Why aren’t we enforcing more defaults?” section in this blog post.)

Version 79 of the Chromium-based version of Microsoft Edge has 216 enforceable Computer Configuration policy settings and another 200 User Configuration policy settings. Following our streamlined approach, our recommended baseline configures a grand total of eleven Group Policy settings. You can find full documentation in the download package’s Documentation subdirectory.

The one difference between this baseline and the version 79 draft is that we have removed the recommendation to disable “Force Microsoft Defender SmartScreen checks on downloads from trusted sources.” By default, SmartScreen will perform these checks. While performing checks on files from trusted sources increases the likelihood of false positives – particularly from intranet sources that host files that are seldom if ever seen in the outside world – we have decided not to apply that decision to all customers adopting our baseline. Depending on who can store files in locations that are considered “trusted sources” and the rigor they apply to restricting what gets stored there, internal sites might in fact end up hosting untrustworthy content that should be checked. Our baseline therefore neither enables nor disables the setting. Organizations choosing to disable this setting can therefore do so without contradicting our baseline recommendations.

Categories: Uncategorized Tags:

Announcing MSRC 2019 Q4 Security Researcher Leaderboard

January 15th, 2020 No comments

Following the first Security Researcher Quarterly Leaderboard we published in October 2019, we are excited to announce the MSRC Q4 2019 Security Researcher Leaderboard, which shows the top contributing researchers for the last quarter. In each quarterly leaderboard, we recognize the security researchers who ranked at or above the 95th percentile line based on the …

Announcing MSRC 2019 Q4 Security Researcher Leaderboard Read More »

The post Announcing MSRC 2019 Q4 Security Researcher Leaderboard appeared first on Microsoft Security Response Center.

How to implement Multi-Factor Authentication (MFA)

January 15th, 2020 No comments

Another day, another data breach. If the regular drumbeat of leaked and phished accounts hasn’t persuaded you to switch to Multi-Factor Authentication (MFA) already, maybe the usual January rush of ‘back to work’ password reset requests is making you reconsider. When such an effective option for protecting accounts is available, why wouldn’t you deploy it straight away?

The problem is that deploying MFA at scale is not always straightforward. There are technical issues that may hold you up, but the people side is where you have to start. The eventual goal of an MFA implementation is to enable it for all your users on all of your systems all of the time, but you won’t be able to do that on day one.

To successfully roll out MFA, start by being clear about what you’re going to protect, decide what MFA technology you’re going to use, and understand what the impact on employees is going to be. Otherwise, your MFA deployment might grind to a halt amid complaints from users who run into problems while trying to get their job done.

Before you start on the technical side, remember that delivering MFA across a business is a job for the entire organization, from the security team to business stakeholders to IT departments to HR and to corporate communications and beyond, because it has to support all the business applications, systems, networks and processes without affecting workflow.

Campaign and train

Treat the transition to MFA like a marketing campaign where you need to sell employees on the idea—as well as provide training opportunities along the way. It’s important for staff to understand that MFA is there to support them and protect their accounts and all their data, because that may not be their first thought when met with changes to the way they sign in to the tools they use every day. If you run an effective internal communications campaign that makes it clear to users what they need to do and, more importantly, why they need to do it, you’ll avoid them seeing MFA as a nuisance or misunderstanding it as ‘big brother’ company tracking.

The key is focusing on awareness. In addition to sending emails, put up posters in the elevators and hang banner ads in your buildings, all explaining why you’re making the transition to MFA. Make it very clear what users will need to do and where they can find instructions, documentation, and support.

Also, provide FAQs and training videos, along with optional training sessions or opportunities to opt in to an early pilot group (especially if you can offer them early access to a new software version that will give them features they need). Recognize that MFA is more work for them than just using a password, and that they will very likely be inconvenienced. Unless you are able to use biometrics on every device they will have to get used to carrying a security key or a device with an authenticator app with them all the time, so you need them to understand why MFA is so important.

It’s not surprising that users can be concerned about a move to MFA. After all, MFA has sometimes been done badly in the consumer space. They’ll have seen stories about social networks abusing phone numbers entered for security purposes for marketing, or about users locked out of their accounts while travelling and unable to get a text message. You’ll need to reassure users who have had bad experiences with consumer MFA and be open to feedback from employees about the impact of MFA policies. Like all tech rollouts, this is a process.

If you’re part of an international business you have more to do, as you need to account for global operations. That needs wider buy-in and a bigger budget, including language support if you must translate training and support documentation. If you don’t know where to start, Microsoft provides communication templates and user documentation you can customize for your organization.

Start with admin accounts

At a minimum, you want to use MFA for all your admins, so start with privileged users. Administrative accounts are your highest value targets and the most urgent to secure, but you can also treat them as a proof of concept for wider adoption. Review who these users are and what privileges they have—there are probably more accounts than you expect with far more privileges than are really needed.
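
One way to get a concrete inventory of those privileged accounts is to enumerate directory role memberships. The sketch below is illustrative only: it uses the Microsoft Graph directoryRoles endpoints and assumes an already-acquired access token (placeholder shown) with a directory read permission such as Directory.Read.All.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

    # List the directory roles that are activated in the tenant.
    roles = requests.get(f"{GRAPH}/directoryRoles", headers=headers).json()["value"]

    for role in roles:
        members = requests.get(
            f"{GRAPH}/directoryRoles/{role['id']}/members", headers=headers
        ).json()["value"]
        print(f"{role['displayName']}: {len(members)} member(s)")
        for member in members:
            # Members can be users, groups, or service principals.
            print("   ", member.get("userPrincipalName") or member.get("displayName"))

Reviewing this output against who actually needs each role is a quick way to find accounts to scope down before (or while) you enforce MFA on them.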

At the same time, look at key business roles where losing access to email—or having unauthorized emails sent—will have a major security impact. Your CEO, CFO, and other senior leaders need to move to MFA to protect business communications.

Use what you’ve learned from rolling out MFA to these high-value groups to plan a pilot deployment—one that includes employees from across the business who require different levels of security access—so your final MFA deployment is optimized for mainstream employees without hampering the productivity of those working with more sensitive information, whether that’s the finance team handling payroll or developers with commit rights. Consider how you will cover contractors and partners who need access as well.

Plan for wider deployment

Start by looking at what systems you have that users need to sign in to that you can secure with MFA. Remember that this includes on-premises systems—you can incorporate MFA into your existing remote access options using Active Directory Federation Services (AD FS) or Network Policy Server, and use Azure Active Directory (Azure AD) Application Proxy to publish applications for cloud access.

Concentrate on finding any networks or systems where deploying MFA will take more work (for example, if SAML authentication is used) and especially on discovering vulnerable apps that don’t support anything except passwords because they use legacy or basic authentication. This includes older email systems using MAPI, EWS, IMAP4, POP3, SMTP, internal line of business applications, and elderly client applications. Upgrade or update these to support modern authentication and MFA where you can. Where this isn’t possible, you’ll need to restrict them to use on the corporate network until you can replace them, because critical systems that use legacy authentication will block your MFA deployment.

Be prepared to choose which applications to prioritize. As well as an inventory of applications and networks (including remote access options), look at processes like employee onboarding and approval of new applications. Test how applications work with MFA, even when you expect the impact to be minimal. Create a new user without admin access, use that account to sign in with MFA and go through the process of configuring and using the standard set of applications staff will use to see if there are issues. Look at how users will register for MFA and choose which methods and factors to use, and how you will track and audit registrations. You may be able to combine MFA registration with self-service password reset (SSPR) in a ‘one stop shop,’ but it’s important to get users to register quickly so that attackers can’t take over their account by registering for MFA, especially if it’s for a high-value application they don’t use frequently. For new employees, you should make MFA registration part of the onboarding process.

Make MFA easier on employees

MFA is always going to be an extra step, but you can choose MFA options with less friction, like using biometrics in devices or FIDO2-compliant factors such as Feitian or Yubico security keys. Avoid using SMS if possible. Phone-based authentication apps like the Microsoft Authenticator app are an option, and they don’t require a user to hand over control of their personal device. If you have employees who travel to locations where they may not have connectivity, choose OATH verification codes, which are generated on the device, rather than push notifications, which are convenient but require the user to be online. You can even use automated voice calls: letting users press a button on the phone keypad is less intrusive than giving them a passcode to type in on screen.
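
For context, the OATH verification codes mentioned above are standard time-based one-time passwords (TOTP, RFC 6238) that an authenticator app computes locally from a shared secret and the clock, which is why they work offline. A minimal sketch of that mechanism, using the third-party pyotp package (the secret here is generated on the spot purely for illustration), looks like this:

    import pyotp

    # In practice the shared secret is provisioned to the authenticator app
    # during MFA registration (for example, via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)  # 6 digits, 30-second time step by default

    code = totp.now()  # the code the user would read from the app
    print("Current code:", code)

    # The verifier recomputes the expected code from the same secret and clock.
    print("Code accepted?", totp.verify(code))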

Offer a choice of alternative factors so people can pick the one that best suits them. Biometrics are extremely convenient, but some employees may be uncomfortable using their fingerprint or face for corporate sign-ins and may prefer receiving an automated voice call.

Make sure that you include mobile devices in your MFA solution, managing them through Mobile Device Management (MDM), so you can use conditional and contextual factors for additional security.

Avoid making MFA onerous; choose when the extra authentication is needed to protect sensitive data and critical systems rather than applying it to every single interaction. Consider using conditional access policies and Azure AD Identity Protection, which allows for triggering two-step verification based on risk detections, as well as pass-through authentication and single-sign-on (SSO).
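
As one hedged illustration of a conditional access policy, the sketch below creates a report-only policy that requires MFA for users holding the Global Administrator role, via the Microsoft Graph conditional access API. It assumes an access token with the Policy.ReadWrite.ConditionalAccess permission (acquisition not shown); the role template ID is the commonly documented Global Administrator value, but verify it for your tenant before relying on it.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    headers = {
        "Authorization": "Bearer <access-token>",  # placeholder token
        "Content-Type": "application/json",
    }

    # Report-only policy: require MFA for Global Administrators on all apps.
    policy = {
        "displayName": "Require MFA for administrators (report-only)",
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {
                # Global Administrator role template ID (verify for your tenant).
                "includeRoles": ["62e90394-69f5-4237-9190-012177145e10"]
            },
            "applications": {"includeApplications": ["All"]},
            "clientAppTypes": ["all"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies", headers=headers, json=policy
    )
    resp.raise_for_status()
    print("Created policy:", resp.json()["id"])

Starting in report-only mode lets you watch the sign-in logs for impact before switching the state to enabled.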

If MFA means that a user accessing a non-critical file share or calendar on the corporate network from a known device that has all the current OS and antimalware updates sees fewer challenges—and no longer faces the burden of 90-day password resets—then you can actually improve the user experience with MFA.

Have a support plan

Spend some time planning how you will handle failed sign-ins and account lockouts. Even with training, some failed sign-ins will be legitimate users getting it wrong and you need to make it easy for them to get help.

Similarly, have a plan for lost devices. If a security key is lost, the process for reporting that needs to be easy and blame free, so that employees will notify you immediately so you can expire their sessions and block the security key, and audit the behavior of their account (going back to before they notified you of the loss). Security keys that use biometrics may be a little more expensive, but if they’re lost or stolen, an attacker can’t use them. If possible, make it a simple, automated workflow, using your service desk tools.
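
Expiring a user’s sessions after a lost-device report is a good candidate for that automated workflow. A minimal sketch using the Microsoft Graph revokeSignInSessions action is shown below; it assumes an already-acquired token with an appropriate permission (for example, User.ReadWrite.All), and the user name is a hypothetical example.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

    user = "alice@contoso.com"  # the user who reported the lost security key

    # Invalidate the user's refresh tokens and session cookies, forcing
    # re-authentication on every device. Blocking or removing the lost
    # authentication method itself is a separate step.
    resp = requests.post(f"{GRAPH}/users/{user}/revokeSignInSessions", headers=headers)
    resp.raise_for_status()
    print("Sessions revoked:", resp.json().get("value"))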

You also need to quickly get them connected another way so they can get back to work. Automatically enrolling employees with a second factor can help. Make that second factor convenient enough that they can still do their job, but not so convenient that they keep using it and never report the loss; one easy option is allowing one-time bypasses. Similarly, make sure you’re set up to automatically deprovision entitlements and factors when employees change roles or leave the organization.

Measure and monitor

As you deploy MFA, monitor the rollout to see what impact it has on both security and productivity and be prepared to make changes to policies or invest in better hardware to make it successful. Track security metrics for failed login attempts, credential phishing that gets blocked and privilege escalations that are denied.
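
One hedged sketch of such tracking pulls recent Azure AD sign-in events from the Microsoft Graph audit log API and counts failures client-side. It assumes an access token with AuditLog.Read.All (plus Directory.Read.All) and an Azure AD Premium tenant; the field names follow the documented signIn resource, but treat the exact query options as something to confirm for your environment.

    from collections import Counter

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

    # Fetch the most recent sign-in events; errorCode 0 indicates success.
    signins = requests.get(
        f"{GRAPH}/auditLogs/signIns?$top=500", headers=headers
    ).json()["value"]

    failures = [s for s in signins if s["status"]["errorCode"] != 0]
    print(f"{len(failures)} failed sign-ins out of {len(signins)} recent events")

    # Surface the accounts with the most failures for follow-up.
    for upn, count in Counter(f["userPrincipalName"] for f in failures).most_common(5):
        print(f"  {upn}: {count}")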

Your MFA marketing campaign also needs to continue during and after deployment, actively reaching out to staff and asking them to take part in polls or feedback sessions. Start that with the pilot group and continue it once everyone is using MFA.

Even when you ask for it, don’t rely on user feedback to tell you about problems. Check helpdesk tickets, logs, and audit options to see if it’s taking users longer to get into systems, or if they’re postponing key tasks because they’re finding MFA difficult, or if security devices are failing or breaking more than expected. New applications and new teams in the business will also mean that MFA deployment needs to be ongoing, and you’ll need to test software updates to see if they break MFA; you have to make it part of the regular IT process.

Continue to educate users about the importance of MFA, including running phishing training and phishing your own employees (with more training for those who are tricked into clicking through to fake links).

MFA isn’t a switch you flip; it’s part of a move to continuous security and assessment that will take time and commitment to implement. But if you approach it in the right way, it’s also the single most effective step you can take to improve security.

About the authors

Ann Johnson is the Corporate Vice President for Cybersecurity Solutions Group for Microsoft. She is a member of the board of advisors for FS-ISAC (The Financial Services Information Sharing and Analysis Center), an advisory board member for EWF (Executive Women’s Forum on Information Security, Risk Management & Privacy), and an advisory board member for HYPR Corp. Ann recently joined the board of advisors for Cybersecurity Ventures.

Christina Morillo is a Senior Program Manager on the Azure Identity Engineering Product team at Microsoft. She is an information security and technology professional with a background in cloud technologies, enterprise security, and identity and access. Christina advocates and is passionate about making technology less scary and more approachable for the masses. When she is not at work, or spending time with her family, you can find her co-leading Women in Security and Privacy’s NYC chapter and supporting others as an advisor and mentor. She lives in New York City with her husband and children.

Learn more

To find out more about Microsoft’s Cybersecurity Solutions, visit the Microsoft Security site, or follow Microsoft Security on Twitter at Microsoft Security Twitter or Microsoft WDSecurity Twitter.

To learn more about Microsoft Azure Identity Management solutions, visit this Microsoft overview page and follow our Identity blog. You can also follow us @AzureAD on Twitter.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How to implement Multi-Factor Authentication (MFA) appeared first on Microsoft Security.