
Understanding the performance impact of Spectre and Meltdown mitigations on Windows Systems

January 9th, 2018

Last week the technology industry and many of our customers learned of new vulnerabilities in the hardware chips that power phones, PCs and servers. We (and others in the industry) had learned of this vulnerability under nondisclosure agreement several months ago and immediately began developing engineering mitigations and updating our cloud infrastructure. In this blog, I'll describe the discovered vulnerabilities as clearly as I can, discuss what customers can do to help keep themselves safe, and share what we've learned so far about performance impacts.

What Are the New Vulnerabilities?

On Wednesday, Jan. 3, security researchers publicly detailed three potential vulnerabilities, collectively named Meltdown and Spectre. Several blogs have tried to explain these vulnerabilities further; a clear description can be found via Stratechery.

On a phone or a PC, this means malicious software could exploit the silicon vulnerability to access information in one software program from another. These attacks extend into browsers where malicious JavaScript deployed through a webpage or advertisement could access information (such as a legal document or financial information) across the system in another running software program or browser tab. In an environment where multiple servers are sharing capabilities (such as exists in some cloud services configurations), these vulnerabilities could mean it is possible for someone to access information in one virtual machine from another.

What Steps Should I Take to Help Protect My System?

Currently three exploits have been demonstrated as technically possible. In partnership with our silicon partners, we have mitigated those through changes to Windows and silicon microcode.

| Exploited Vulnerability | CVE | Exploit Name | Public Vulnerability Name | Windows Changes | Silicon Microcode Update ALSO Required on Host |
|---|---|---|---|---|---|
| Spectre | CVE-2017-5753 | Variant 1 | Bounds Check Bypass | Compiler change; recompiled binaries now part of Windows Updates. Edge & IE11 hardened to prevent exploit from JavaScript | No |
| Spectre | CVE-2017-5715 | Variant 2 | Branch Target Injection | Calling new CPU instructions to eliminate branch speculation in risky situations | Yes |
| Meltdown | CVE-2017-5754 | Variant 3 | Rogue Data Cache Load | Isolate kernel and user mode page tables | No |

Because Windows clients interact with untrusted code in many ways, including browsing webpages with advertisements and downloading apps, our recommendation is to protect all systems with Windows Updates and silicon microcode updates.

For Windows Server, administrators should ensure they have mitigations in place at the physical server level to ensure they can isolate virtualized workloads running on the server. For on-premises servers, this can be done by applying the appropriate microcode update to the physical server and, if you are using Hyper-V, updating it using our recent Windows Update release. If you are running on Azure, you do not need to take any steps to achieve virtualized isolation as we have already applied infrastructure updates to all servers in Azure that ensure your workloads are isolated from other customers running in our cloud. This means that other customers running on Azure cannot attack your VMs or applications using these vulnerabilities.

Windows Server customers, running either on-premises or in the cloud, also need to evaluate whether to apply additional security mitigations within each of their Windows Server VM guests or physical instances. These mitigations are needed when you are running untrusted code within your Windows Server instances (for example, you allow one of your customers to upload a binary or code snippet that you then run within your Windows Server instance) and you want to isolate the application binary or code to ensure it can't access memory within the Windows Server instance that it should not have access to. You do not need to apply these mitigations to isolate your Windows Server VMs from other VMs on a virtualized server, as they are instead only needed to isolate untrusted code running within a specific Windows Server instance.

We currently support 45 editions of Windows. Patches for 41 of them are available now through Windows Update. We expect the remaining editions to be patched soon. We are maintaining a table of editions and update schedule in our Windows customer guidance article.

Silicon microcode is distributed by the silicon vendor to the system OEM, which then decides to release it to customers. Some system OEMs use Windows Update to distribute such microcode; others use their own update systems. We are maintaining a table of system microcode update information here. Surface will be updated through Windows Update starting today.

 

Guidance on how to check and enable or disable these mitigations can be found in the Windows customer guidance article referenced above.
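That guidance centers on a handful of documented registry values (and a PowerShell verification module). As a rough, hedged illustration only, the Python sketch below reads the override values from the Memory Management key on a Windows machine; the key path and value names are taken from Microsoft's published guidance, so confirm them against the current article before relying on this.

```python
import winreg  # Windows-only standard library module

# Key path and value names as documented in Microsoft's speculative execution
# guidance; verify against the customer guidance article before relying on them.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def read_value(name: str):
    """Return a mitigation override value from HKLM, or None if it is not set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _value_type = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None

if __name__ == "__main__":
    override = read_value("FeatureSettingsOverride")
    mask = read_value("FeatureSettingsOverrideMask")
    if override is None:
        print("No override set: the OS default mitigation settings apply.")
    else:
        # Per the guidance, an override of 3 with a mask of 3 disables the
        # mitigations, while an override of 0 with a mask of 3 enables them.
        print(f"FeatureSettingsOverride={override}, FeatureSettingsOverrideMask={mask}")
```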

Performance

One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. We're performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.

Here is the summary of what we have found so far:

  • With Windows 10 on newer silicon (2016-era PCs with Skylake, Kaby Lake or newer CPU), benchmarks show single-digit slowdowns, but we don't expect most users to notice a change because these percentages are reflected in milliseconds.
  • With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
  • With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
  • Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.

For context, on newer CPUs such as Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 have more user-kernel transitions due to legacy design decisions, such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.

Conclusion

As you can tell, there is a lot to this topic of side-channel attack methods. A new exploit like this requires our entire industry to work together to find the best possible solutions for our customers. The security of the systems our customers depend upon and enjoy is a top priority for us. We're also committed to being as transparent and factual as possible to help our customers make the best possible decisions for their devices and the systems that run organizations around the world. That's why we've chosen to provide more context and information today and why we released updates and remediations as quickly as we could on Jan. 3. Our commitment to delivering the technology you depend upon, and to optimizing performance where we can, continues around the clock, and we will continue to communicate as we learn more.

-Terry


Application fuzzing in the era of Machine Learning and AI

January 3rd, 2018

Proactively testing software for bugs is not new. The earliest examples date back to the 1950s; fuzzing as we now refer to it, the injection of random inputs and commands into applications, made its debut quite literally on a dark and stormy night in 1988. Since then, application fuzzing has become a staple of the secure software development lifecycle (SDLC), and according to Gartner*, security testing is growing faster than any other security market, as AST solutions adapt to new development methodologies and increased application complexity.
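To make the basic idea concrete, here is a minimal, illustrative sketch of mutation-based fuzzing in Python. It is not any particular product's approach; the parse_record target is a hypothetical toy parser with a deliberately planted bug so the fuzzer has something to find.

```python
import random

def mutate(seed: bytes, max_flips: int = 8) -> bytes:
    """Return a copy of the seed with a few randomly chosen bytes replaced."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def parse_record(data: bytes) -> int:
    """Hypothetical target standing in for the application under test."""
    if len(data) < 4:
        raise ValueError("record too short")    # validated, expected error
    declared_length = data[0]
    # Deliberate bug for illustration: the declared length is trusted blindly.
    return data[4 + declared_length]            # IndexError when it is too large

def fuzz(target, seed: bytes, iterations: int = 10_000):
    """Feed random mutations of the seed to the target and collect crashes."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except ValueError:
            pass                                # handled input-validation error
        except Exception as exc:                # unexpected crash -> potential bug
            crashes.append((candidate, exc))
    return crashes

if __name__ == "__main__":
    found = fuzz(parse_record, b"\x04\x00\x00\x00ABCDE")
    print(f"{len(found)} crashing inputs found")
```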

We believe there is good reason for this. The overall security risk profile of applications has grown in lockstep with accelerated software development and application complexity. Hackers are also aware of the increased vulnerabilities and, as the recent Equifax breach highlights, the application layer is highly targeted. Despite this, the security and development groups within organizations cannot find easy alignment to implement application fuzzing.

While DevOps is transforming the speed at which applications are created, tested, and integrated with IT, that same efficiency hampers the ability to mitigate identified security risks and vulnerabilities, without impacting business priorities. This is exactly the promise that machine learning, artificial intelligence (AI), and the use of deep neural networks (DNN) are expected to deliver on in evolved software vulnerability testing.

Most customers I talk to see AI as a natural next step given that most software testing for bugs and vulnerabilities is either manual or prone to false positives. With practically every security product claiming to be machine learning and AI-enabled, it can be hard to understand which offerings can deliver real value over current approaches.

Adoption of the latest techniques for application security testing doesn't mean CISOs must become experts in machine learning. Companies like Microsoft are using the on-demand storage and computing power of the cloud, combined with experience in software development and data science, to build security vulnerability mitigation tools that embed this expertise in existing systems for developing, testing, and releasing code. It is important, however, to understand your existing environment, application inventory, and testing methodologies to capture tangible savings in cost and time. For many organizations, application testing relies on tools that use business logic and common coding techniques. These are notoriously error-prone and devoid of security expertise. For this latter reason, some firms turn to penetration testing experts and professional services. This can be a costly, manual approach to mitigation that lengthens software shipping cycles.

Use cases

Modern application security testing that is continuous and integrated with DevOps and SecOps can be transformative for business agility and security risk management. Consider these key use cases and whether your organization has embedded application security testing for each:

  • Digital Transformation – moving applications to the cloud creates the need to re-establish security controls and monitoring. Fuzzing can uncover errors and missed opportunities to shore up defenses. Automated and integrated fuzzing can further preserve expedited software shipping cycles and business agility.
  • Securing the Supply Chain – Open Source Software (OSS) and 3rd party applications are a common vector of attack, as we saw with Petya, so a testing regimen is a core part of a plan to manage 3rd party risk.
  • Risk Detection – whether building, maintaining, or refactoring applications on premises, the process and risk profile have become highly dynamic. Organizations need to be proactive to uncover bugs, holes and configuration errors on a continuous basis to meet both internal and regulatory risk management mandates.

Platform leverage

Of course, software development and testing are about more than just tools. The process to communicate risks to all stakeholders, and to act, is where the real benefit materializes. A barrier to effective application security testing is the highly siloed way that testing and remediation are conducted. Development waits for IT and security professionals to implement the changes, slowing deployment and time to market. Legacy application security testing is ready for disruption, and the built-in approach can deliver long-awaited efficiency in the development and deployment pipeline. Digital transformation, supply chain security, and risk detection all benefit from speed and agility. Let's consider the DevOps and SecOps workflows possible on a Microsoft-based application security testing framework:

  • DevOps – Continuous fuzzing built into the DevOps pipeline identifies bugs and feeds them to the continuous integration and deployment environment (i.e. Visual Studio Team Services and Team Foundation Server). Developers and stakeholders are uniformly advised of risky code and provided the option of running additional Azure-based fuzzing techniques. For apps in production that are found to be running risky code, IT pros can mitigate risks by using PowerShell and Group Policy (GPO) to enable the features of Windows Defender Exploit Guard. While the apps continue to run, the attack surface can be reduced, and connection scenarios which increase risk are blocked. This gives teams time to develop and implement mitigations without having to take the applications entirely offline.
  • SecOps – Azure-hosted containers and VMs, as well as on-premises machines, are scanned for risky applications and code, including OSS. The results inform Microsoft's various desktop, mobile, and server threat protection regimes, including application whitelisting. Endpoints can be scanned for the presence of the risky code, and administrators are informed through Azure Security Center. Mitigations can also be deployed to block the applications implicated and enforce conditional access through Azure Active Directory.

Cloud and AI

Machine learning and artificial intelligence are not new, but the relatively recent availability of graphics processing units (GPUs) has brought their potential to the mainstream by enabling faster (parallel) processing of large amounts of data. Our recently announced Microsoft Security Risk Detection (MSRD) service is a showcase of the power of the cloud and AI to evolve fuzz testing. In fact, Microsoft's award-winning work in a specialized area of AI called constraint solving has been 10 years in the making and was used to produce the world's first white-box fuzzer.

A key to effective application security testing is the inputs, or seeds, used to establish code paths and bring about crashes and bug discovery. These inputs can be static and predetermined or, in the case of MSRD, dynamic and mutated by training algorithms to generate relevant variations based on previous runs. While AI and constraint solving are used to tune the reasoning for finding bugs, Azure Resource Manager dynamically scales the required compute up or down, creating a fuzzing lab that is right-sized for the customer's requirements. The Azure-based approach also gives customers choices in running multiple fuzzers, in addition to Microsoft's own, so the customer gets value from several different methods of fuzzing.
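MSRD's white-box fuzzing rests on constraint solving, which is well beyond a blog sketch, but the general feedback loop of "keep the inputs that taught us something new" can be illustrated in a few lines of Python. Everything here (the tracer-based coverage measure, the corpus policy) is a simplified stand-in for illustration, not the MSRD algorithm.

```python
import random
import sys

def coverage_of(target, data: bytes) -> frozenset:
    """Run target(data) and record which (function, line) pairs executed."""
    hits = set()
    def tracer(frame, event, arg):
        if event == "line":
            hits.add((frame.f_code.co_name, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
    except Exception:
        pass                         # crashes are interesting too, but here we
                                     # only care about the code paths reached
    finally:
        sys.settrace(None)
    return frozenset(hits)

def mutate(seed: bytes) -> bytes:
    """Flip one randomly chosen byte of the seed."""
    data = bytearray(seed or b"\x00")
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def evolve_corpus(target, initial_seed: bytes, rounds: int = 2000):
    """Keep only mutants that exercise code paths no earlier input reached."""
    corpus = [initial_seed]
    seen = set(coverage_of(target, initial_seed))
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        cov = coverage_of(target, candidate)
        if not cov <= seen:          # new lines reached -> promote to seed
            corpus.append(candidate)
            seen |= cov
    return corpus
```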

The future

For Microsoft, application security testing is fundamental to a secure digital transformation. MSRD for Windows and Linux workloads is yet another example of our commitment to building security into every aspect of our platform. While our AI-based application fuzzing is unique, Microsoft Research is already upping the ante with a new project for neural fuzzing. Deep neural networks are an instantiation of machine learning that model the human brain. Their application can improve how MSRD identifies fuzzing locations and the strategies and parameters used. Integration with our security offerings is in the initial phases, and by folding in more capabilities over time we remove the walls between IT, developers, and security, making near real-time risk mitigation a reality. This is the kind of disruption that, as a platform company, Microsoft uniquely brings to application security testing for our customers, and it serves as further testament to the power of built-in.


* Gartner, "Magic Quadrant for Application Security Testing," published 28 February 2017, ID G00290926.


How public-private partnerships can combat cyber adversaries

December 13th, 2017

For several years now, policymakers and practitioners from governments, CERTs, and the security industry have been speaking about the importance of public-private partnerships as an essential part of combating cyber threats. It is impossible to attend a security conference without a keynote presenter talking about it. In fact, these conferences increasingly include sessions or entire tracks dedicated to the topic. During the three conferences I've attended since June (two US Department of Defense symposia and NATO's annual Information Symposium in Belgium), the message has been consistent: public-private information-sharing is crucial to combat cyber adversaries and protect users and systems.

Unfortunately, we stink at it. Information-sharing is the Charlie Brown football of cyber: we keep running toward it only to fall flat on our backs as attackers continually pursue us. Just wait 'til next year. It's become easier to talk about the need to improve information-sharing than to actually make it work, and it's now the technology industry's convenient crutch. Why? Because no one owns it, so no one is accountable. I suspect we each have our own definition of what information-sharing means, and of what success looks like. Without a sharp vision, can we really expect it to happen?

So, what can be done?

First, some good news: the security industry wants to do this – to partner with governments and CERTs. So, when we talk about it at conferences, or when a humble security advisor in Redmond blogs about it, it's because we are committed to finding a solution. Microsoft recently hosted BlueHat, where hundreds of malware hunters, threat analysts, reverse engineers, and product developers from the industry put aside competitive priorities to exchange ideas and build partnerships. In my ten years with Microsoft, I've directly participated in and led information-sharing initiatives that we established for the very purpose of advancing information assurance and protecting cyberspace. In fact, in 2013, Microsoft created a single legal and programmatic framework to address this issue, the Government Security Program.

For the partnership to work, it is important to understand and anticipate the requirements and needs of government agencies. For example, we need to consider cyber threat information, YARA rules, attacker campaign details, IP addresses, hosts, network traffic, and the like.
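As one small example of what sharing can look like in practice, a YARA rule received from a partner can be consumed programmatically. The sketch below uses the open-source yara-python bindings; the rule itself is entirely made up for illustration and is not a real campaign indicator.

```python
import yara  # assumes the open-source yara-python package is installed

# Hypothetical indicators supplied by a partner CERT, for illustration only.
RULE_SOURCE = r"""
rule Example_Campaign_Indicator
{
    meta:
        description = "Illustrative rule - not a real campaign signature"
    strings:
        $mutex = "Global\\example_mutex_name" ascii
        $c2    = "update.example-malicious.test" ascii
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

def scan_file(path: str):
    """Return the YARA matches for a single file on disk."""
    return rules.match(filepath=path)

if __name__ == "__main__":
    for match in scan_file("sample.bin"):
        print(match.rule, match.strings)
```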

What can governments and CERTs do to better partner with industry?

  • Be flexible, especially on the terms. Communicate. Prioritize. In my experience, the mean-time-to-signature for a government to negotiate an info-sharing agreement with Microsoft is between six months and THREE YEARS.
  • Prioritize information sharing. If this is already a priority, close the gap. I fear governments' attorneys are not sufficiently aware of how important the agreements are to their constituents. The information-sharing agreements may well be non-traditional agreements, but if information-sharing is truly a priority, let's standardize and expedite the agreements. Start by reading the 6 November Department of Homeland Security OIG report, "DHS Can Improve Cyber Threat Information-Sharing."
  • Develop and share with industry partners a plan to show how government agencies will consume and use our data. Let industry help government and CERTs improve our collective ROI. Before asking for data, let's ensure it will be impactful.
  • Develop KPIs to measure whether an information-sharing initiative is making a difference, quantitative or qualitative. In industry, we could do a better job at this, as we generally assume that we're providing information for the right reason. However, I frequently question whether our efforts make a real difference. Whether we look for mean-time-to-detection improvements or other metrics, this is an area for improvement.
  • Commit to feedback. Public-private information-sharing implies two-way communication. Understand that more companies are making feedback a criterion to justify continuing investment in these not-for-profit engagements. Feedback helps us justify up the chain the efficacy of efforts that we know are important. It also improves two-way trust and contributes to a virtuous cycle of more and closer information-sharing. At Microsoft, we require structured feedback as the price of entry for a few of our programs.
  • Balance interests in understanding today's and tomorrow's threats with an equal commitment to lock down what is currently owned. (My favorite.) Information-sharing usually includes going after threat actors and understanding what's coming next. That's important, but in an "assume compromise" environment, we need to continue to hammer on the basics:

    • Patch. If an integrator or on-site provider indicates patching and upgrading will break an application, and if that is used as an excuse not to patch, that is a problem. Authoritative third parties such as US-CERT, SANS, and others recommend a 48- to 72-hour patch cycle. Review www.microsoft.com/secure to learn more.

      • Review www.microsoft.com/sdl to learn more about tackling this issue even earlier in the IT development cycle, and how to have important conversations with contractors, subcontractors, and ISVs in the software and services supply chain.

    • Reduce administrative privilege. This is especially important for contractor or vendor accounts. Up to 90 percent of breaches come from credential compromise. This is largely caused by a lack of, or obsolete, administrative, physical, and technical controls over sensitive assets. Basic information-sharing demands that we focus on this. Here is guidance regarding securing access.

Ultimately, we in the industry can better serve governments and CERTs by incentivizing migrations to newer platforms that offer more built-in security and are more securely developed. As we think about improving information-sharing, let's be clear that this includes not only sharing technical details about threats and actors but also guidance on making governments fundamentally more secure on newer and more secure technologies.

 


A decade inside Microsoft Security

November 9th, 2017

Ten years ago, I walked onto Microsoft's Redmond campus to take a role on a team that partnered with governments and CERTs on cybersecurity. I'd just left a meaningful career in US federal government service because I thought it would be fascinating to experience first-hand the security challenges and innovation from the perspective of the IT industry, especially within Microsoft, given its presence around the US federal government. I fully expected to spend a year or two at Microsoft and then resume my federal career with useful IT industry perspectives on security. Two days after I started, Popular Science's annual "Ten worst jobs in science" survey came out, and I was surprised to see "Microsoft Security Grunt" in sixth place. Though the article was tongue-in-cheek, saluting those who take on tough challenges, the fact that we made this ignominious list certainly made me wonder if I'd made a huge mistake.

I spent much of my first few years hearing from government and enterprise executives that Microsoft was part of the security problem. Working with so many hard-working engineers, researchers, security architects, threat hunters, and developers trying to tackle these increasingly complex challenges, I disagreed. But we all recognized that we needed to do more to defend the ecosystem, and to better articulate our efforts. We'd been investing in security well before 2007, notably with the Trustworthy Computing Initiative and Security Development Lifecycle, and we continue to invest heavily in technologies and people – we now employ over 3,500 people in security across the company. I rarely hear anymore that we are perceived as a security liability, but our work isn't done. Ten years later, I'm still here, busier than ever, delaying my long-expected return to federal service, helping enterprise CISOs secure their environments, their users, and their data.

Complexity vs. security

Is it possible, however, that our industry's investments in security have created another problem – that of complexity? Have we innovated our way into a more challenging situation? My fellow security advisors at Microsoft have shared customer frustrations over the growing security vendor presence in their environments. While these different technologies may solve specific requirements, in doing so they create a management headache. Twice this week in Redmond, CISOs from large manufacturers challenged me to help them better understand security capabilities they already owned from Microsoft but weren't aware of. They sought to use this discovery process to identify opportunities to rationalize their security vendor presence. As one CISO said, "Just help me simplify all of this."

There is a large ecosystem of very capable and innovative professionals delivering solutions into a vibrant and crowded security marketplace. With all of this IP, how can we best help CISOs use important innovation while reducing complexity in their environments? And, can we help them maximize value from their investments without sacrificing security and performance?

Best-of-suite capabilities

Large enterprises may employ up to 100 vendors' technologies to handle different security functions. Different vendors may handle identity and access management, data loss prevention, key management, service management, cloud application security, and so on. Many companies are now turning to machine learning and user behavior technologies. Many claim "best of breed" or "best in class" capabilities, and there is impressive innovation in the marketplace. Recognizing this, we have made acquisition a part of Microsoft's security strategy – since 2013 we've acquired companies like Aorato, Secure Islands, Adallom, and most recently Hexadite.

Microsoft's experience as a large global enterprise is similar to that of our enterprise customers. We've been working to rationalize the 100+ different security providers in our infrastructure to help us better manage our external dependencies and more efficiently manage budgets. We've been moving toward a default policy of "Microsoft first" security technology where possible in our environment. Doing so helps us standardize on newer and familiar technologies that complement each other.

That said, whether we build or buy, our focus is to deliver an overall best-of-suite approach to help customers deploy, maintain, monitor, and protect our enterprise products and services as securely as possible. We are investing heavily in the Intelligent Security Graph. It leverages our vast security intelligence, connects and correlates information, and uses advanced analytics to help detect and respond to threats faster. If you are already working with Microsoft to advance your productivity and collaboration needs by deploying Windows 10, Office 365, Azure, or other core enterprise services, you should make better use of these investments and reduce dependency on third-party solutions by taking advantage of built-in monitoring and detection capabilities in these solutions. A best-of-suite approach also lowers the costs and complexity of administering a security program, e.g. making vendor assessments and procurement easier, reducing training and learning curves, and standardizing on common dashboards.

Reducing complexity also requires that we make our security technologies easy to acquire and use. Here are some interesting examples of how our various offerings connect to each other and have built-in capabilities:

  • The Windows Defender Advanced Threat Protection (ATP) offering seamlessly integrates with O365 ATP to provide more visibility into adversary activity against devices and mailboxes, and to give your security teams more control over these resources. Watch this great video to learn more about the services' integration. Windows Defender ATP monitors behaviors on a device and sends alerts on suspicious activities. The console provides your security team with the ability to perform one-click actions such as isolating a machine, collecting a forensics package, and stopping and quarantining files. You can then track the kill chain into your O365 environment if a suspicious file on the device arrived via email. Once in O365 ATP, you can quarantine the email, detonate a potentially malicious payload, block the traffic from your environment, and identify other users who may have been targeted.
  • Azure Information Protection provides built-in capabilities to classify and label data, apply rights-management protections (which follow the data object), and give data owners and admins visibility into, and control over, where that data goes and whether recipients attempt to violate policy.

Thousands of companies around the world are innovating, competing, and partnering to defeat adversaries and to secure the computing ecosystem. No single company can do it all. But by making it as convenient as possible for you to acquire and deploy technologies that integrate, communicate and complement each other, we believe we can offer a best-of-suite benefit to help secure users, devices, apps, data, and infrastructure. Visit https://www.microsoft.com/secure to learn about our solutions and reach out to your local Microsoft representative to learn more about compelling security technologies that you may already own. For additional information, and to stay on top of our investments in security, bookmark this Microsoft Secure blog.


Mark McIntyre, CISSP, is an Executive Security Advisor (ESA) in the Microsoft Enterprise and Cybersecurity Group. Mark works with global public sector and commercial enterprises, helping them transform their businesses while protecting data and assets by moving securely to the cloud. As an ESA, Mark supports CISOs and their teams with cybersecurity reviews and planning. He also helps them understand Microsoft's perspectives on the evolving cyber threat landscape and how Microsoft defends its enterprise, employees, and users around the world.


SSN for authentication is all wrong

October 23rd, 2017

Unless you were stranded on a deserted island or participating in a zen digital fast, chances are you've heard plenty about the massive Equifax breach and the head-rolling fallout. In the flurry of headlines and advice about credit freezes, an important part of the conversation was lost: if we didn't misuse our social security numbers, losing them wouldn't be a big deal. Let me explain: most people, and that mainly includes some pretty high-up identity experts that I've met in my travels, don't understand the difference between identification and verification. In the real world, conflating those two points doesn't often have dire consequences. In the digital world, it's a huge mistake that can lead to severe impacts.

Isn't it all just authentication, you may ask? Well, yes, identification and verification are both parts of the authentication whole, but failure to understand the differences is where the mess comes in. However, one reason it's so hard for many of us to separate identification and verification is that historically we haven't had to. Think back to how humans authenticated to each other before the ability to travel long distances came into the picture. Our circle of acquaintances was pretty small, and we knew each other by sight and sound. Just by looking at your neighbor, Bob, you could authenticate him. If you met a stranger, chances are someone else in the village knew the stranger and could vouch for her.

The ability to travel long distances changed the equation a bit. We developed documents that provided verification during the initiation phase, for example when you have to bring a birth certificate to the DMV to get your initial driver's license, and ongoing identification like a unique ID and a photo. These documents served as a single identification and verification mechanism. And that was great! It worked fine for years, until the digital age.

The digital age changed the model because rather than one person holding a single license with their photo on it, we had billions of people trying to authenticate to billions of systems with simple credentials like user name and password. And no friendly local villager to vouch for us.

Who are you? Prove it!

This is where the difference between the two really starts to matter. Identification answers the question: Who are you? Your name is an identifier. It could also be an alias, such as your unique employee ID number.

Do you want your name to be private? Imagine meeting another parent at your kid's soccer game and refusing to tell them your name for security reasons. How about: "Oh, your new puppy is so adorable, what's her name?" And you respond, "If I told you, I'd have to kill you." Or you try to find an address in a town with no street signs because the town is super security conscious. Ridiculous, right? Identifiers are public specifically so we can share them to help identify things.

We also want consistency in our identifiers. Imagine if that town had street signs but changed the names of the streets every 24 hours for security reasons. And uniqueness: if every street had the same name, you'd still have a heck of a time finding the right address, wouldn't you?

Now that we're clear on what an identifier is, we can enumerate a few aspects that make up a really good one:

  • Public
  • Unchanging
  • Unique

In a town or public road, we have a level of trust that the street sign is correct because the local authorities have governance over road signs. Back in our village, we trust Bob is Bob because we can verify him ourselves. But in the digital world, things get pretty tricky: how do you verify someone or something you've never met before? Ask them to – Prove It!

We use these two aspects of authentication almost daily when we log into systems with a user ID (identification) and password (verification). How we verify in the real world can be public, unchanging, and unique because it's very hard to forge a whole person. Or to switch all the street signs in a town. But verification online is trickier. We need to be able to provide verification of who we are to a number of entities, many of whom aren't great at protecting data. And if the same verification is re-used across entities, and one loses it, attackers could gain access to every site where it was used. This is why experts strongly recommend using unique passwords for every website/app. This goes for those challenge questions too. Which can lead to some fun calls with customer service: "Oh, the town where I was born? It's: xja*21njaJK)`jjAQ^." At this point in time our father's middle name, first pet's name, town where we were born, school we went to, and address history should be assumed public; using them as secrets for verification doesn't make sense anymore.
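One practical way to act on this, sketched below in Python purely as an illustration, is to treat challenge answers the same way you treat passwords: random, unique per site, and stored in a password manager rather than memorized. The site names here are placeholders.

```python
import secrets

def new_challenge_answer(length_bytes: int = 18) -> str:
    """Generate a random, high-entropy string to use as a challenge answer."""
    return secrets.token_urlsafe(length_bytes)

# One disposable "town where you were born" per site, kept in a password
# manager. If one site leaks it, only that entry needs to be replaced.
answers = {site: new_challenge_answer() for site in ("bank", "insurer", "lender")}

for site, answer in answers.items():
    print(f"{site}: {answer}")
```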

If one site loses your digital verification info, no worries. You only used it for that site and can create new info for the next one. What if you couldn't change your password ever? It was permanent and also got lost during the Yahoo! breach? And it was the one you use at your bank, and for your college and car loans, and your health insurance? How would you feel?

So, with that in mind, you'd probably agree that the best digital verifiers are:

  • Private
  • Easily changed
  • Unique

Your turn

OK, now that you know the difference between identification and verification and the challenges of verification in a digital world, what do you think – is your SSN a better identifier or a verifier?

Categories: cybersecurity, Data Privacy, Tips & Talk